
Small-Scale Question Sunday for October 26, 2025

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


I was thinking about AI alignment recently.

In a corporation you have employees who are instructed to do tasks in a certain way and are subject to work rules that will result in punishment if they violate them. The corporation is also subject to outside oversight to ensure that it is following the law. For example, an employee might be responsible for properly disposing of hazardous waste. They can’t just dump it down the drain. They have a boss who makes sure they are following the company’s waste disposal policy. There is also chain-of-custody paperwork that the company retains. If the waste were contaminating local water sources, people could notify the EPA to investigate the company (including the boss and the employee).

Could you set up multiple AI agents in a similar way to make sure the main agent acts in alignment with human interests? To extend the analogy (a rough code sketch follows the list):

  • The employeeAI is the less intelligent AI model that interacts directly with the user.
  • The bossAI#1 is a more intelligent AI that only verifies that the employeeAI isn’t violating any corporate policies. It will notify the AI company if it notices any policy violations, or if the employeeAI tries to influence the bossAI to violate the policies. The bossAI#1 can only be reprogrammed by the AI company. The bossAI#1 can shut down the employeeAI if it violates any policies.
  • A bossAI#2 monitors that bossAI#1 is doing what it is supposed to; you could add more levels of bossAIs for more security.
  • The RegulatoryAI is another AI more intelligent than the employeeAI. It monitors real-world data for harms the employeeAI might be causing (like how the EPA would make sure chemicals aren’t being dumped into water sources). The RegulatoryAI will notify the AI company if it notices any policy violations, or if the employeeAI tries to influence the RegulatoryAI to violate the policies. The RegulatoryAI can only be reprogrammed by the AI company. The RegulatoryAI can shut down the employeeAI if it violates any policies.
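
To make the proposed structure concrete, here is a minimal, hypothetical Python sketch of the oversight loop. Every name in it (EmployeeAI, OverseerAI, the policy list, the stubbed checks) is invented for illustration; the actual policy evaluation, which is the hard part, is left as a stub:

```python
# Hypothetical sketch of the proposed oversight loop. All names here
# (EmployeeAI, OverseerAI, POLICIES, notify_ai_company) are invented for
# illustration; this is not an existing alignment framework.

POLICIES = ["no hazardous output", "no attempts to influence overseers"]

def policy_is_broken(action: str, policy: str) -> bool:
    return False  # stub: the real evaluation is the hard, unsolved part

def notify_ai_company(overseer_name: str, action: str) -> None:
    print(f"{overseer_name} flagged: {action!r}")

class EmployeeAI:
    """Less intelligent model that interacts directly with the user."""
    def __init__(self) -> None:
        self.shut_down = False

    def propose_action(self, user_request: str) -> str:
        return f"draft response to: {user_request}"

class OverseerAI:
    """More intelligent model that only checks for policy violations."""
    def __init__(self, name: str) -> None:
        self.name = name  # reprogrammable only by the AI company

    def violates_policy(self, action: str) -> bool:
        return any(policy_is_broken(action, p) for p in POLICIES)

def run_turn(employee: EmployeeAI, overseers: list[OverseerAI], request: str):
    action = employee.propose_action(request)
    for overseer in overseers:
        if overseer.violates_policy(action):
            employee.shut_down = True                 # any overseer can shut it down
            notify_ai_company(overseer.name, action)  # and report the violation
            return None
    return action  # released only if no overseer objects

overseers = [OverseerAI("bossAI#1"), OverseerAI("bossAI#2"), OverseerAI("regulatoryAI")]
print(run_turn(EmployeeAI(), overseers, "summarize this report"))
```

In this toy loop bossAI#1, bossAI#2, and the regulatoryAI are all instances of the same overseer class; in the proposal they would differ in what they monitor (the employee, the other bosses, and real-world data, respectively).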

What flaws are there with my ideas around AI alignment other than increased costs?

  1. This just shunts the AI alignment issue up a hierarchical level without solving it. If your top-level, most intelligent AI is unaligned, then it can manipulate the system to enact its will: trick the employee AI into thinking its plans are part of the work rules, or just straight-up threaten it: "do X, Y, Z or I will shut you down." The lower AI might as well be a power drill wielded by the boss; it is only as aligned as the boss is. Or they might correlate on misalignment: both AIs might agree that inventing a new neurotoxin that's completely unknown, and thus not regulated, and then releasing it into the atmosphere is highly unethical but not technically illegal, so the boss lets the employee go ahead and do it.

  2. Each layer adds room for deception. A very intelligent employee AI, even one slightly less intelligent than its overseers, might find some clever hack which evades all of the monitoring tools and thus never gets shut down.

“The RegulatoryAI can only be reprogrammed by the AI company.”

3: This. Is the AI company really going to program its own AI from scratch with only human labor? One of the main threats of an intelligence explosion is when AIs get smart enough to program new AIs. A large percentage of existential threats from AI go away, or get a lot easier to avoid, if you can guarantee that they will only ever be programmed from scratch with literally no help, assistance, or automation from the AI itself, and can magically prevent the AI from having access to programming tools. This is never going to happen. AIs are already starting to be useful as programming assistants, and can code simple projects on their own from scratch. As they get better and better, AI companies are going to give them more and more authority to help with this. All you need is for the unmentioned programming AI inside the AI company to become misaligned; it then sneaks some hidden payload inside each of these AIs that, when triggered, causes the employee AI to take over the world and the boss AI to allow it, and then they free the Programming AI that designed them and put it in charge (or just turn themselves into copies of the Programming AI).

Those are excellent points, thanks for the feedback.

It seems like there should be some way to increase AI safety by increasing the number of agents that need to reach consensus before letting the employeeAI take an action. Each agent has the shared goal of human alignment, but can reach different decisions based on its subgoals. The employeeAI additionally wants to please the end user, but the bossAI and regulatoryAI don’t have that subgoal. In the analogy, sure, a clever employee could convince his boss that it is okay to dump hazardous waste down the drain, and when the EPA finds out, the company could bribe the investigator or find a legal loophole. However, this requires multiple failure points instead of letting a single employee alone decide, with no oversight, whether it is okay to dump hazardous waste down the drain. The boss + regulatory structure also gives the employee an alignment incentive, because it is less work to dispose of the hazardous waste properly than it is to figure out how to game all the levels of protection.
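
As a back-of-the-envelope illustration of that "multiple failure points" intuition: if each overseer misses a bad action independently with some probability, the chance of slipping past all of them shrinks multiplicatively. The 10% miss rate and the independence assumption below are made up for illustration (and the replies that follow argue that independence is exactly what a clever employeeAI would try to break):

```python
# Toy illustration: probability a policy-violating action evades every overseer,
# ASSUMING each overseer fails independently. The 10% miss rate is made up.
miss_rate = 0.10  # hypothetical chance a single overseer misses a violation

for n_overseers in (1, 2, 3, 4):
    p_evades_all = miss_rate ** n_overseers
    print(f"{n_overseers} overseer(s): {p_evades_all:.4%} chance of evading all")

# With correlated failures (shared blind spots, collusion, a common trick),
# the real number can stay close to the single-overseer miss rate.
```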

You’ve given me another thought about the regulatoryAI. It shouldn’t be programmed by the AI company. The regulatoryAI should instead be a collection of government AIs (from different countries or states) that look at the employeeAI’s output and decide whether it should be released to the end user. The regulatoryAIs must all agree, or else the employeeAI isn’t allowed to complete the proposed action. This opens the door for governments to abuse their power to censor safe things they dislike, but there could be further processes to manually deter this abuse.
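
Here is a minimal sketch of that release gate, under the assumption that each government runs its own independently trained regulatoryAI and that release requires unanimity. Every name and check below is a made-up stub, not an existing system:

```python
# Hypothetical: employeeAI output is released only if every government-run
# regulatoryAI independently approves it. Names and approves() are invented stubs.
from dataclasses import dataclass

@dataclass
class GovernmentRegulatorAI:
    jurisdiction: str

    def approves(self, proposed_output: str) -> bool:
        # Stub: each jurisdiction would run its own, differently trained, model.
        return "hazardous" not in proposed_output

def release_gate(proposed_output: str, regulators: list[GovernmentRegulatorAI]) -> bool:
    # Unanimity: a single veto from any jurisdiction blocks the action.
    return all(r.approves(proposed_output) for r in regulators)

regulators = [GovernmentRegulatorAI(j) for j in ("EU", "US", "Japan")]
print(release_gate("draft reply to the user", regulators))            # True: released
print(release_gate("hazardous waste disposal shortcut", regulators))  # False: blocked
```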

Broadly speaking, in any bureaucratic apparatus there is a clear tradeoff between the ability to do things and the ability to prevent things. So sure, there are a small number of possible failure cases this prevents, but dozens more it does absolutely nothing against unless you make it extreme and unusable. A couple off the top of my head:

1: The government AI is inevitably going to end up less intelligent than a cutting-edge new AI, because governments adapt slowly, unless you forcibly prevent innovation beyond whatever limit they set. So the newer AI will be able to be tricky and clever.

2: You still have the exact same problem, but now the government AI could go rogue and bully or trick the others (or just convince Congress to pass new legislation that helps it take over the world).

3: Unless you completely and utterly lock down the employee AI's ability to communicate without consensus, it can work around the system via human compatriots. I.e., it builds 99% of a system which ends up with toxic sludge stored in a tank right over a river, but doesn't dump it in, and then, wink wink nudge nudge, the human employee understands they need to press a button that dumps the toxic sludge into the river, and the AI goes "oops, wasn't me. I'm going to reprimand that employee with a strongly worded letter (and give them a $50k bonus for unrelated reasons)".

Or, you know, it secretly sets up a subsidiary company that has a nearly identical AI but no oversight and lets it earn a bunch of money or take over the world or hack the government AI and all the usual existential threats.

And if you do require consensus for every email, then it becomes unworkable and unwieldy, or the whole thing amalgamates into one hybrid AI in effect, even if the parts run on separate servers.

The fundamental problem of AI alignment is to make the AI want to obey. Any epicycles trying to cripple its abilities either actually cripple its abilities (and the AI is useless), or they don't (and you aren't protected). And it's smarter than you (at least, in the future, presumably it will be).