
Small-Scale Question Sunday for October 26, 2025

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


Those are excellent points, thanks for the feedback.

It seems like there should be some way to increase AI safety by increasing the number of agents that need to reach consensus before the employeeAI is allowed to take an action. Each agent has the shared goal of human alignment but can reach different decisions based on its subgoals. The employeeAI additionally wants to please the end user, but the bossAI and regulatoryAI don't have that subgoal. In the analogy, sure, a clever employee could convince his boss that it's ok to dump hazardous waste down the drain, and when the EPA finds out the company could bribe the investigator or find a legal loophole. But that requires multiple failure points, instead of a single employee deciding on his own, with no oversight, whether it's ok to dump hazardous waste down the drain. The boss + regulatory structure also gives the employee an alignment incentive, because it's less work to dispose of the hazardous waste properly than to figure out how to game every level of protection.
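A toy sketch of that consensus gate in Python, using the agent names from the analogy and made-up approval checks (this is just the shape of the idea, not a real system):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Agent:
    """A reviewer that shares the goal of human alignment but has its own subgoals."""
    name: str
    approves: Callable[[str], bool]  # True if this agent thinks the proposed action is safe

def consensus_gate(action: str, reviewers: List[Agent]) -> bool:
    """The employeeAI's proposed action executes only if every reviewer signs off."""
    return all(agent.approves(action) for agent in reviewers)

# Hypothetical reviewers: bossAI and regulatoryAI don't share the
# "please the end user" subgoal, so each judges the action independently.
boss = Agent("bossAI", lambda act: "hazardous waste" not in act)
regulator = Agent("regulatoryAI", lambda act: "hazardous waste" not in act)

proposed = "dump hazardous waste down the drain"
if consensus_gate(proposed, [boss, regulator]):
    print("action released")
else:
    print("action blocked: no consensus")  # multiple failure points must all be gamed
```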

You’ve given me another thought about the regulatoryAI: it shouldn’t be programmed by the AI company. The regulatoryAI should be a collection of government AIs (from different countries or states, say) that look at the employeeAI’s output and decide whether it should be released to the end user. The regulatoryAIs must all agree, or the employeeAI isn’t allowed to complete the proposed action. This opens the door for governments to abuse their power and censor safe things they dislike, but there could be further processes to manually deter that abuse.
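A rough sketch of that unanimity rule, with made-up jurisdiction names and toy check functions standing in for the actual government AIs:

```python
from typing import Callable, Dict

# Hypothetical government AIs, each programmed by a different jurisdiction
# rather than by the AI company. The check functions are stand-ins.
government_ais: Dict[str, Callable[[str], bool]] = {
    "country_A_regulatoryAI": lambda out: "hazardous" not in out,
    "country_B_regulatoryAI": lambda out: "hazardous" not in out,
    "state_C_regulatoryAI":   lambda out: len(out) > 0,
}

def release_allowed(employee_output: str) -> bool:
    """Release the employeeAI's output only if every government AI agrees;
    a single veto from any jurisdiction blocks the proposed action."""
    return all(check(employee_output) for check in government_ais.values())

print(release_allowed("summarize this quarterly report"))           # True: unanimous
print(release_allowed("plan to dump hazardous waste down the drain"))  # False: vetoed
```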

Broadly speaking, in any bureaucratic apparatus there is a clear tradeoff between the ability to do things and the ability to prevent things. So sure, there's a small number of possible failure cases this prevents, but dozens more it does absolutely nothing against unless you make it extreme and unusable. A couple off the top of my head:

1: The government AI is inevitably going to end up less intelligent than a cutting-edge new AI, because it's going to adapt slowly, unless you forcibly prevent innovation beyond whatever limit it has. So the newer AI will be able to outwit it with tricks and cleverness.

2: You still have the exact same problem, but now the government AI could go rogue and bully or trick the others (or just convince Congress to pass new legislation that helps it take over the world).

3: Unless you completely and utterly lock down the employee AI's ability to communicate without consensus, it can work around the system via human compatriots. I.e., it does 99% of a process that ends up with toxic sludge stored in a tank right over a river, but doesn't dump it in, and then, wink wink nudge nudge, the human employee understands they need to press a button that dumps the toxic sludge into the river, and the AI goes "oops, wasn't me. I'm going to reprimand that employee with a strongly worded letter (and give them a $50k bonus for unrelated reasons)."

Or, you know, it secretly sets up a subsidiary company that has a nearly identical AI but no oversight and lets it earn a bunch of money or take over the world or hack the government AI and all the usual existential threats.

And if you do require consensus for every email, then it becomes unworkable and unwieldy, or it effectively amalgamates into one hybrid AI even if the agents run on separate servers.

The fundamental problem of AI alignment is to make the AI want to obey. Any epicycles trying to cripple its abilities either cripple its abilities, or they don't. And it's smarter than you (at least, in the future it presumably will be).