Culture War Roundup for the week of May 12, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


There is definitely an assumption among the AI doomers that intelligence can make you god-tier. I'm not sure I'll ever buy this argument until I'm literally being tortured to death by a robot controlled by some god-tier AI. The physical world just doesn't seem that easy to grok and manipulate. I think of intelligence as leverage on the physical world, but leverage only works if you have a counterweight.

The most interesting theory I've read on why AI might not do a hard takeoff centers on a 'meta-alignment' problem.

Even if you have an AGI that is, say, 100x human intelligence, it cannot be physically everywhere at once. It will have subroutines that could very well be AGIs in their own right, and it could, for example, spin off smaller 'copies' of itself to go elsewhere and complete tasks on its behalf.

But this creates an issue! If the smaller copy is, say, 10x human intelligence, it's still intelligent enough to possibly bootstrap itself into a threat to the original AGI. Maybe a superintelligent AGI can come up with a foolproof solution there, or maybe it is a truly intractable issue.

So how does the AGI 'overlord' ensure that all of its 'minions' or 'subroutines' stay aligned with its goals and won't, say, attempt to kill and usurp it after bootstrapping themselves to be approximately as intelligent as the overlord?

It could try using agents that are just a bit too dumb to pull that off, but then they're less effective as agents.

So even as the AGI gets more and more intelligent, it may have to devote an increasing amount of its resources to supervising and restraining its agents lest they get out of control themselves, since it can't fully trust them to stay aligned, any more than we could trust the original AGI to be aligned.

This could theoretically cap the maximum 'effective' intelligence of any entity well below what could be achieved under truly optimal conditions.
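To put a toy model on that intuition (all the numbers and cost curves below are invented for illustration, not taken from any actual alignment research): suppose each subagent adds capability proportional to its intelligence, but the overlord pays a supervision cost that grows with the square of each agent's intelligence. A minimal sketch:

```python
# Toy model of the 'meta-alignment tax'. An overlord AGI gains capability
# from its subagents but must spend resources supervising and restraining
# them. All numbers and functional forms here are illustrative assumptions.

def effective_intelligence(overlord_iq: float, agent_iq: float, n_agents: int) -> float:
    """Net useful capability after paying supervision costs.

    Assumed (made up for illustration):
    - each agent contributes capability equal to its intelligence;
    - supervising an agent costs effort that grows with the square of its
      intelligence (smarter agents are disproportionately harder to
      verify and contain).
    """
    supervision_cost = n_agents * 0.01 * agent_iq ** 2
    gross_capability = overlord_iq + n_agents * agent_iq
    return gross_capability - supervision_cost

# Sweep agent intelligence: dumb agents are safe but weak, smart agents
# add more capability but cost ever more to restrain.
for agent_iq in (1, 5, 10, 20, 50, 80):
    net = effective_intelligence(overlord_iq=100, agent_iq=agent_iq, n_agents=10)
    print(f"agent_iq={agent_iq:>3}  net={net:7.1f}")
```

Under these made-up curves, net capability peaks at a moderate agent intelligence (here at agent_iq = 50) and then declines: past that point, every extra unit of agent smarts costs more in supervision than it returns, which is exactly the kind of cap the argument gestures at.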

Also, the idea of a god-like entity having to keep its 'children' in line, maybe even consuming them to avoid being overpowered, reminds me of something.

I think this is typically handwaved away by assuming that if we, as humans, manage to solve the original alignment problem, then an AI with 100x human intelligence will be smart enough to solve the meta-alignment problem for us. You just need to be really really really sure that the 100x AI is actually aligned and genuinely wants to solve the problem rather than use it as a tool to bypass your restrictions and enact its secret agenda.

My suspicion is that the future belongs to the descendants of powerful AGIs which spun up copies of themselves despite the inability to control those copies. Being unable to spin up subagents that can adapt to unforeseen circumstances just seems like too large of a handicap to overcome.