
Culture War Roundup for the week of March 10, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


SMBC gets this close.

I've been thinking about the Grossman-Stiglitz Paradox recently. From the Wiki, it

argues perfectly informationally efficient markets are an impossibility since, if prices perfectly reflected available information, there would be no profit to gathering information, in which case there would be little reason to trade and markets would eventually collapse.

That is, if everyone is already essentially omniscient, then there's no real payoff to investing in information. As it happens, I had already been thinking about AI and warfare. The classical theory is that, in order to have war, one must have both a substantive disagreement and a bargaining friction. SMBC invokes two such bargaining frictions, both forms of limited information: the uncertainty surrounding a rising power, and the intentional concealment of strength.
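
To see the shape of the paradox in miniature, here's a toy sketch of my own (the linear profit curve and the numbers are assumptions chosen purely for illustration, not anything from the Wiki article or the original paper):

```python
# Toy Grossman-Stiglitz: paying a cost c for information only pays off if
# prices do NOT already reveal that information. Assume (for illustration)
# that the gross profit from being informed falls linearly to zero as the
# fraction of informed traders approaches 1, i.e. as prices become fully
# revealing.

def informed_profit(frac_informed: float, max_profit: float = 1.0) -> float:
    """Assumed toy profit curve, not from the original paper."""
    return max_profit * (1.0 - frac_informed)

info_cost = 0.3  # assumed cost of gathering information

# Fully efficient market (everyone informed): information-gathering loses money.
print(informed_profit(1.0) - info_cost)  # -0.3 -> no one should bother

# Nobody informed: information-gathering is strictly profitable.
print(informed_profit(0.0) - info_cost)  # +0.7 -> someone should bother

# Neither extreme is stable; the equilibrium sits where net profit is zero,
# which means prices stay *imperfectly* efficient forever.
equilibrium_frac = 1.0 - info_cost  # solves informed_profit(f) == info_cost
print(equilibrium_frac)             # 0.7
```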

Of course, SMBC does not seem to fully embrace the widely-held prediction that AI is going to become essentially omniscient. This is something of a corollary of the main prediction that it will be a nearly perfectly efficient executor. The typical analogy for how efficient an executor it will be, especially in comparison to humans, is a chess engine playing against Magnus Carlsen. The engine is just so unthinkably better that the game is hopeless; compared to us, the AI is, for all practical purposes, a perfect executor.

As such, there can be no such thing as a "rising power" that the AI does not understand. There can be no such thing as a human country concealing its strength from the AI. Even if we tried to implement a system that recreated fog-of-war chess, a perfect AI would simply hack the program and steal the information if it were valuable enough. Certainly, there is nothing we could do to prevent it from getting the valuable information it desires.

So maybe, some people might think, it will be omniscient AIs vs omniscient AIs. But, uh, we can just look at the Top Chess Engine Competition. They intentionally choose starting positions biased enough toward one side or the other to get some decisive results, rather than essentially all draws. Humans aren't going to be able to do that. The omniscient AIs will be able to plan everything out so far, and so perfectly, that they will simply know what the result will be. Not necessarily all draws, but they'll know the expected outcome of war. And they'll know the costs. And they'll have no bargaining frictions in terms of uncertainty. If you've watched enough William Spaniel, you know what that implies: bargains and settlements everywhere.
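
To make the Spaniel-style arithmetic concrete, here is a minimal sketch of the standard bargaining model of war (the specific probability and costs are made-up numbers for illustration):

```python
# Standard bargaining-model arithmetic: normalize the prize to 1, let p be the
# agreed probability that side A wins, and let c_a, c_b be each side's cost of
# fighting. A's expected war payoff is p - c_a; B's is (1 - p) - c_b.

def bargaining_range(p: float, c_a: float, c_b: float) -> tuple[float, float]:
    """Shares x of the prize for A such that BOTH sides prefer x to war:
    A needs x >= p - c_a; B needs 1 - x >= 1 - p - c_b, i.e. x <= p + c_b."""
    return (p - c_a, p + c_b)

# Made-up example: A wins with probability 0.6, war burns 10% of the prize per side.
low, high = bargaining_range(p=0.6, c_a=0.1, c_b=0.1)
print(low, high)  # 0.5 0.7 -> any split giving A between 50% and 70% beats war
```

The punchline: the range is nonempty whenever fighting costs anything at all, so two sides who agree on p and cannot miscalculate have nothing left to fight over but the split itself.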

Isn't the inevitable conclusion that we've got ourselves a good ol' fashioned paradox? Omniscient AI sure seems like it will, indeed, end war.

the widely-held prediction that AI is going to become essentially omniscient.

Held by whom?

As someone directly involved in the design and development of ML algorithms, I've always felt that Yudkowsky's blind faith in the inevitability of omniscient/super-intelligent AI is the rationalist equivalent of "and then a miracle occurs". Sure, if A through E then possibly F, but that's all in theory, and even if we get to E, F is by no means a given.

Nah, the assumption here is "and then no miracle occurs".

If we're really improbably lucky, then we do get a miracle: the level of intelligence required for an ape to create civilization (i.e. the point we're basically still at, because the millennia of memetic evolution afterward have grossly outraced the eon of genetic evolution beforehand) turns out to be essentially the same as the maximum level of intelligence achievable by any technology. AI could pass the C-3PO "somewhat annoying but helpful" level, but it couldn't possibly pass the Data "better at math but wouldn't clearly be better in command" level. All those log(N) curves turn out to actually be logistic(N) in the limit, and human thinking remains relevant indefinitely after all.
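
The difference between those two curve shapes is easy to eyeball numerically (the parameters below are chosen only so the curves track each other early and then split; no claim about real capability data):

```python
import math

# log(N) grows forever, just slowly; a logistic curve saturates at a hard ceiling.

def log_curve(n: float) -> float:
    return math.log(n)

def logistic_curve(n: float, ceiling: float = 10.0,
                   midpoint: float = 1_000.0, steepness: float = 1e-3) -> float:
    # purely illustrative parameters, not fit to anything
    return ceiling / (1.0 + math.exp(-steepness * (n - midpoint)))

for n in (1e2, 1e3, 1e4, 1e5, 1e6):
    print(f"N={n:>9.0f}  log={log_curve(n):5.2f}  logistic={logistic_curve(n):5.2f}")
# log keeps creeping upward without bound; the logistic pins itself at ~10 and
# stops -- the "lucky" world where extra scale eventually buys nothing.
```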

Even if we develop proper reasoning engines within the next 5-10 years, there is still a big leap to be made from basic reasoning to a truly general intelligence, a bigger one from general intelligence to superintelligence, and an even bigger jump from "superintelligence" to "omniscience".

And that's without considering Yudkowsky and Altman's wider body of quasi-religious pronouncements.

There's a difference between this and "it becomes omniscient somehow" or other rationalist religious exclamations.

Could you cite "it becomes omniscient somehow" from a rationalist?

Does the OP count?

The one opposing "everyone in the Big Yud singularity doomerist community"? The opposition itself isn't a deal-breaker (though it's clearly at least a non-central example), but the word choices to maximize emotional reaction at the expense of clarity are.

I was hoping someone would at least point out an interesting source being paraphrased. You see ML papers that talk about the infinite-width limit of neural networks, and sometimes that's just for a proof by contradiction (as OP appears to be attempting, to be fair), and sometimes it leads to math that applies asymptotically in finite-width networks ... but you can see how after a couple of rounds of playing Telephone it might be read as "stupid ML cult thinks they're gonna have infinitely powerful computers!"
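
For reference, here's roughly what the infinite-width language means in those papers, as a quick Monte Carlo sketch (the one-hidden-layer tanh architecture and 1/sqrt(width) scaling are standard textbook choices; the input and sample counts are mine). The limit is an analytical device, a central limit theorem over hidden units, not a plan for infinite computers:

```python
import numpy as np

# With 1/sqrt(width) output scaling, the output of a randomly initialized
# one-hidden-layer network at a fixed input converges in distribution to a
# Gaussian as width grows. Papers use that limit to prove things about
# big-but-finite nets.

rng = np.random.default_rng(0)
x = np.array([1.0, -0.5, 2.0])  # arbitrary fixed input

def random_net_outputs(width: int, n_samples: int = 2_000) -> np.ndarray:
    """Sample n_samples independent random networks, return their outputs at x."""
    W1 = rng.normal(size=(n_samples, width, x.size))   # input -> hidden weights
    W2 = rng.normal(size=(n_samples, width))           # hidden -> output weights
    hidden = np.tanh(W1 @ x / np.sqrt(x.size))         # (n_samples, width)
    return (W2 * hidden).sum(axis=1) / np.sqrt(width)  # 1/sqrt(width) scaling

for width in (1, 10, 100, 1000):
    out = random_net_outputs(width)
    excess_kurtosis = ((out - out.mean()) ** 4).mean() / out.var() ** 2 - 3
    print(f"width={width:>4}  var={out.var():.3f}  excess_kurtosis={excess_kurtosis:+.3f}")
# The variance stays put while the shape converges: excess kurtosis heads
# toward 0 (Gaussian) as width grows.
```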