
Culture War Roundup for the week of May 12, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Thank you very much for this post. Your three-question analysis really helps highlight my differences with most people here on these issues, because I weight #2 being "no" even higher than you do (indeed, higher than I weight #1, which I also think is more likely "no" than "yes").

That said, I'd like to add to (and maybe push back slightly on) some of your analysis of the question. You mostly make it about human factors, whereas I'd place it more on the nature of intelligence itself. You ask (rhetorically):

We probably seem magical to animals, with things like guns, planes, tanks, etc. If that’s the difference between animal intelligence → human intelligence, shouldn’t we expect a similar leap from human intelligence → superhuman intelligence?

And my (non-rhetorical) answer is no, we shouldn't expect that at all, because of diminishing returns.

Here's where people consistently mistake my argument, no matter how many times I explain it: I am NOT talking about humans being near the upper limit of how intelligent a being can be. I'm talking about limits on how much intelligence matters in power over the material world.

Implied in your question above is the assumption that if entity A is n times smarter than B (as with, say, humans and animals), then it must be n times more powerful; that if a superhuman intelligence is as much smarter than us as we are smarter than animals, it must also be as much more powerful than us as we are than animals. I don't think it works that way. I expect that initial gains in intelligence, relative to a "minimally-intelligent" agent, provide massive gains in efficacy in the material world… but each subsequent increase in intelligence almost certainly provides smaller and smaller gains in real-world efficacy. Again, the problem isn't a limit on how smart an entity we can make; it's a limit on the usefulness of intelligence itself.
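To make the shape of the claim concrete, here's a toy sketch (purely illustrative; the functional form and the constants are arbitrary assumptions I've picked, not a model of real cognition): treat real-world efficacy as an asymptotic function of intelligence, and watch the marginal gains shrink.

```python
import math

# Toy asymptotic curve: efficacy(I) = E_MAX * (1 - exp(-K * I)).
# E_MAX and K are arbitrary constants chosen only to illustrate the shape.
E_MAX = 100.0  # hypothetical ceiling on what intelligence alone can buy
K = 0.5        # arbitrary scaling constant

def efficacy(intelligence: float) -> float:
    """Map an abstract 'intelligence' score to real-world efficacy (toy model)."""
    return E_MAX * (1.0 - math.exp(-K * intelligence))

# Equal steps in intelligence buy smaller and smaller steps in efficacy.
for i in range(0, 12, 2):
    marginal = efficacy(i + 2) - efficacy(i)
    print(f"intelligence {i + 2:>2}: efficacy {efficacy(i + 2):6.2f}, "
          f"gain from the last step {marginal:6.2f}")
```

On a curve like that, the early climb captures almost all of the available headroom, and everything past it is fighting over the remainder.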

Now, I've had a few people acknowledge this point, and accept that, sure, some asymptotic limit on the real-world utility of increased intelligence probably exists. They then go on to assert that surely, though, human intelligence must be very, very far from that upper limit, and thus there must still be vast gains to be had from superhuman intelligence before reaching that point. Me, I argue the opposite. I figure we're at least halfway to the asymptote, and probably much more than that — that most of the gains from intelligence came in the amoeba → human steps, that the majority of problems that can be solved with intelligence alone can be solved with human level intelligence, and that it's probably not possible to build something that's 'like unto us as we are unto ants' in power, no matter how much smarter it is. (When I present this position, the aforementioned people dismiss it out of hand, seeming uncomfortable to even contemplate the possibility. The times I've pushed, the argument has boiled down to an appeal to consequences; if I'm right, that would mean we're never getting the Singularity, and that would be Very Bad [usually for one or both of two particular reasons].)

Interesting point. I'd say your position is certainly at least plausible. The downside is that it's yet another "hard to say for certain" take. Add it to the pile with all the rest, I guess.

To push back a bit: even if it ended up being basically true that intelligence beyond the human level wasn't good for much, wouldn't it still be useful to "think" far faster than humans could? And wouldn't it still be useful to be able to spin up an arbitrary number of genius AIs to think about any problem you wanted to?

And wouldn't it still be useful to be able to spin up an arbitrary number of genius AIs to think about any problem you wanted to?

Sure, but more in the "putting people out of work"-style future (a la Tyler Cowen's "Average is Over") than anything like the revolutionary futures envisioned by singularitarians.

Now, I've had a few people acknowledge this point, and accept that, sure, some asymptotic limit on the real-world utility of increased intelligence probably exists. They then go on to assert that surely, though, human intelligence must be very, very far from that upper limit, and thus there must still be vast gains to be had from superhuman intelligence before reaching that point. Me, I argue the opposite. I figure we're at least halfway to the asymptote, and probably much more than that — that most of the gains from intelligence came in the amoeba → human steps, that the majority of problems that can be solved with intelligence alone can be solved with human level intelligence, and that it's probably not possible to build something that's 'like unto us as we are unto ants' in power, no matter how much smarter it is. (When I present this position, the aforementioned people dismiss it out of hand, seeming uncomfortable to even contemplate the possibility. The times I've pushed, the argument has boiled down to an appeal to consequences; if I'm right, that would mean we're never getting the Singularity, and that would be Very Bad [usually for one or both of two particular reasons].)

This seems like a potentially interesting argument to observe play out, but it also seems close to a fundamental unknown unknown. I'm not sure how one could meaningfully measure where we are along this theoretical asymptote in the relationship between intelligence and utility, or whether there really is an asymptote at all. What arguments convinced you both that this relationship would be asymptotic or at least have severely diminishing returns, and that we are at least halfway along the way to this asymptote?

What arguments convinced you both that this relationship would be asymptotic or at least have severely diminishing returns, and that we are at least halfway along the way to this asymptote?

Mostly personal observation of the utility (or lack thereof) of the higher levels of human intelligence versus the average, combined with general philosophic principles that favor diminishing returns and asymptotic limits as the null hypothesis, and a natural skepticism toward claims of vast future potential (which is also why I'm deeply irritated by Eric Weinstein's recurring "we need new physics" riff, and by similar arguments held forth by, say, UFOlogists).

Edit: consider also, as toy examples, the utility of knowing pi to an increasing number of digits; or the utility of increasing levels of recursion in modeling other agents and the speed of convergence to game-theoretic equilibria.
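For the pi example, a quick back-of-the-envelope calculation makes the diminishing returns vivid. This is my own throwaway illustration, with an Earth-sized circle picked arbitrarily as the payoff: each extra digit cuts the error by roughly a factor of ten, and after a handful of digits the remaining error is already smaller than anything you could measure.

```python
import math

EARTH_RADIUS_M = 6_371_000  # rough mean radius of the Earth, in metres

true_circumference = 2 * math.pi * EARTH_RADIUS_M

# Error in the circumference of an Earth-sized circle when pi is rounded
# to an increasing number of decimal places.
for digits in range(1, 11):
    pi_approx = round(math.pi, digits)
    error_m = abs(2 * pi_approx * EARTH_RADIUS_M - true_circumference)
    print(f"pi to {digits:2d} decimal places: error ≈ {error_m:14.6f} m")
```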