Culture War Roundup for the week of February 17, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Zeno's AGI.

For a long time, people considered the Turing Test the gold standard for AI. Later, better benchmarks were developed, but for most laypeople with a passing familiarity with AI, the Turing Test meant something. And so it was a surprise that when LLMs flew past the Turing Test in 2022 or 2023, there weren't trumpets and parades. It just sort of happened, and people moved on.

I wonder if the same will happen with AGI. To quote hype-man Sam Altman:

trying Grok 3 has been much more of a "feel the AGI" moment among high-taste testers than i expected!

Okay, actually he said that about GPT-4.5, but you get the point. The last 6 months have seen monumental improvements in LLMs, with DeepSeek making them much more efficient and xAI proving that the scaling hypothesis still has room to run.

Given time, AI has reliably beaten any benchmark we throw at it (remember the Winograd schema?). I think if, 10 years ago, someone had said that AI could solve PhD-level math problems, we'd have said AGI had already arrived. But it hasn't. So what ungameable benchmarks remain?

  1. AGI should lead to massive increases in GDP. We haven't seen productivity even budge upwards despite dumping trillions into AI. Will this change? When?

  2. AI discoveries with minimal human intervention. If a genius-level human had the breadth of knowledge that LLMs do, they would no doubt make all sorts of novel connections. To date, no AI has done so.

What stands in the way?

It seems like context windows might be the answer. For example, suppose we wanted to make novel discoveries by prompting an AI: we might prompt a chain-of-reasoning model to draw connections between disparate fields and stop when it finds something novel. But with current technology, it would fill up its context window almost immediately and then start to go off the rails.
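
To make that concrete, here's a minimal sketch (in Python) of the kind of loop I mean. Everything in it is assumed for illustration: `call_llm` is a hypothetical stand-in for whatever chat-completion API you'd actually use, the 128k-token budget is made up, and the token counter is a crude heuristic. The point is just that feeding the model's own reasoning back in blows through the budget long before a genuine novelty search could terminate.

```python
CONTEXT_BUDGET = 128_000  # tokens; illustrative, not any specific model's limit


def call_llm(messages: list[dict]) -> str:
    """Hypothetical stand-in for any chat-completion endpoint."""
    raise NotImplementedError


def rough_token_count(messages: list[dict]) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return sum(len(m["content"]) for m in messages) // 4


def search_for_novel_connections(field_a: str, field_b: str) -> list[str]:
    messages = [{
        "role": "user",
        "content": f"Reason step by step about non-obvious connections "
                   f"between {field_a} and {field_b}. Flag anything novel.",
    }]
    findings: list[str] = []
    while rough_token_count(messages) < CONTEXT_BUDGET:
        reply = call_llm(messages)
        findings.append(reply)
        # Accumulating the model's own chain of reasoning is what fills the
        # window almost immediately; past the budget, truncating or
        # summarizing the history is where the chain goes off the rails.
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user",
                         "content": "Continue: go deeper or pivot to a new angle."})
    return findings
```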

We stand at a moment in history where AI advances at a remarkable pace and yet is only marginally useful, basically just a better Google/Stack Overflow. It is as smart as a genius-level human, far more knowledgeable, and yet also remarkably stupid in unpredictable ways.

Are we just one more advance away from AGI? It's starting to feel like it. But I also wouldn't be surprised if life in 2030 is much the same as it is in 2025.

budge upwards despite dumping trillions into AI

a) We didn't.

b) It takes time to integrate new tech into business and to figure out how best to use it. Reasoner models are what, 3 months old now?

But I also wouldn't be surprised if life in 2030 is much the same as it is in 2025.

You'll be a little lucky if you're even alive. Pacific War 2: Electric Boogaloo and its possible thermonuclear complications aside, there are many, many people who think like Ziz, there doesn't seem to be a reliable way of preventing jailbreaks, and making very deadly pathogens that kill in a delayed manner is not hard if you don't care that much about your own survival. And in any case, it looks like for ~$500k people will be able to run their own open-source AGI in isolation, meaning moderately rich efilist lunatics could run their own shitty biolabs with its help and spend as much time figuring out jailbreaks as needed, with no risk of snitching.

And in any case, it looks like for ~$500k people will be able to run their own open-source AGI in isolation, meaning moderately rich efilist lunatics could run their own shitty biolabs with its help and spend as much time figuring out jailbreaks as needed, with no risk of snitching.

Possible, but it's also possible that you can just cheaply run massive automated genetic testing on trillions of particles, with billions of sensors located at every major human transit point, picking up those pathogens before their delayed death sentence kicks in and before they spread as widely as their proponents hope. It's all fiction for now; we'll see who wins (or perhaps not). I'm pretty optimistic humanity will survive beyond 2030.

There's no known disease that could wipe out humanity even if every single person got it simultaneously. Prion disease is essentially the only 100% fatal disease, and it does not kill quickly enough to stop reproduction.

I'm not a strong domain expert in microbiology, but it strikes me as not an insurmountable challenge to design a pathogen that would kill 99.99% of humans. I think if you gave me maybe $10 million and a way to act without drawing adverse attention, I'd be able to pull it off (with lots of time reading textbooks, or maybe an additional master's).

The primary constraint would be access to a BSL-4 lab, because otherwise the miscreants would probably be the first to die to a prototype of the desired strain.

We already have gain-of-function research, and at the bare minimum, serial passage isn't that difficult. With expertise roughly equivalent to a Master's student's, or a handful of them, it would be easy enough to gene-edit a virus, cribbing sections from a variety of pathogens till you get one you desire. I see no reason in principle why you couldn't optimize for contagiousness, a long incubation period and massive lethality.

This is easy for most nation-states, but thankfully most of them aren't omnicidal. Very difficult for lone actors, moderately difficult if they have access to scientific labs and domain expertise. I think we've been outright lucky in that no organized group has really tried.

Just because there isn't an existing pathogen that kills all humans (and there isn't, because we're alive and talking), doesn't mean it isn't possible.

I am not qualified to make technical statements about the ease of developing biological weapons but let me apply some outside-the-box thinking.

You are almost certainly wrong about how easy this is.

I am basing this on computer engineers who make statements like "an undergrad could build this in a weekend" and are wrong almost 100% of the time. Things always take longer than you think.

I don't know what specific obstacles you would face on your way to build a bioweapon, but I predict that you don't either. It's not the known unknowns that get you. It's the unknown unknowns.

Please don't try to prove me wrong, though :) And I agree that serious bioweapons are likely within the capacity of major states.

I have a reasonable plan in mind for what I'd do with the $10 million. I'd probably pivot away from my branch of medicine and ingratiate myself with an Infectious Disease department, or just sign up for a master's in biology.* The biggest hurdle would be the not-getting-caught part, but there's an awful lot of Legitimate Biology you can do that helps the cause, and ways to launder bad intent. Just look at the apologia for gain-of-function research.

There's also certainly Knightian uncertainty involved, but there are bounds to how far you can go while pointing to unknown unknowns. I don't think I'd need $1 billion to do it, just as I'm confident it couldn't be done for $3.50 and a petri dish.

And whatever the actual cost and intellectual horsepower + domain knowledge is, it only tends downwards, and fast!

*If you can't beat disease, join them