This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Zeno's AGI.
For a long time, people considered the Turing Test the gold standard for AI. Later, better benchmarks were developed, but for most laypeople with a passing familiarity with AI, the Turing Test meant something. And so it was a surprise that when LLMs flew past the Turing Test in 2022 or 2023, there weren't trumpets and parades. It just sort of happened, and people moved on.
I wonder if the same will happen with AGI. To quote hype-man Sam Altman:
Okay, actually he said that about GPT-4.5, but you get the point. The last six months have seen monumental improvements in LLMs, with DeepSeek making them much more efficient and xAI proving that the scaling hypothesis still has room to run.
Given time, AI has reliably beaten any benchmark we throw at it (remember the Winograd schemas?). I think that if, 10 years ago, someone had said AI could solve PhD-level math problems, we'd have said AGI had already arrived. But it hasn't. So what ungameable benchmarks remain?
- AGI should lead to massive increases in GDP. We haven't seen productivity even budge upwards despite dumping trillions into AI. Will this change? When?
- AI discoveries with minimal human intervention. If a genius-level human had the breadth of knowledge that LLMs do, they would no doubt make all sorts of novel connections. To date, no AI has done so.
What stands in the way?
It seems like context windows might be the answer. For example, suppose we wanted to make novel discoveries by prompting an AI: we might ask a chain-of-reasoning model to draw connections between disparate fields and stop when it finds something novel (a sketch of such a loop follows below). But with current technology, it would fill up the context window almost immediately and then start to go off the rails.
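As a rough illustration of where the context budget bites, here's a minimal sketch of such a loop. Everything in it is assumed for illustration: `query_llm` stands in for whatever chat-completion API you like, the `NOVEL:` convention is invented, and the token figures are ballpark.

```python
# Hypothetical sketch of a "find a novel connection" loop; query_llm
# stands in for any chat-completion API, and all numbers are made up.

MAX_CONTEXT_TOKENS = 128_000  # a typical frontier-model window in 2025

def count_tokens(text: str) -> int:
    # Crude approximation: roughly 4 characters per token in English.
    return len(text) // 4

def search_for_novel_connection(query_llm, fields, max_steps=1000):
    transcript = f"Draw a novel, testable connection between: {', '.join(fields)}"
    for _ in range(max_steps):
        reply = query_llm(transcript)
        transcript += "\n" + reply          # the reasoning trail only grows
        if reply.startswith("NOVEL:"):      # invented convention for "done"
            return reply
        if count_tokens(transcript) > MAX_CONTEXT_TOKENS:
            # The failure mode described above: the window overflows long
            # before the search converges, and summarize-and-restart tends
            # to lose exactly the subtle threads a discovery depends on.
            raise RuntimeError("context exhausted before anything novel")
    return None
```

The point of the sketch is that the transcript is append-only: the options are stop, overflow, or compress, and compression throws away the partial connections that made the search worth running in the first place.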
We stand at a moment in history where AI advances at a remarkable pace and yet is only marginally useful, basically just a better Google/Stack Overflow. It is as smart as a genius-level human, far more knowledgeable, and yet also remarkably stupid in unpredictable ways.
Are we just one more advance away from AGI? It's starting to feel like it. But I also wouldn't be surprised if life in 2030 is much the same as it is in 2025.
a) We didn't.
b) It takes time to integrate new tech into business and to figure out how best to use it. Reasoner models are, what, 3 months old now?
You'll be a little lucky if you're even alive. Pacific War 2: Electric Boogaloo and its possible thermonuclear complications aside, there are many, many people who think like Ziz, there doesn't seem to be a good way of reliably preventing jailbreaks, and making very deadly pathogens that kill in a delayed manner is not hard if you don't care much about your own survival. And in any case, it looks like for ~$500k people will be able to run their own open-source AGI in isolation, meaning moderately rich efilist lunatics could run their own shitty biolabs with its help and spend as much time figuring out jailbreaks as needed, with no risk of snitching.
Possible, but it's also possible that you can just cheaply run massive automated genetic testing on trillions of particles, with billions of sensors located at every major human transit point, picking up those pathogens before their delayed death sentence kicks in and before they spread as widely as their proponents hope. It's all fiction for now; we'll see who wins (or perhaps not). I'm pretty optimistic humanity will survive beyond 2030.
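For what it's worth, the detection half of that arms race is mundane engineering. Here's a toy sketch of screening sequencing reads against known threat signatures; the function names, the k-mer size, and the threshold are all illustrative assumptions, not any real surveillance system's design.

```python
# Toy sketch of pathogen screening by k-mer signature matching. In a
# real deployment the signatures would come from a curated database and
# the reads from sequencers at monitored sites; this is illustrative only.

K = 31  # k-mer length in the range commonly used by bioinformatics tools

def kmers(seq: str, k: int = K) -> set[str]:
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def build_signature_index(threat_genomes: dict[str, str]) -> dict[str, set[str]]:
    """Precompute the k-mer set of each known threat genome."""
    return {name: kmers(genome) for name, genome in threat_genomes.items()}

def screen_read(read: str, index: dict[str, set[str]], threshold: float = 0.5):
    """Flag a read whose k-mers overlap a threat signature heavily enough."""
    read_kmers = kmers(read)
    if not read_kmers:
        return []
    hits = []
    for name, sig in index.items():
        overlap = len(read_kmers & sig) / len(read_kmers)
        if overlap >= threshold:
            hits.append((name, overlap))
    return hits
```

The hard parts are economic and logistical (billions of sensors, sample prep, false-positive rates), not algorithmic.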
There's no known disease that could wipe out all life on earth if every single person got it simultaneously. Prion disease is essentially the only 100% fatal disease and it does not kill quickly enough to stop reproduction.
I'm not a strong domain expert in microbiology, but it strikes me as a not particularly insurmountable challenge to design a pathogen that would kill 99.99% of humans. I think if you gave me maybe $10 million and a way to act without drawing adverse attention, I'd be able to pull it off (with lots of time reading textbooks, or maybe an additional master's).
The primary constraint would be access to a BSL-4 lab, because otherwise the miscreants would probably be the first to die to a prototype of the desired strain.
We already have gain-of-function research, and at the bare minimum, serial passage isn't that difficult. With expertise roughly equivalent to a master's student, or a handful of them, it would be easy enough to gene-edit a virus, cribbing sections from a variety of pathogens till you get the one you desire. I see no reason in principle why you couldn't optimize for contagiousness, a long incubation period and massive lethality.
This is easy for most nation-states, but thankfully most of them aren't omnicidal. Very difficult for lone actors, moderately difficult if they have access to scientific labs and domain expertise. I think we've been outright lucky in that no organized group has really tried.
Just because there isn't an existing pathogen that kills all humans (and there isn't, because we're alive and talking), doesn't mean it isn't possible.
Sure, but everything you describe here are things that
This is a huge problem for ending life on Earth; living is 100% fatal, but humans keep having kids. If you set an incubation period that is too long, then people can just live through it. I also think a long incubation period would dramatically raise the chances that your murdercritter mutates to a less harmful form.

Well, prion disease may be associated with spiroplasma bacterial infection, but it still hasn't killed all humans.
I think it's far from clear that AI mitigates the issue more than it currently exacerbates it. I agree that it's already technically possible, and that we're only preserved by the modest sanity of nations and a lack of truly motivated, resource-imbued bad actors.
In a world with ubiquitous AI surveillance, environmental monitoring and lock-downs of the kind of biological equipment that modern labs can currently buy without issue, it would clearly be harder to cook up a world-ending pathogen.
We don't live in that world.
We currently reside in one where LLMs already possess the requisite knowledge to aid a human bad actor in following through with such a plan. There are jailbroken models that would provide the necessary know-how. You could even phrase it as benign questioning; a lot of advanced biotechnology is inherently dual-use. Even GOF adherents claim it has benefits, though most would say they don't match the risks.
In a globalized world, a long incubation period could merely be a matter of months. A bad actor could book a dozen intercontinental flights and start a chain reaction (a toy model of that incubation arithmetic follows below). You're correct that over time a pathogen tends to mutate towards being less lethal to its hosts, but this does not strike me as happening quickly enough to make a difference in an engineered strain. The Bubonic Plague ended largely because the most susceptible humans died and the remaining two-thirds of the population had some degree of innate and then acquired immunity.

Look at HIV: it's been around for half a century, but it is no less lethal without medication than when it started out (as far as I'm aware).
Prions would not be the go-to. Too slow, both in terms of spread and time to kill. Good old viruses would be the first port of call.
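To put toy numbers on the incubation point above: here is a minimal compartment-model sketch, with entirely made-up parameters, of how a long silent-but-infectious stage inflates the number of people already infected by the time the outbreak becomes visible. Nothing in it describes any real pathogen.

```python
# Toy compartment model of "silent spread": people are infectious but
# asymptomatic for silent_days, then progress to a symptomatic stage
# where the outbreak finally becomes visible. All parameters are
# illustrative placeholders, not estimates for any real disease.

def silent_spread(N=8e9, r_contact=0.5, silent_days=60, days=365):
    progression = 1.0 / silent_days        # rate of leaving the silent stage
    S, I, D = N - 1.0, 1.0, 0.0            # susceptible / silent / symptomatic
    for day in range(days):
        new_infections = r_contact * S * I / N
        new_symptomatic = progression * I
        S -= new_infections
        I += new_infections - new_symptomatic
        D += new_symptomatic
        if D >= 1_000:                     # roughly when the world notices
            return day, N - S              # detection day, total ever infected
    return None

# Compare silent_spread() against silent_spread(silent_days=5): the longer
# the silent stage, the larger the already-infected pool at detection time.
```

The direction of the effect is the whole point: lengthening the silent stage both speeds the exponential (fewer people leave the infectious pool) and delays the signal that anything is wrong.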
It kinda seems like we do live in a world where any attempt to kill everyone with a deadly virus would be met with AI being used to find a vaccine or some other treatment.
They mutate so rapidly, though, and humans have survived even the worst of the worst (such as rabies).
That's not to say you couldn't kill a lot of people with an infectious agent. You could kill a lot of people with good old-fashioned smallpox! I just think the vision of a world sterilized of human life is far-fetched.
It's ironic, though: the people who are most worried about unaligned AI are also the people most likely to spell out, in future AI training content, plausible ways AI could kill everyone on Earth. Which means that granting, for the purposes of argument, that unaligned agentic AI is a threat, they increase the risk of an unaligned agentic AI attempting to use a viral murder weapon, regardless of whether that would actually be reliable or effective.
Sorry, side tangent. I don't take the RISKS of UNALIGNED AI nearly as seriously as most of the people on this board, but for the sake of hedging I do sort of hope those people are considering implementing the unaligned-AI deterrence plans I came up with after reflecting on it for 5 minutes, along with posting HERE IS HOW TO KILL EVERY SINGLE HUMAN BEING over and over again on the Internet :p

ETA: not trying to launch a personal attack on you (or anyone on the board), to be clear. AFAIK none of y'all wrote the step-by-step UNALIGNED AI TAKES OVER THE WORLD guide that I read somewhere a while back. (But if you DID, I'm not trying to start a beef, I just think it's ironic!)
The downside to this is having to hope that whatever mitigation is in place is robust and effective enough to make a difference by the time the outbreak is detected! The odds of this aren't necessarily terrible, but do you want it to have come to that?
I expect that a misaligned AI competent enough to do this would be intelligent enough to come up with such an obvious plan on its own, regardless of how often it was discussed in niche internet forums.
How would you stop it? The existing scrapes of internet text suffice. To censor it from the awareness of a model would require stripping out knowledge of loads of useful biology, as well as the obvious fact that diseases are a thing, and that they reliably kill people. Something that wants to kill people would find that connection as obvious as 2+2=4, even if you remove every mention of bioweapons from the training set. If it wasn't intelligent enough to do so, it was never a threat.
Everything I've said is dead simple; I haven't gone into any detail that a biology undergrad or a well-read nerd might not come up with. As far as I'm concerned, that's sufficient to demonstrate the plausibility of my arguments without empowering adversaries in any meaningful way. You won't catch me sharing a .txt file with the list of codons necessary for Super Anthrax just to win an internet argument.
My really vague understanding is that long incubation times give the immune system more time to catch the infection early, which doesn't matter as much when the pathogen is very new and nobody has antibodies. So eventually everything that had a long incubation period evolves to a shorter one on its second pass through the population.
In theory long incubation + 100% mortality rate seems like it would take out a good chunk of the population in the first wave, but in practice people would just Madagascar through it.
Oh sure, but depending on the agent (particularly if it's viral, right?), if you're spreading it to billions of people you're introducing a lot of room for it to gain mutations that might make it less deadly. At least that would be my guess.
Definitely seems plausible. Hopefully instead of using AI to create MURDERVIRUSES people will use it to scan wastewater for signs of said MURDERVIRUSES.