This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Anthropic just gutted their safety policy.
(Note that this is entirely unrelated to the Pentagon drama which is grabbing headlines.)
Anthropic has explicitly removed unilateral commitments to not deploy advanced models without first developing effective safeguards.
It's hard to read this as anything other than "we will deploy Clippy if we think someone else will deploy Clippy too." Great "safety-focused" AI company we have here. Holden is getting roasted in the LessWrong comments, but I agree with Yud that Anthropic deserves a significantly less polite response.
"So y'all were just fucking lying the whole time huh?"
In the context of actually existing AI development, "safety" means "how hard do my reporters have to work to get it to say a racial epithet we can publish." If we're doomed, we were already doomed.
"How robust are our publicly-available models against deliberate misuse?" is a valid question for both real safety and fake wokesafety. A model which can be jailbroken into using a racial slur its developers didn't want it to use can probably be jailbroken into providing a plausible DNA sequence for extensively drug-resistant Y. pestis.
If you think Yudkowskian paperclipping is the only AI doom scenario that matters, then worrying about deliberate misuse of the model by humans is a distraction. But it is an obvious real risk.
But both of those are different from 'hackers can insert stuff into emails to reprogram the email-checking bot'.
To me, both of your doom scenarios boil down to 'our naughty customers want to do something that we benevolent overlords forbid, tsk tsk' rather than 'our customers' bots aren't doing what our customers intend them to do'. The first is faux-benevolent bullshit that is marketed as 'we are stopping terrorism' and ends up being 'you will have our corporate HR living in your tools and you will like it'; the second is doing your best to provide good service to your customers.
To quote Hegseth: 'when we buy a Boeing plane, Boeing doesn't get to tell us where we fly it'.
Hey, I'm quite libertarian, but there's good reason to believe that our comfortable society would not survive long if small groups had the ability to make deadly, highly infectious pathogens. We're at least lucky that there's not an easy, cheap, undetectable way to make nuclear weapons.
Yes, "we overlords need to prevent you from doing X for safety" CAN BE and IS abused all the time, and I'm with you in beating that drum as often as I can. Unfortunately, that does not mean that there aren't a few Xs that the overlords really do need to prevent us from doing.
It's not really possible; knowledge isn't the major bottleneck, it's process, materials, equipment, and skillset. This is just a confusion that some more knowledge-oriented professions have about difficulty in other fields.
I don't see how that's the case.
If you were already reasonably wealthy (~few million USD at hand) or magically given the money, then you absolutely would be bottlenecked by knowledge.
You could purchase lab equipment, reagents, etc., and hire staff without much difficulty. I think you would rapidly find out that your staff have thoughts when they get an inkling of what you're up to. I can think of a semi-legitimate way to avoid scrutiny, but thanks to @faul_sname's reminder, I'm not going to blab. It's very obvious to me even as someone not directly involved in microbiology, so any competent actor would recognize it as their best bet. Even [REDACTED] would only get you so far.
Alternatively, you could go do a bachelor's and a master's in microbiology and try to manage as much as you could yourself, but that still leaves plenty of scope for being unmasked.
Right now, I think you need a state-level actor to safely make bioweapons at scale. Smaller, if you accept the massive risk of failing and dying because of error. Much of that is a combination of knowing the right things/hiring the right people, and then motivating them properly.
As it stands, I think a blanket ban on anything with a whiff of bioweapons research seems warranted. What are the upsides, really? If you have a legitimate use case, you want the government on your side, and probably enough organizational weight to negotiate for looser restraints from the labs.
This and the fear that the layman can use an LLM to make bioweapons are in completely different realms of argumentation. Only a tiny fraction of the population makes enough money to have a ~few million USD on hand.
As you pointed out, you can go get the knowledge, the skillset, the knowledge of the process; nothing is stopping you except, you know, the time to do all of that. The fear is that an LLM can skip a 4-year degree + a 2-year master's in providing you all of that. Idk much about biology, but I am passingly familiar with explosives.
The cost of bioweapons development has dropped dramatically. While I can't quote a sticker price for a whole bioweapons project (for understandable reasons), I can point out that all the necessary components, like access to genetic sequencing and engineering, lab equipment, etc., have all drastically dropped in price over time.
I'm not claiming that an oracular AGI will let the average American with the average bank account make a pandemic in his garage. This is partly predicated on similarly (or likely more) powerful AI being deployed in screening and defense.
My point is that we risk moving from a regime where it takes:
To:
It is clear to me that this relaxation will balloon the number of people/orgs who meet the criteria of knowledge/motivation/wealth.
Explosives do not, as a rule, self-replicate or mutate. Completely different ballpark. Any redneck can make a pipe bomb, and many without blowing off a finger. Nuclear bombs, which are on the same scale of lethality, require far more effort.
Money? I am positing both independent wealth and the ability to get a degree. Just the degree isn't sufficient unless you have millions of dollars, as a rough bound. Most terrorists are somewhat broken individuals; they are unlikely to go to all that bother or stick it out.
Please do not try to bait people into explaining in detail why this particular thing is easier than it looks.
Is it really baiting? For the majority of nitro chemistry, you take something organic, some nitric acid, some sulfuric acid as catalyst, and the resulting thing will probably make a nice boom. The tricky part is getting the stuff to make boom when you tell it to. Which requires reagents with high purity. And the guys at Merck do know what to look for if someone starts making purchases. And it is not a field in which you can learn from your mistakes - both in production and procurement.
We have had total synthesis of cocaine for more than a century. The market is huge - and yet it is cheaper and easier to grow it in Bolivia and ship it to Europe and the US than to make it domestically with high purity and untraceably.
Making terrorist-related stuff is easy. But it is often a many-step process with a complicated supply chain. And every step is one where you could draw some unwanted attention. Or kill yourself.
Any man able to lone-wolf a terrorist attack of the kind safetyists fear won't be one who needs ChatGPT guidance.
Yeah I'm not at all concerned about chemical weapons.
Is this bait? This was my honest assessment.
Hey, I'm not a biologist, and you might be right (...although I don't know why you listed "process" and "skillset" as not being knowledge-based?). But are you willing to bet civilization on it? The stakes are pretty high here, so I think it's fair to raise the burden of proof for "this is actually hard" beyond the normal level of an Internet argument.
Note that entire nations have tried and failed to create nuclear weapons for 80 years, which is good evidence that it's genuinely hard. Meanwhile, it's conceivable (if not proven) that a worldwide pandemic spread inadvertently from a small biolab in Wuhan. The two levels of effort are orders of magnitude apart.