Culture War Roundup for the week of May 19, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Scott briefly observes, "The only thing about COVID nobody talks about anymore is the 1.2 million deaths.

That’s 1.2 million American deaths. Globally it’s officially 7 million, unofficially 20 - 30 million. But 1.2 million American deaths is still a lot. It’s more than Vietnam plus 9/11 plus every mass shooting combined - in fact, more than ten times all those things combined. It was the single highest-fatality event in American history, beating the previous record-holder - the US Civil War - by over 50%. All these lives seem to have fallen into oblivion too quietly to be heard over the noise of Lab Leak Debate #35960381.

Maybe it’s because they were mostly old people? Old people have already lived a long life, nobody can get too surprised about them dying. But although only a small fraction of COVID deaths were young people, a small fraction of a large number can still be large: the pandemic killed 250,000 <65-year-old Americans, wiping out enough non-seniors to populate Salt Lake City. More military-age young men died in COVID than in Iraq/Afghanistan. Even the old people were somebody’s spouse or parent or grandparent; many should have had a good 5 - 10 years left.

Usually I’m the one arguing that we have to do cost-benefit analysis, that it’s impractical and incoherent to value every life at infinity billion dollars. And indeed, most lockdown-type measures look marginal on a purely economic analysis, and utterly fail one that includes hedonic costs. Rejecting some safety measures even though they saved lives was probably the right call. Still, I didn’t want to win this hard. People are saying things like “COVID taught us that scientists will always exaggerate how bad things will be.” I think if we’d known at the beginning of COVID that it would kill 1.2 million Americans, people would have thought that whatever warnings they were getting, or panicky responses were being proposed, were - if anything - understated."

A better comparison for 1.2 million Americans dying would be the Spanish Flu: an estimated 675,000 Americans died, out of a total population estimated at around 106,000,000. (The 2020 population was estimated at around 331,500,000.)
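To put the two tolls on the same footing, here's a quick per-capita sketch using just the estimates quoted above (all figures approximate):

```python
# Per-capita comparison of US deaths: Spanish Flu (1918) vs COVID-19.
# All inputs are the rough estimates quoted in the comment above.

spanish_flu_deaths = 675_000
us_pop_1918 = 106_000_000

covid_deaths = 1_200_000
us_pop_2020 = 331_500_000

flu_rate = spanish_flu_deaths / us_pop_1918    # fraction of 1918 population
covid_rate = covid_deaths / us_pop_2020        # fraction of 2020 population

print(f"Spanish Flu: {flu_rate:.2%} of the 1918 population")
print(f"COVID-19:   {covid_rate:.2%} of the 2020 population")
print(f"Per-capita ratio (flu / COVID): {flu_rate / covid_rate:.2f}")
```

So by these numbers the Spanish Flu killed roughly 0.64% of the population versus roughly 0.36% for COVID - about 1.8 times deadlier per capita, even though COVID's absolute toll is larger.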

One problem I have with the online debates about covid policy is that there's no clear counterfactual. 2021 deaths were higher than 2020 deaths, which is bad for arguments that containment policies were only protecting the most vulnerable at the expense of the general population: the most vulnerable had already disproportionately died in 2020, and clinical management had improved. It's possible that a different set of policies would have resulted in disproportionately more QALYs lost by lower-risk demographics, given the non-linear dynamics of disease transmission (and don't forget rates of mutation). I don't particularly care to defend any specific policy, since there were a lot of avoidable mistakes, but I think the criticism should be more specific and measured.

(Edit: Scott's Lockdown Effectiveness: Much More Than You Wanted To Know, published July 1, 2021 - anyone know if there's been much change in the understanding of NPI effectiveness?)

I dislike how he glosses over 'lab leaks'. That should have been the real story; it's more important than all the other factors, and especially more important than feeling sad about the death toll.

Nothing was learnt from COVID. Literally nothing: gain-of-function research is still continuing. Everyone knows that gain-of-function research caused this disaster, but nobody can be bothered to do anything about it; Trump has merely frozen federal funding for gain-of-function work. A funding freeze is not remotely proportionate for the megadeath machine.

https://www.dailymail.co.uk/health/article-14711269/ebola-lab-placed-shutdown-halting-disease-research.html

Speaking anonymously, an HHS source revealed that one of the researchers poked a hole in the other's protective equipment during a vicious 'lovers' spat'.

Dr Connie Schmaljohn, the lab's director, was also placed on administrative leave after she allegedly failed to report the incident to other officials.

In a previous incident in May 2018, anthrax may have been accidentally released from the boiler room at one of the labs in Fort Detrick, Maryland, and into a nearby river where people were planting lilypads. No illnesses were reported as a result of the potential release.

This is a BSL-4 lab, by the way - America's top people. Wuhan was BSL-3. These doctors have been behaving like clowns with the most dangerous technology on the planet. There's no sign of any professionalism commensurate with the danger of their work. The acceptable number of lab leaks is zero - the same as the acceptable number of accidental nuclear strikes. The AI community seems to care more about bioweapon risk, that's a big part of the whole AI safety rhetoric. But why should anyone care about whether AIs can synthesize bioweapons when the experts are already doing it so carelessly?

This stuff should be done out on South Georgia Island in the sub-Antarctic, or somewhere similarly remote with a huge mandatory quarantine period - and only if it's absolutely necessary. Otherwise, anyone who tries to do gain-of-function research, especially with humanized mice like they were doing for COVID (as Daszak boasted about in his tweets), should be treated like Osama Bin Laden, with special forces coming in to shoot them on sight.

The right of scientists to publish cool papers and do interesting research in convenient locations does not come above the right to life, freedom, and property of tens or hundreds of millions of people.

The AI community seems to care more about bioweapon risk, that's a big part of the whole AI safety rhetoric. But why should anyone care about whether AIs can synthesize bioweapons when the experts are already doing it so carelessly?

Nearly all of us also want GoF shut down, to be clear.

There is, however, some significant difference between "a vaccine-resistant smallpox pandemic", as bad as that would be, and the true final form of bioweapons that a superintelligent AI could possibly access.

The absolute best-case of what that looks like, as in "we know 100% that this can be done, we just don't know how yet" is an incompatible-biochemistry alga with reduced need for phosphate and a better carbon-fixer than RuBisCO (we know RuBisCO is hilariously bad by the standards of biochemistry; C4 and CAM plants have extensive workarounds for how terrible it is because natural selection can't just chuck it out and start over). Release this, it blooms like crazy across the whole face of the ocean (not limited to upwelling zones; natural algae need the dissolved phosphate in those, but CHON can be gotten from water + air), zooplankton don't bloom to eat it because of incompatible biochemistry, CO2 levels drop to near-zero because of better carbon fixation, all open-air crops fail + Snowball Earth. Humanity would probably survive for a bit, but >99% of humans die pretty quickly - and of course the AI that did it is possibly still out there, so focussing only on subsistence plausibly gets you knocked over by killer robots a few years later.

Medium-case is grey goo.

Worst-case is "zombie wasps for humans"/"Exsurgent Virus"; an easily-spread infection that makes human victims intelligently work to spread it. To be clear, this means it's in every country within a week of Patient Zero due to airports, and within a couple more weeks it's worked its way up to the top ranks of government officials as everyone prioritises infecting their superiors. Good. Luck. With. That.

It is possible for things, like normal GoF, to be extremely bad and yet still be a long way from the true, horrifying potential of the field.

I’m more sanguine about this stuff now, and not because I think it’s wrong. It’s because there are essentially infinite ways for a superintelligent AI to wipe out the human race - these are just the ones we can think of, and it’s going to be much smarter than us. If it happens, it’ll happen regardless; any safeguards will be redundant. It’s like a bear trying to enumerate the ways a fox could be killed - it can come up with a method (even a feasible one), but it’s one of a thousand methods a smarter human could come up with.

All of those sound bad, but also very speculative?

We have a recent worked example of what can happen with GoF (true regardless of the true origins of covid-19); shouldn't we prioritize making sure that doesn't happen again over "stop Skynet"/"Butlerian Jihad Now" type stuff?

It's like hearing that Ford Pintos can explode due to their fuel tank design and responding with "OMG, cars can explode! Terrorists might start planting car bombs, I should work on anti-terrorism!"

The last one is very speculative; I have a suspicion it might be impossible. The middle one is somewhat less speculative; something akin to it is probably possible, but there are degrees of success and you're probably looking at more like "eats organic matter at a foot a day" than the "lol eats planet in minutes" sci-fi shit. The first one is proven possible by PNA, the aforementioned terribleness of RuBisCO, and the wide variety of possible biomolecules, only some of which are used. Anybody who knows second-to-third-year biochem knows that that design is 100% chemically and physically possible; the roadblock is the incredible difficulty of designing a full biochemistry ex nihilo (it'll be a while before anyone succeeds at this without AI aid, although I'd still rather nobody tried). I get that not everyone knows this, but seriously, this is uncontroversial in terms of "is this possible, given a blueprint?"; it is. That's why I said it's the best-case of "what the final form of bioweapons looks like"; they can be worse, but they can't be better.

shouldn't we prioritize making sure that doesn't happen again over "stop Skynet"/"Butlerian Jihad Now" type stuff?

I mean, I'd rather that 200 million people die next year from a pandemic over everyone dying 10 years from now. I'd rather that even if I'm one of the 200 million. I'm not seeing the issue.

The issue is that you are prioritizing problems that are arguably possible (well, one of them is) but have never manifested in even a directionally similar way, over one that just happened a few years ago, the repercussions of which were quite severe and are still being felt.

I resisted "millenarian cultist" analogies so as not to be uncharitable, but you didn't want to talk about Ford Pintos, so fuck it:

It's certainly possible that Jesus will descend and start casting the goats (that's you) into a lake of fire at any moment -- this is roughly the worst thing that could happen (for you); shouldn't you prioritize Christian worship more highly than (I assume) you do?