This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Scott briefly observes, "The only thing about COVID nobody talks about anymore is the 1.2 million deaths."
A better comparison for 1.2 million Americans dying would be the Spanish Flu: an estimated 675,000 Americans died, out of a total population estimated at around 106,000,000. (The 2020 population was estimated at around 331,500,000.)
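A quick per-capita comparison, using the figures above (a rough sketch; both death tolls and both population figures are themselves estimates):

```python
# Back-of-envelope per-capita mortality, using the estimates quoted above
spanish_flu_deaths, pop_1918 = 675_000, 106_000_000
covid_deaths, pop_2020 = 1_200_000, 331_500_000

print(f"Spanish Flu: {spanish_flu_deaths / pop_1918:.2%} of the US population")  # ~0.64%
print(f"COVID-19:    {covid_deaths / pop_2020:.2%} of the US population")        # ~0.36%
```

So in per-capita terms the Spanish Flu was roughly 1.8x as deadly, even though COVID's absolute toll is nearly twice as large.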
One problem I have with the online debates about COVID policy is that there's no clear counterfactual. 2021 deaths were higher than 2020 deaths, which is bad for arguments that containment policies were only protecting the most vulnerable at the expense of the general population, because the most vulnerable had already died disproportionately in 2020 and clinical management had improved. It's possible that a different set of policies would have resulted in disproportionately more QALYs lost by lower-risk demographics, due to the non-linear dynamics of disease transmission (not to mention mutation rates). I don't really care to defend any particular policy, since there were a lot of avoidable mistakes, but I think the criticism should be more specific and measured.
(Edit: Scott's "Lockdown Effectiveness: Much More Than You Wanted To Know" was published July 1, 2021 - does anyone know if there's been much change in the understanding of NPI effectiveness since?)
I dislike how he brushes over the lab-leak question. That should've been the real story; it's more important than all the other factors, and especially more important than feeling sad about the death toll.
Nothing was learnt from COVID. Literally nothing: gain-of-function research is still continuing. Everyone knows that gain-of-function research caused this disaster, but nobody can be bothered to do anything serious about it. Trump has frozen federal funding for gain-of-function work, and a funding freeze is not remotely proportionate to a megadeath machine.
https://www.dailymail.co.uk/health/article-14711269/ebola-lab-placed-shutdown-halting-disease-research.html
This is a BSL-4 lab, by the way - America's top people. Wuhan was BSL-3. These doctors have been behaving like clowns with the most dangerous technology on the planet; there's no sign of professionalism commensurate with the danger of their work. The acceptable number of lab leaks is zero, the same as the acceptable number of accidental nuclear strikes. The AI community seems to care more about bioweapon risk; it's a big part of the whole AI safety rhetoric. But why should anyone care whether AIs can synthesize bioweapons when the experts are already doing it so carelessly?
This stuff should be done, if and only if it's absolutely necessary, out on South Georgia island in the sub-Antarctic or somewhere comparably remote, with a huge mandatory quarantine period. Otherwise, anyone who tries to do gain-of-function work - especially with humanized mice, like they were doing for COVID (as Daszak boasted about in his tweets) - should be treated like Osama Bin Laden, with special forces coming in to shoot them on sight.
The right of scientists to publish cool papers and do interesting research in convenient locations does not come above the right to life, freedom, and property of tens or hundreds of millions of people.
Nearly all of us also want GoF shut down, to be clear.
There is, however, some significant difference between "a vaccine-resistant smallpox pandemic", as bad as that would be, and the true final form of bioweapons that a superintelligent AI could possibly access.
The absolute best case of what that looks like, as in "we know 100% that this can be done, we just don't know how yet", is an incompatible-biochemistry alga with a reduced need for phosphate and a better carbon-fixer than RuBisCO (we know RuBisCO is hilariously bad by the standards of biochemistry; C4 and CAM plants have extensive workarounds for how terrible it is, because natural selection can't just chuck it out and start over). Release this and it blooms like crazy across the whole face of the ocean (not limited to upwelling zones; natural algae need the dissolved phosphate in those, but CHON can be gotten from water and air), zooplankton don't bloom to eat it because of the incompatible biochemistry, CO2 levels drop to near-zero because of the better carbon fixation, all open-air crops fail, and you get Snowball Earth. Humanity would probably survive for a bit, but >99% of humans die pretty quickly - and of course the AI that did it is possibly still out there, so focussing only on subsistence plausibly gets you knocked over by killer robots a few years later.
Medium-case is grey goo.
Worst-case is "zombie wasps for humans"/"Exsurgent Virus"; an easily-spread infection that makes human victims intelligently work to spread it. To be clear, this means it's in every country within a week of Patient Zero due to airports, and within a couple more weeks it's worked its way up to the top ranks of government officials as everyone prioritises infecting their superiors. Good. Luck. With. That.
It is possible for things like normal GoF to be extremely bad and yet still a long way from the true, horrifying potential of the field.
I agree that shutting GoF down would be good, and also that COVID was very far from the upper end of the badness scale.
But I have to be contrary here.
All of the algae in the world, combined, pull down a total of about 2e14 kg of CO2 from the atmosphere per year. The atmosphere as a whole holds about 2e15 kg of CO2. All living things on Earth, combined, contain about 5e14 kg of carbon. So you're positing a new species which rapidly becomes the largest pool of biomass on Earth over the course of a decade or more (probably much more, since carbon capture gets harder as CO2 concentration decreases), and during all that time, nothing natural or engineered figures out how to eat it.
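To make the timescale explicit, here's the same arithmetic as a sketch (all inputs are the order-of-magnitude figures quoted above, and this ignores the slowdown as CO2 concentration falls):

```python
# Order-of-magnitude drawdown timescale, using the figures quoted above
atmospheric_co2_kg = 2e15          # total CO2 in the atmosphere (estimate)
algal_drawdown_kg_per_year = 2e14  # CO2 fixed per year by all algae combined (estimate)

# Even if a new alga matched the drawdown of all existing algae combined,
# emptying the atmosphere would take on the order of a decade.
years = atmospheric_co2_kg / algal_drawdown_kg_per_year
print(f"~{years:.0f} years at the current total algal drawdown rate")  # ~10 years
```

That decade-plus figure is where the "nothing figures out how to eat it in all that time" incredulity comes from.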
I don't buy it. I think using a biological agent to permanently wipe out the biosphere is a much harder problem than either "kill all humans" or "wipe out the biosphere by any means possible, including but not limited to Very Large Rock Dropped From Very High Up™".
No, I'm positing that it does so faster than that. Algal blooms are fast; they're just limited by nutrients to small areas. Here, the entire ocean can support a max-density algal bloom.
And no, this wouldn't permanently wipe out the biosphere. Life would survive and eventually recover. Even without something evolving to eat the alga (and something would, although it'd likely take a while), it probably wipes itself out from lack of carbon and/or the oceans freezing over, and eventually the dead algae on the seafloor get subducted, incinerated, and re-released as CO2 via volcanoes (and there are quite a lot of reservoirs of life that are shielded from "oh noes, the CO2 is gone" on quite-long timescales). As noted, this probably wouldn't even be enough to wipe out humanity by itself, because we could build closed biospheres not subject to being leeched and top up whatever leaks did occur with coal-burning power stations. We'd lose most of humanity, since we couldn't remotely build enough of those fast enough to support the current population, but we wouldn't quite be wiped out absent further disruption (e.g. chaos from starving mobs preventing or destroying the closed biospheres, industrial collapse leaving us unable to do maintenance, or killer robots showing up).
Ah, I missed the bit where the goal of this wasn't to wipe out the entire biosphere. In that case, though, I don't particularly see how this is all that much scarier than vaccine-resistant smallpox / airborne HIV / whatever your default human nightmare pathogen is (unless you're not a human; if you're not a human, this is indeed much worse).
Vaccine-resistant smallpox would not be anywhere near 99% mortality; smallpox is deadly, but not that deadly (even if everyone got it). We did, after all, live with it in the Old World from the Iron Age or so.
Airborne HIV would suck pretty horribly. I imagine there'd be more survivors, though, and it'd take much less time to recover once it died out. You'd need something like "airborne HIV with anthrax spores" to be worse. I won't say that's impossible, but GoF and even deliberate weaponisation aren't going to get there, since AFAIK there aren't any lifelong-infection viruses without an envelope (enveloped virus = negligible environmental persistence, because envelopes aren't all that stable); you'd need a development project aimed at de novo virus synthesis and targeting those properties, AIUI.
Good point; it was wrong to say that AI bioweapon risk isn't worth caring about. ASI ought to be so strong that it doesn't need bioweapons, though; I think it could wrap us around its finger such that we serve its will without any biowarfare. Just dominating the internet would be civilization-scale parasitism like your wasps: it would be inside our entire command-and-control structure and in a position of assured dominance.
I'm more sanguine about this stuff now, and not because it's wrong. It's because there are essentially infinitely many ways for a superintelligent AI to wipe out the human race - these are just the ways we can think of, and it's going to be much smarter than us. If it happens, it'll happen anyway; any safeguards will be redundant. It's like a bear trying to enumerate the ways a human could kill a fox - it can come up with a method (and a feasible one), but that's one of a thousand ways a smart human could come up with.
I mean, yeah, obviously the solution to AI risk is to not build hostile superhuman AI. Just pointing it out.
@RandomRanger I figure this does double duty as a reply to you.
Agreed. But unfortunately...
Superhuman AI is probably an inevitable consequence of the evolution of intelligence. There is a good chance the solution to the Fermi question is staring us in the face / we're about to find out.
If you mean the Fermi Paradox, it's... complicated. If you're talking about the Great Filter, no, AI catastrophe cannot be the Great Filter because the AI itself still counts as being an alien civilisation for Fermi Paradox purposes.
To get AI being an answer to the Fermi Paradox, you have to go into Doomsday Argument territory, and also assume FTL. I laid out the case you can make here. Whether to take Doomsday Arguments seriously is dubious.
Only if we assume that AI not only shares the broad goals of human civilization (expansion, growth, conquest of space) but also executes them effectively. Smart humans still make mistakes. A smart ASI is still likely to make mistakes, and even if it makes them much less frequently, those mistakes are likely to be far greater in consequence.
Imagine a global benevolent ASI, designed to achieve human flourishing, that accidentally exterminates the human race. This sounds ridiculous but is perfectly possible, and perhaps even likely on a long enough timeframe, because of the extreme power such an ASI might possess. That is a closed-loop solution to the Fermi paradox. An ASI might even develop a moribund or nihilistic tendency that leads to the above with no desire for recovery (by e.g. cloning or remaking human civilization).
All of those sound bad, but also very speculative?
We have a recent worked example of what can happen with GoF (true regardless of the actual origins of COVID-19); shouldn't we prioritize making sure that doesn't happen again over "stop Skynet"/"Butlerian Jihad Now" type stuff?
It's like hearing that Ford Pintos can explode due to their fuel tank design and responding with "OMG, cars can explode! Terrorists might start planting car bombs, I should work on anti-terrorism!"
The last one is very speculative; I have a suspicion it might be impossible. The middle one is somewhat less speculative; something akin to it is probably possible, but there are degrees of success, and you're probably looking at "eats organic matter at a foot a day" rather than the "lol, eats planet in minutes" sci-fi shit. The first one is proven possible by PNA, the aforementioned terribleness of RuBisCO, and the wide variety of possible biomolecules, only some of which are used. Anybody who knows second-to-third-year biochem knows that that design is 100% chemically and physically possible; the roadblock is the incredible difficulty of designing a full biochemistry ex nihilo (it'll be a while before anyone succeeds at this without AI aid, although I'd still rather nobody tried). I get that not everyone knows this, but seriously, this is uncontroversial in terms of "is this possible, given a blueprint?"; it is. That's why I said it's the best case of "what the final form of bioweapons looks like": they can be worse, but they can't be better.
I mean, I'd rather 200 million people die next year from a pandemic than everyone die 10 years from now. I'd rather that even if I'm one of the 200 million. I'm not seeing the issue.
Even with AI aid, I don't think anyone would bother with this. You know something about biochem, and I know something about numerics. By the time you have the compute to simulate, without any experiment, all the candidate molecules reliably enough to avoid a catastrophic error in any of them, you have long ago cracked every encryption ever made, found the vacuum instabilities if there are any, etc. Why bother with even grey goo at that point?
As Yudkowsky impolitely notes, it's not like AI aid means you can't also do experiments.
I understood you to be talking about the one-shot?
Because I said "ex nihilo"? I was making a distinction between "modify an existing biochemistry in various ways" and "invent a wholly-new biochemistry" (the latter is far harder), not talking about research methods.
...maybe you thought I meant in silico? I didn't.
The issue is that you are prioritizing problems that are arguably possible (well, one of them is) but have never manifested in even a directionally similar way, over one that just happened a few years ago, the repercussions of which were quite severe and are still being felt.
I resisted "millenarian cultist" analogies so as not to be uncharitable, but you didn't want to talk about Ford Pintos, so fuck it:
It's certainly possible that Jesus will descend and start casting the goats (that's you) into a lake of fire at any moment -- this is roughly the worst thing that could happen (for you); shouldn't you prioritize Christian worship more highly than (I assume) you do?
I have actually spent years learning biochem, and I have a minor degree of fame under my real name due to my precociousness in doing so. Biochem is not a spook, and understanding of it is actually meaningful. It is... irritating to have some random just go "nuh-uh". If I could give you the kind of feel for biochem that would lead you to see all of this as obvious, I would; I already gave you the quick rundown and you dismissed it.
I think the policy of "ignore all dangers until they've happened at least once" is not a very good one even for normal dangers, and it's practically a reductio ad absurdum in the case of apocalyptic dangers: because apocalyptic dangers kill off humanity, and are thus impossible to look back on, the policy reduces to "ignore all apocalyptic dangers", which, if any of them are real, means you sleepwalk into them.
A good-faith survey of the current world basically rules out interventionist deities being active on Earth. Deism is quite plausible (note the identity of deism with the simulation hypothesis), but while it is plausible that such a creator might judge us after death, there is basically no way to tell what the grading rubric actually is. Maybe it's the Christian God. Maybe it's Allah who'll smite me for idolatry if I think Jesus is divine. Maybe God's a social justice warrior. Maybe God agrees with Jack LaSota. Maybe God is testing for ability to act rationally about X-risk. I dunno, and for the most part the big question mark cancels out to "this shouldn't affect how I live my life" (because for every rubric there's an anti-rubric which cancels it out; this is the problem with Pascal's Wager if you aren't privileging the "Christian God" hypothesis, because there might also be an anti-Christian-God who punishes Christians).
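A minimal sketch of that cancellation in expected-value terms (my framing, not anything established in the thread): give rubric $R$ credence $p$ and payoff $u$ for living by it, and give the mirror-image anti-rubric the same credence $p$ with payoff $-u$ for the same conduct. Then

$$\mathbb{E}[\text{afterlife payoff}] = p \cdot u + p \cdot (-u) = 0,$$

so the afterlife term drops out of every decision and only worldly consequences remain - unless you have evidence privileging some rubric over its mirror image.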