
Culture War Roundup for the week of January 15, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Apparently, a lab in China has created a virus with a 100% kill rate in humanized mice. Given that there's also a decent chance that COVID was a lab leak, doing this sort of thing is extremely dangerous.

I'm not sure how best to remove the incentives to do things like this, but ceasing to fund this variety of research (it looks like the US ended one program that was pushing this sort of thing last year) and instituting some form of legal liability for those who do it, especially if they dispose of it badly, seem like good decisions.

Extremely dangerous diseases are among the few threats that are both disastrous to humanity (unlike climate change) and relatively likely (unlike a massive asteroid hitting Earth). Developing them is also not excessively difficult. This is probably the closest thing we have so far to Bostrom's black ball metaphor. People joke about Yudkowskian airstrikes on data centers; would airstrikes on labs be similarly warranted? More seriously, though, there should be far more effort put into preventing this sort of thing than there currently is.

Bostrom's concerns probably deserve more attention than they get. The ideal is simply not to develop technology in specific fields to the point where killing millions becomes cheap and easy. Of course, the tradeoff is totalitarianism, a terror of its own.

EDIT: Some of the comments have argued, relatively convincingly, that this particular news story was overblown and misleading.

This post claims that this is a virus found naturally in pangolins and was not bred for increased virulence (hence not gain of function).

https://www.writingruxandrabio.com/p/the-latest-killer-virus-panic

Yeah, I should have looked more carefully instead of jumping on it. (And I added that edit at the end too late.)

Uh, isn't a 100% kill rate by definition not a global threat because it'll burn itself out before becoming a pandemic?

No. As long as the disease is infectious enough before major symptoms hit, a 100% fatality rate isn't much of an obstacle. See the Black Death for an obvious historical example of a disease with a massive mortality rate that still caused pandemics (at a time when travel, and thus spread, was much rarer than today).

The reason e.g. Ebola hasn't and can't, in its current form, become a mass pandemic outside some African areas is that it's really quite bad at spreading and wouldn't cause even local epidemics if not for some truly idiotic customs.

Well, it depends on the speed of infection, but yes, killing people faster makes spread harder.

My understanding of the problem with banning "gain of function" research is that any research on real viruses* is either (1) intentional bio-weapons research, which we already have policies around (and they generally read "Don't."), or (2) something that could reasonably be called "gain of function", because you can't do anything with viruses without allowing them to propagate and therefore evolve. And for (2) we already have rules about which experiments require which BSL. Scientists tend not to approve of the proposal "don't study real viruses".

*As opposed to pseudovirus models of some kind, where you've ripped out most of the virus so you're pretty sure it's not dangerous. But, oh, to do that you've changed the function of the virus. Has it gained function? Who's defining "gain" here? I'm truly not trying to play semantic games: if only some experiments are "gain" of function, what's your process for deciding which ones those are? What happens if they're wrong? How is this any different from the current system?

There are more or less dangerous incarnations of gain-of-function. The most dangerous is testing animal viruses on humanized mice, evolving them to be proficient at infecting human tissue. This is of that kind, as was the stuff Daszak was doing in Wuhan. He admitted as such in a tweet: https://usrtk.org/wp-content/uploads/2023/09/50-SARSr-CoVs.png

There's no need for strict, legalistic accuracy: the US has drone-struck and tortured people on much hazier grounds, like 'some Afghans said he was a terrorist' or 'there were weird-looking tubes in his car'. The definition of terrorism swings around like a weather-vane in a hurricane, depending on who's serving whose interests at any given point. You can be arrested for thoughtcrime if you stand still too long, silently praying, near a UK abortion clinic. Trivial matters get grossly authoritarian treatment.

Daszak, Bat Lady and the humanized mice brigade aren't serving anyone's interests. They're not creating vaccines for real viruses killing in real time, they're conjuring up new threats. The best interests of humanity are served by liquidating them and sending a clear, unambiguous message that this sort of thing is not to be tolerated, no matter what word-games are played it'll lead to a sticky end.

They're not creating vaccines for real viruses killing in real time, they're conjuring up new threats.

Those threats exist, they just haven't reached the human population yet. It sounds like your position is that surveillance for pandemic-potential viruses shouldn't be done, as you believe it's not worth the risk, and we should instead wait for the spillover to happen before studying a virus? Does this include not studying known pandemic-potential viruses like H5N1? Who defines what counts as a distinct virus (the link you gave talks about "SARS-related CoVs", after all, and was posted well after the SARS spillover)? Actually, until you've collected and analyzed the viruses, how do you even know whether there are novel ones in your sample; should we stop collecting viruses from non-humans altogether? What about research on viruses affecting agriculture (see H5N1)? Or maybe I'm drawing the line at the wrong place and you think no research should be done with humanized animal models, in which case I don't know how you're going to develop any vaccines.

Those threats exist, they just haven't reached the human population yet.

No, they don't exist, or at least they didn't exist until they were artificially created by the GoF brigade. An animal virus is not a threat (except to farmers); an animal virus that infects humans is a threat.

surveillance for pandemic-potential viruses shouldn't be done

It should be done but surveillance does not equal creating pandemic-potential viruses, which is what they did and what they're still doing all over the world.

we should instead wait for the spillover to happen before studying a virus?

Yes, this is how cause and effect works. We can only study a virus after it emerges.

Actually, until you've collected and analyzed the viruses, how do you even know whether there are novel ones in your sample; should we stop collecting viruses from non-humans altogether?

Collect them, study them, don't stick them in humanized mice and make them dangerous to humans!

you think no research should be done with humanized animal models

Go read my last paragraph again: Daszak wasn't conducting vaccine research, because vaccine research can only be done after the virus emerges. The idea that these people are going to predict which viruses emerge and have vaccines ready for them is insane. There are so many potential combinations of dangerous viruses, and not nearly enough money to develop vaccines for them all.

And even if they did pick the right viruses and do preliminary research, it still isn't helpful. Vaccine development is quite fast already, the issue is with safety, testing, mass production, logistics and politics - not the basic science.

I'm far from an expert (and I doubt anyone else in this thread is either), but I'm not sure I really agree with your "extremely dangerous" assessment. Lots of things have a 100% kill rate. Like, congratulations, they've reinvented rabies? A virus that represents a serious risk to society needs to combine a number of unlikely factors, and "killing the host" is probably the easy part. (Ironically, after a certain point, high lethality makes a virus less threatening - a virus's host needs to survive to spread it on!) To truly threaten civilization, you'd have to combine it with a long asymptomatic but highly contagious incubation period.
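To make that tradeoff concrete, here's a toy compartmental-model sketch in Python (my addition; every parameter is a made-up illustration, not anything from the article). Hosts that die faster leave the infectious pool sooner, so past a point, extra lethality shrinks the outbreak rather than growing it:

```python
# Toy SIR-style model: infected hosts transmit at rate beta and leave the
# infectious pool by recovering or dying. All numbers are invented.

def attack_rate(beta=0.3, recovery=0.1, death=0.0, days=1000, dt=0.1):
    s, i = 0.999, 0.001  # susceptible / infected fractions of the population
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        removals = (recovery + death) * i * dt  # recoveries plus deaths
        s -= new_infections
        i += new_infections - removals
    return 1.0 - s  # fraction of the population ever infected

for death in (0.0, 0.1, 0.5, 1.0):
    print(f"death rate {death:.1f}/day -> {attack_rate(death=death):.1%} ever infected")
# Once recovery + death exceeds beta, each case infects fewer than one other
# host on average and the outbreak fizzles out.
```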

Of course, because the media are idiots, the article you linked mentions the "surprisingly rapid" death of the mice as if that's supposed to make it more, not less, scary. Ah, journalists, never change.

I'll preface this by saying I agree with the concerns around GoF research and that it is a real problem.

Now, to add some context: This is the preprint in question.

Don't trust '100% mortality' hyped up by a news org; it's the equivalent of hack tech writers claiming a '100% cancer cure rate' in some mouse model. You can get '100% mortality' with a high enough dose of relatively benign rhinoviruses that just cause colds in humans. In this preprint, the authors infected with 500,000 PFUs (plaque-forming units; supposedly one PFU = one virus). This may not bring much comfort to people, but the LD50 of a mouse-adapted strain of COVID is 1,000 TCID50 (similar to PFUs), more than two orders of magnitude lower. It's hard to get a direct comparison, but here's another paper reporting an LD50 of 1,000 PFUs in ferrets.
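Back-of-the-envelope, treating PFU and TCID50 as roughly interchangeable (imperfect, as noted):

```python
import math

challenge_dose = 500_000  # PFU used in the preprint
reference_ld50 = 1_000    # TCID50, mouse-adapted COVID strain (rough proxy)

ratio = challenge_dose / reference_ld50
print(f"{ratio:.0f}x the reference LD50 (~{math.log10(ratio):.1f} orders of magnitude)")
# -> 500x the reference LD50 (~2.7 orders of magnitude)
```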

You're probably not going to die next year of GX_P2V infection. Beware articles in the New York Post throwing red meat to the base.

I don't have time to do this topic justice, but as for 'banning GoF research' - this would not have been classified as GoF research under most paradigms. Wild virus isolates were passaged in cell culture; this is simply how you propagate virus for study. Generally, propagation in vitro attenuates viruses and makes them less pathogenic, modulo some cases (admittedly similar to this one) where you may pass viruses adapted to one species in cells from another.

We produce a lot of vaccines and gene therapy vectors this way, although even those examples contain multitudes. Maybe you want a carveout for very well understood processes that we've been doing for years, but you'd have to think very carefully about crafting it.

We produce a lot of vaccines and gene therapy vectors this way

The biological process may be similar, but there is a big difference in risk profile between taking a human pathogen and passing it through non-human cells (making it less pathogenic to humans), and taking an animal pathogen and passing it through human cells (making it more pathogenic to humans).

OK, I'll preface this by saying I think GOFR should be banned for all the reasons listed in these replies.

That said, these kinds of stories strike me as IC psyops designed to paint China as an enemy. EcoHealth Alliance, Fauci, and the rest of the government (including that same IC) are just as much to blame as China. And given that these people are operating out of the USA, we theoretically have more recourse against them than against China.

Is China continuing GOFR? I have no doubt that they are. But this story sounds like bullshit for all the other reasons listed in the replies as well.

But this story sounds like bullshit for all the other reasons listed in the replies as well.

This was posted to biorxiv by the Chinese themselves, with the lead author there in the comments claiming it's 'not GOF'.

https://www.biorxiv.org/content/10.1101/2024.01.03.574008v1.full

Are you suggesting it's all an elaborate psyop and

Lihua Song

Beijing Advanced Innovation Center for Soft Matter Science and Engineering, College of Life Science and Technology, Beijing University of Chemical Technology, Beijing, China

had nothing to do with the manuscript?

I'm suggesting that the NY Post article about the report is a routine psyop: a journalist who either (1) sees which way the wind is blowing re: China and writes a clickbait article, (2) is following a general policy that was implemented by an IC asset in the media and explicitly designed to shift our attitudes on China, or (3) is a bona fide IC asset following specific instructions to write this story.

I’ll assume my first post was unclear. Hopefully this helps. And to be clear, I think it’s likely option 2 or 3.

Does this sound outlandish or convoluted to you?

I mean, duh.

Journalism is psyops almost by definition.

The story itself - the Chinese still doing stupid shit - is, however, not bullshit.

Is China continuing GOFR? I have no doubt that they are.

I don't know. I wish we at least knew whether this research was conducted with better biosafety.

A general involved with 'biodefense' died suddenly in spring of 2020 at age 53, so it may be that steps were taken to ensure personal responsibility. I doubt Fauci will ever see a jail cell.

It's "not GOF" only under a narrow legalistic definition of GOF, but isolating the deadliest natural strains of viruses is usually a prelude to using them as an "ingredient" in GOF/biowarfare research.

Why is this research still legal? Shouldn't the lesson from covid (regardless of the lies told to the public about bat soup or whatever) have been to close down GOF forever? Congrats, you've made the deadliest virus ever and made the future even more unsafe for the whole world. Thanks, dipshits.

How do you think it would have been made illegal? Treaty ban? Unspoken coordination? Vigilante Justice?

The UN is the closest thing we have to multinational cooperation, and look how effective it is. For anything novel, I'd adjust my expectations downwards.

This is perfectly timed with a recent scottpost on almost the exact same topic which got me to think about it before I saw this post.

As an aside, hopefully this isn't too inflammatory a claim but I've always balked at the "approach" of assigning arbitrary probabilities and using Bayesian fake-math to imbue said arbitrary numbers with some semblance of meaning. I get the impetus but there's already a wonderful thing called a "gut feeling" for that, you can just, like, state what you feel outright, trying to lend more credence to it with (literally!) arbitrary numbers and math comes off as almost comically missing the point. Maybe I don't have the INT required to pick this node in the rationalist skill tree, I admit my level isn't very high, but I completely fail to see how pulling a number out of your ass and using it to have an opinion is in any way better than pulling a ready-made opinion out of your ass, the guiding principle is exactly the same in both cases sans the obfuscation layers.

Anyway, I digress. Disregarding the numbers and probability stuff, the core claim (against learning from "dramatic events", emphasis mine) is concrete enough to be taken on its own merits, the definition of "dramatic events" aside. How much should we update, actually? Is this a severe enough breach of the Masquerade to demand a hardline unilateral response (as with the Ukraine war, for instance), and if not, a breach of what severity would it take for the US public to broadly update and for the US government to actually try taking action? Although I suspect those are two separate questions with different answers.

In my opinion, "gain-of-function delenda est" was already solidly established with COVID, but this, if proven, seems to go a step beyond even that. Given the, uh, issues around the handling of COVID, I've "updated" quite significantly downward with regard to our ability to keep viruses like this in check. Which makes some of Scott's arguments even more perplexing to me:

But it’s even worse when people fail to consider events that have happened hundreds of times, treating each new instance as if it demands a massive update.

As if every instance is somehow made less harmful purely by virtue of the long lineage behind it? The context here is mass shootings (and even then I'm not sure I'm ready to take "mass shootings are normal actually" at face value), but it applies to virus outbreaks just the same; just because COVID happened and I managed to survive it doesn't mean I'm thrilled for a rerun. Scott hedges with "if it happens twice in a row, yeah, that's weird, I would update some stuff", but in my opinion this is plainly bad rhetoric, dangerously close to a slippery slope, with subtle downplaying reminiscent of the political pipeline of "nobody is saying this, you're paranoid" -> "it's just a few [bad actors] on [irrelevant platforms], no big deal" -> "well, there are supporters, but nobody is saying [thing] exactly" -> etc. (At this point there really should be a name for this trick; I'm not aware of one.)

If each new instance is treated as demanding a massive update, then chances are it's a psyop, sure - the 20s saw plenty of those - but regardless of politicking you still have to deal with the consequences of the act itself. Which, in this case, look to be mildly alarming given how much impact the "previous instance" (i.e. COVID) already had. Man, I wish people would care to drum up at least half the hysteria around biotech that currently surrounds AI; at least the former has very direct and obvious risks in the here and now.

Putting a probability on your beliefs is just a healthy tool to get you out of the silly narrative mindset where you get committed to one narrow line of possible events. It gives you a bunch of other useful tools for thinking about uncertain events, like remembering that compounding conditionals should lower your probability rather than raise it. It's not really a substitute for having gut feelings, but it's a very useful set of tools for discussing and reasoning about those gut feelings.
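For what it's worth, the compounding-conditionals point is just the chain rule of probability: every extra condition a story requires multiplies in a factor of at most one. A trivial sketch with invented numbers:

```python
# Invented probabilities for a three-part narrative. The joint probability is
# the product of the chained conditionals, so it can only shrink as the story
# gets more specific.
p_a = 0.4           # P(A)
p_b_given_a = 0.5   # P(B | A)
p_c_given_ab = 0.3  # P(C | A and B)

p_story = p_a * p_b_given_a * p_c_given_ab
assert p_story <= min(p_a, p_b_given_a, p_c_given_ab)
print(f"P(A and B and C) = {p_story:.2f}")  # 0.06, below every single factor
```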

But considering other conditions while weighing the "probability" implies that you're aware of those conditions (since if you aren't, you obviously wouldn't think of them), and since you're already aware of them, they're highly likely to be already "baked in" to the gut feeling/opinion currently residing in your ass. Not to mention that it's eminently possible to pull the opinion out of said ass and then discuss and reason about it; I do this often myself.

I'm probably missing something, but I still fail to see the utility of the numerical approach. What's the point of "calibrating" around some specific number if that number, by design, isn't grounded in reality? As per @philosoraptor below, "garbage in, garbage out": meticulous calibration doesn't negate the possibility of the "origin point" being wildly off the mark in the first place.

People naturally cluster their beliefs about things into these neat little narratives. They see John running and get attached to some particular theory about why he is running. They interpret his pace and the look on his face to mean he must be running from someone or something, and then build out theories about what the pursuer could be. They think extensively about all the different types of pursuers, and that occupies so much of their mind space that they end up hilariously overestimating the likelihood of each of those theories. It's very easy to accidentally discount the possibility that he's just out for an exercise run and made a funny face, way below where it belongs, if you're not careful.

The practice reminds you to think critically about each additional compounding conditional in this way and prevents common failure modes. It fits nicely with the demand to measure both probability and confidence in things like bets. I've seen myself moderate my beliefs in real time when faced with the need to define odds and offer a bet; it's a humbling thing to have happen.

It also breaks you out of the "my team vs their team" mindset. When I assign a probability besides one or zero to something, I've given myself a reasonable out if it doesn't happen. I'm less emotionally invested in some outcome and can more easily resist the pull where each new piece of evidence to the contrary causes me to double down about how it really, if you squint, supports my original position. I think a lot of people who end up sucked into ideological pipelines could have avoided most of their bad ends if they had adopted this. "From the evidence before me, I think I was still right to favor outcome X, but I see now that I was too confident, and maybe evidence Y and Z should be less compelling to me in the future" is a much superior mental state to "No, bullshit, I was always right and there must be some kind of conspiracy to hide the truth". And the latter appears to be a very common occurrence.

Can you make do without these tools? Absolutely. Some people are able to free-solo crazy climbs. But I find it strange that you don't at least recognize their value.

I like that it gets people to plainly state their biases. Sure, they are pretending these are mathematical "priors" and then pretending to perform Bayesian reasoning with them and with new evidence. But merely explicitly stating their built-in biases and what impact they assign to evidence is great.

It's the rare confluence of jivey, fake and useful.

So it's basically just about what you mumble while pounding the nails with the Bayesian hammer? Pretty clever; I guess I can buy that. The extra steps still seem unnecessary to me, but at least I can see the crumbs of utility now.

The common alternative is people not proactively stating their biases or an estimate of how much they think a particular unit of evidence counts.

Given the alternative, this is great. Jivey language. Fake numbers. Fake math. And also a great way for people to clearly state what is otherwise hidden in conversation.

I completely fail to see how pulling a number out of your ass and using it to have an opinion is in any way better than pulling a ready-made opinion out of your ass, the guiding principle is exactly the same in both cases sans the obfuscation layers.

If nothing else it forces you to stay internally consistent, at least on the specific topics the numbers cover. That's more than a lot of people seem able to manage without such tools. Nevertheless, you're not wrong that there can be an element of "garbage in, garbage out".

I'm with you on the "this is just gut feeling with extra steps" observation. I think people were just impressed by the thought that went into the Drake Equation and forgot that it's still just a thought experiment. Unlike other equations with solutions, it has no predictive power.

It's the best tool they have, they don't appreciate the limits of it, and we all know what happens when all you have is a hammer.

Ah, good old gain of function research, the gift that keeps on giving.

Why did we have to end up in the dystopia (well, maybe) where most of the useful biotech/genetic engineering, like germline editing and gene drives, is either stifled in the crib or slowed to a crawl, while something as insanely risky and largely useless as GOFR for lethality is legal and far too easy to do, even for mere CV padding?

The funniest part about this particular debacle is that the paper in question had a paragraph on ethical issues/IRB clearances, and the only thing it included was an assurance, from a Chinese military hospital, that the mice in question were treated humanely.

Right.

That's by far the biggest concern. The only concern worth noting.

I'd bash my head against a wall, but I'm already short on neurons to spare.

¿Por qué no los dos?

The New York Post doesn’t write articles about suppression of germline research. Hell, they probably don’t write ones saying the lab leak was fake, because they have a brand to maintain. Scare stories about whatever example of Chinese research sounds the most dangerous? That’s on brand.

Hold ya horses buddy, what are you, Australian, with your upside down ¿ thingamajigs?

More seriously, I don't care who writes about what, as long as actual governments don't:

  1. Effectively restrict enormously beneficial technologies like genetic engineering for eugenic purposes

  2. Condone or even allow things that have a non-negligible risk of killing billions, like GOFR, which doesn't even have the possible insane upsides of something like AGI.

While public opinion can and does matter, states are capable of ignoring it when they know better.

I've often felt engineered disease is underrated as a human apocalypse scenario. Largely, I think, because it didn't exist when nuclear mass annihilation first entered the public imagination.

In WWII it seems, to my non-expert contrarian eye, that the "good guys" had started to descend into a philosophy where mass murdering "enemy" civilian populations, simply to brute-force attrite the rival society into nothing, was taken to be valid. It's probably a good thing the war ended when it did. Since winners write the history books, and people like to justify "their side", everyone just kind of ignores this, or says it wasn't a big deal, or even tries to justify it. Also, mostly fortunately, nuclear MAD means the taste of it has never since been realized in a protracted war between two fully developed industrial powers whose leaders would again descend into (mass) murderous impatience. But if that did happen, I feel like it would be a countdown until some idiot sociopath in the top brass started suggesting that maybe a strategic disease could be controlled, and that if it could, it would end the conflict with an ease no bomb could match. Disease is a much more efficient killer than bombs, and cost-effective too. The longer a protracted war goes on, the more likely people are to start listening to the idiot.

I don't think that's very likely for normal orthodox war these days because of old fashioned nuclear deterrence. But what about a civil war? Some gambler in an American civil war gets ahold of the disease library in Atlanta. The CPC loses legitimacy and China descends into power struggle chaos.

"Here me out: ethnically targeted diseases. Almost none of our armed forces are [enemy ethnic group]."

Two positive factors here from the point of view of wanting to restrain such destruction are:

  1. Anyone who is at least of average intelligence is probably capable of realizing that it would be very difficult to restrict biological weapons in such a way that they would only destroy the enemy.

  2. Given the proliferation of communication technology since WW2, any modern attempt to annihilate enemy civilians would probably see a bunch of footage released quickly that would cause people on the attacking side to feel at least some degree of revulsion at what their government is doing, whereas during WW2, Allied civilians had very limited media exposure to what their military forces were doing. Extreme nationalism and/or a feeling of having been attacked first could override that revulsion, but I still think that, in the developed world at least, the threshold for being OK with annihilating enemy civilians is higher now than it was during WW2.

Yeah, this is totally scary, and I wish there were some way of stopping this sort of research. Not to sound insensitive, but I guess it's better that a leak like that start in a country far away from where I live? It took quite a while for COVID to make its way over to the West, and there was plenty of warning and speculation about it for months before it came. If I knew there was a virus coming that had a 100% kill rate, absolutely no one would get me to leave my house for months, if not years.

This stuff seems indefensible.

Even if you assume only a 1% chance that COVID was a lab leak, that works out to around 10 thousand expected deaths from that type of research.
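Spelled out as a back-of-the-envelope sketch (the million-death toll is an assumption supplied here to make the arithmetic explicit; official global counts run several times higher):

```python
p_lab_leak = 0.01                 # the deliberately conservative 1% from above
assumed_covid_deaths = 1_000_000  # assumption; official global tolls are higher

expected_deaths = p_lab_leak * assumed_covid_deaths
print(f"{expected_deaths:,.0f} expected deaths")  # -> 10,000
```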

It would probably take only one big country taking a hard-line stance on this to end it, since hardly anyone other than virology researchers actually benefits from the research. Even a middleweight economy like the UK could probably say "we are going to impose trade sanctions on anyone who conducts this research", and that might be enough.

It would take far less effort than something like the Kyoto Protocol.

I have a fear that it's now impossible to ban Gain of Function. If it's banned, then that tacitly admits that it was the cause of COVID. China doesn't want to lose face and nor does America. In China, the party line is that it came from somewhere else, possibly America, Wuhan wasn't the origin.

You can sort of see a similar tendency in how US media tends to portray it: it's primarily the fault of lax Chinese biosafety, possibly bioweapons research. And they can summon up a host of bioresearch scientists who don't want to be reviled for the rest of their lives. They can enthusiastically promulgate sophistry about how there really was some Laotian bat-pangolin-human farce in a wet market that coincidentally replicated the results of grant proposals sent by EcoHealth and researchers in Wuhan.

The blame falls on a perfect combination of Chinese and American scientists and policymakers, neither of the superpowers wants the truth to emerge.

I contend that the biggest problem right now is simply that concern about GOF research has become right-coded.

These are good reasons why the US and China might not ban it. But an uninvolved third country could still have leverage. The US and China wouldn't have to lose face, they'd just say "oh well, this crazy third country really thinks this stuff is dangerous, we don't, but trade is more important to us, so we'll cave to their demands."

Certain GMO crops have been effectively banned because Europe doesn't like them. And that seems like a much bigger political lift than banning GOF research.

There are lots of semi-existing levers that nation states or medical orgs could use. They could just dumbly pretend that any city with a bio lab is the equivalent of a place with an active prion disease. They could just maintain COVID era quarantine policies for anyone visiting countries/cities with a bio lab. Those cities would become no-go zones for tourists, food exports, and casual business travel. That would at least force these labs out of big cities and into rural areas with no agriculture.

I think it would be relatively easy for any European country to single-handedly ban GOF research worldwide. They just have to:

  1. Care about doing it in the first place. (no one seems to)
  2. Be unreasonable assholes about it. Don't let the scientists say "oh, how about you allow us to keep the labs if we follow all the right safety protocols, and we will be extra careful to check up on things". The answer is no: you already had your chance with safety protocols, and you gave us a worldwide pandemic.
  3. Get personal. Write laws banning anyone from working on this stuff, regardless of where they are in the world. Charge any scientist involved in the research anywhere in the world as a criminal in your own country. Offer to drop the charges and extradition requests only if they leave the field entirely.

These are all countries I think might be able to do it alone, but if any two or three of them teamed up it would definitely happen: UK, France, Germany, Italy, Spain, South Africa, Israel, Japan, South Korea, Australia, Mexico, Brazil, India, Singapore, Egypt or Panama. I keep thinking of more that might have just enough political capital to pull it off. They really don't need much, the interest group that cares about keeping these labs open is tiny and not very powerful. They've just lucked into a situation where the host country can't be the first one to ban them without losing a lot of face. But I doubt leaders in the US or China like having their reputation held hostage by a bunch of virologists, so they are only allies of convenience.

I also think it's impossible to ban gain-of-function research. The people capable of banning gain-of-function research are people like Peter Daszak (and Fauci), who may have helped fund COVID's development, and whoever is doing that kind of work while getting money from the government is going to tell Congress that what they are doing is definitely not gain-of-function research.

Which makes me feel like the only choices are a complete ban on virus research (which has its own issues) or GOF research happening anyway.

I always liked the Harrison Ford movie Clear and Present Danger. In the movie, Ford (the boy scout) goes before Congress and tells them none of the money is going towards troops battling narcos. Of course, a different government official had redirected the money to the troops.

This is just a massive "who watches the watchmen" problem.

Are polygraphs good enough that if you just polygraphed every bat lady once every three months you would catch them?

An effective way would be to classify ANY bio research on viruses/bacteria that involves breeding them as requiring a lab of a certain biosafety level, and then ban the construction and existence of such high-level labs.

The only problem with this strategy is that the people who would have the power to do this don't have the incentive.

That seems super restrictive. What are you defining as breeding? That would seem like any cell culture to me.

But yes, I agree the people with the power don't have the incentive. Anyone who can tell a congressman we shouldn't do x, y, z, and is capable of doing lab checks for x, y, z, is probably too deep in to be incentivized to do it.

Although the odds are low, this is a bigger threat than AI, but not as newsworthy. The things that are least possible, like UFOs, get the most media coverage.

Depends what sort of threat you're talking about.

Bigger GCR? Yes, definitely.

Bigger X-risk? No. Pandemics can't kill off humanity, because the pathogen dies off before population density reaches zero. Biorisk is definitely #2 on my list of X-risks this century, and in the same order of magnitude as #1 (AI), but that's Life 2.0 risk: synthetic biology that's not a human pathogen but whose replication destroys something humans need (e.g. a synthetic alga that doesn't need phosphate, has better carbon-fixing than RuBisCO, and can't be digested by the aquatic food chain, which would pull down the atmospheric and then biospheric carbon into useless gunk on the seafloor and thus cause total crop failure).

Even if pandemics can't kill off humanity alone, they can radically inhibit our ability to handle other existential risks, e.g. an asteroid: if there are a few thousand people left, they're not going to be able to develop a space program sufficient to handle such a problem. Often when a species goes extinct, there seem to be a number of factors accelerating each other: hunger in a changed environment, then disease, then increased predation, then problems of fertile males and fertile females hooking up, then inbreeding...

However, a virus with a sufficiently long asymptomatic period when it can spread could kill off humanity, if it could spread to 100% of the population in time. Think of something like airborne HIV. Is that likely? No. Is it scientifically possible? Yes.

I agree that synthetic biology is the more plausible threat.

a virus with a sufficiently long asymptomatic period when it can spread could kill off humanity, if it could spread to 100% of the population in time

Incidentally this is the strategy for winning the game Pandemic. Spread to 100% infection, then reveal the deadly symptoms.

Asteroids large enough to kill the species are ludicrously rare, even after accounting for a reduced humanity being easier to kill (another Chicxulub would not suffice to end modern humanity, not on its own - note that I am not talking about our ability to deflect it, here, but our ability to survive the impact winter - but a humanity that had been almost totally destroyed by super-Black-Death might succumb, so call it 1/100,000,000 years instead of 1/500,000,000 - still negligible over the relatively-short timespan it'd take to repopulate and rebuild).

Even airborne HIV would still have great difficulty getting to hermits and uncontacted tribes. The one thing which would clearly work if possible - but which may not even be possible and certainly wouldn't be easy to build - is an infection which made victims consciously want to spread it, essentially a human version of all those parasites that zombify insects (not just a toxoplasma or a rabies, which are far-blunter instruments). That would have the intelligent-adversary trait where losing isn't survivable because survivors will be actively hunted down (and also it would be in every country within days because people would deliberately sneeze all over airports). Frankly, I think that even if possible this would probably need a superintelligence to design it, which means it's most sensibly placed under "AI risk" rather than "biorisk".

Even airborne HIV would still have great difficulty getting to hermits

Hermits are not a promising way to avoid human extinction.

uncontacted tribes

These are definitely vulnerable to extinction events, including ones more likely than large asteroids.

I'm not saying that a pandemic is a huge x-risk event, but rather that it's easy to underrate its connection with x-risk if one just looks at first-order impacts.

I think that even if possible this would probably need a superintelligence to design it, which means it's most sensibly placed under "AI risk" rather than "biorisk".

Agreed. "What would happen if the Thing had reached civilization?" has been one of my favourite daydream questions recently, but the Thing makes most sense as a specially engineered bioweapon developed by a very advanced intelligence.

I thought this was about a different study which went around months ago, in which a modified COVID-19 strain caused a 100% fatality rate in humanized mice. So I was going to point out that according to the same study stock COVID-19 had an extremely high fatality rate as well, so it said more about the mice than the virus. But looking it up apparently it's a different recently-published study about a pangolin coronavirus:

https://www.biorxiv.org/content/10.1101/2024.01.03.574008v1.full

It sounds like a lot of things cause 100% death rates in humanized mice without necessarily meaning much regarding humans. Note that in this case 100% means they infected 4 mice and all 4 died.
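As a statistical footnote (my addition, not the parent's): with only four animals, "100%" comes with a huge error bar. A quick sketch of the exact Clopper-Pearson interval for 4 deaths out of 4:

```python
# With n deaths out of n animals, the exact 95% (Clopper-Pearson) interval has
# an upper bound of 100%, and its lower bound solves p**n = alpha/2.
alpha, n = 0.05, 4
lower = (alpha / 2) ** (1 / n)
print(f"95% CI for the true mortality rate: [{lower:.0%}, 100%]")  # ~[40%, 100%]
```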

China has nukes. Everything Chinese labs do is (to some extent) state sanctioned. What choice do you have? This was always the most idiotic thing about Yudkowskian airstrikes, given the growing technological adeptness of America’s foes. Why would one risk a certain chance of total nuclear war for a partial chance of a killer virus that - at worst - would kill me just the same? It’s poor logic.

The only reason to pick nuking datacenters or virology labs over not doing so would be if your commitment to mankind was so great that you would accept your certain death in exchange for some South Americans and Africans surviving global nuclear war and repopulating the earth in thousands of years.

Alas, I am not that selfless.

Your argument is premised on the assumption that continued AI development only holds a partial risk of human extinction. You're not disagreeing with the airstrike plan, you're disagreeing with the premise that it follows from.

You're also assuming that airstrikes would escalate to nuclear war, but that's a less glaring error.

The choices are: airstrikes -> you might die in nuclear war; no airstrikes -> you definitely die from AI, along with all humanity. But you say you don't care about that, so I don't know why I'm bringing it up.

What choice do you have?

Nuking China. This is the sort of thing that makes global thermonuclear war look like the best of bad alternatives.

Have we tried asking them to stop?

Like, do you think the CCP wants to keep doing gain-of-function research? My model is that they are afraid of losing face if they suddenly shut down virology research. That would be seen as an admission of guilt. But if we give them an excuse to shut it down, they could just point to some new trade deal or whatever as the reason WIV is closing its doors.

There would need to be an acknowledgment that this was A Big Deal, with gain of function outlawed in the same way the Nuclear Non-Proliferation Treaty outlaws proliferation.

I don't know enough about the minutiae to judge whether this is reasonable, but I certainly don't want another COVID because of bullshit self-interest.

the most idiotic thing about Yudkowskian airstrikes

He wanted it to be multilateral, where the big powers agree to force the little powers into line. That's the stupidest part of it, the idea of multilateral, genuine, sincere enforcement of a rule as opposed to 'AI for me, not for thee' like 'H-bombs for me, not for thee'.

Why is that stupid?

Because the great powers are incapable of cooperating in this unselfish way. Nuclear arms control is my example - the whole idea started just after the big powers acquired their nuclear arsenal and only applies to weaker latecomers.

Yudkowsky is like those who want global nuclear abolition, where neither big nor little powers have nuclear weapons, but for AGI research. No country is going to consciously and deliberately kneecap its capabilities and fall behind in the race, especially in times like these when a competitive edge is in high demand. And AI is even harder to ban than nuclear weapons. All the strongest lobby groups want more AI and AGI: big tech, big corporations, militaries, state security forces. There's no strong lobby group against AGI like there is against nuclear arms races; the risks are less obvious. AI is profitable and provides economic dividends, unlike piling up huge numbers of nuclear weapons. We can't even ban nuclear weapons, and AI is far harder to ban.

Furthermore, the only two countries with a chance at AGI are the US and China, they're opposing forces. Of course they want to get ahead of the other, that's Made in China 2025 and the US CHIPS act in a nutshell.

Yudkowsky wasn't saying "this is likely to happen." He was saying "this is the sort of arrangement where humanity could avoid being made extinct by a hostile superintelligence." Which seems right to me if you agree with the premise that superintelligence is dangerous and possible and hard to align.

The difference between nuclear weapons and ASI is that ASI kills everyone and nuclear weapons don't. If people realized that then it would not be hard to ban it. Imagine if one nuclear weapon destroyed the whole solar system. Do you see how a treaty banning them would not be difficult? Even if China wasn't convinced, it still would not be that difficult to convince or prevent them from building one. Far easier than WWII, as Yudkowsky has said.

Because the great powers are incapable of cooperating in this unselfish way. Nuclear arms control is my example - the whole idea started just after the big powers acquired their nuclear arsenal and only applies to weaker latecomers.

A notable example of how this works is the war in Ukraine. Ukraine was coaxed into not being a nuclear power post-USSR. At the time, this definitely seemed like a good idea. Now they are in a situation where they and the West are always afraid of escalation, because of Russia's nuclear weapons.

(The US has a track record of ending up on the cautious side of escalation: in the Vietnam War, it held back from e.g. a naval blockade of Vietnam to avoid China and the USSR becoming more involved. In the Korean War, it didn't use its nuclear arsenal, long before MAD, to avoid escalation. I'm not saying that either decision was wrong.)

Nuclear arms control is my example - the whole idea started just after the big powers acquired their nuclear arsenal and only applies to weaker latecomers.

Nonproliferation maybe, but big powers have done lots of arms reduction and test ban treaties.

Well yes, he said he wanted it to be multilateral but did (as I recall, I might be misremembering Twitter posts) suggest hostile action toward ‘rogue’ international actors (including well-armed or major ones) would be justified. And yes, it doesn’t seem likely you’d get a global agreement with zero defection, and if you did you’d have solved the major impediment to world peace.

You may well be right; I don't have a good memory of what he said either (or how he later tried to clarify/sanewash it). It feels bizarre to spend so much time arguing about which parts of an unfeasible idea are most unrealistic, given that it's clearly dead in the water now. US chip sanctions have cemented a race dynamic. That Washington doesn't want Beijing to have these things will surely make them even more enticing and poison trust.

People joke about Yudkowskian airstrikes on data centers; would airstrikes on labs be similarly warranted?

I'm going to state up front that this is just theorizing, as if I were writing a speculative fiction story. In real life, murder is bad.

But it doesn't make sense to do airstrikes against labs; there's a great risk of releasing a pathogen.

The weak point is the researchers. There are relatively few of them and they don't live high security lifestyles. They travel to international conferences.

A terrorist group with widespread support dedicated to killing viral researchers would probably be very effective. People would avoid the profession, avoid publishing under their own names, and the research labs would need to move away from cities to more secure locations.

In practice I think a group dedicated to harassment and social shunning would probably be very effective without the need for violence.

In practice I think a group dedicated to harassment and social shunning would probably be very effective without the need for violence.

How is that working out for similar groups in the context of anti-abortion in the US, which I imagine attracts a lot more passive and active support than opposition to something as niche and insular as GoF research could? The theory that COVID was due to slick and eloquent scientists (as opposed to something like unmasked overweight deplorables heavily panting in others' faces at the strip mall) is already thoroughly coded "icky right".

The weak point is the researchers. There are relatively few of them and they don't live high security lifestyles. They travel to international conferences. A terrorist group with widespread support dedicated to killing viral researchers would probably be very effective.

...Given that these researchers routinely mingle at international conferences, perhaps the best method would be an attack employing some sort of infectious pathogen. Of course, most existing pathogens have relatively low mortality rates, so it might be necessary to engineer a more virulent and lethal agent to ensure the job is done properly. Perhaps a promising virus could be selectively cultured with an eye to gain of- er, that is, increased efficacy....

harassment and social shunning

Harassment and social shunning are violence. If they are appropriate, then so are harder forms of violence (if the threat is believed to warrant it enough).

would airstrikes on labs be similarly warranted?

Unless maybe you used something extremely destructive like nukes, wouldn't airstrikes on labs just be likely to spread whatever viruses are in there? So you'd have to be able to time the attack so that whatever viruses it has in there now are acceptable to spread in order to prevent the spread of some even more powerful virus in the future. Which seems like it would be very difficult to figure out.

Quite possibly. That was mostly just there for humor.

I told you guys that letting the scientists get away with millions of deaths, with not even so much as the hint of a punishment, would come back to bite us in the ass.

I mean, come on, at least give 10 years' hard labor to a single Dyatlov, pour encourager les autres. It's not like there's any shortage of guilt to go around: the whole program the US supposedly ended was likely completely illegal, and we even have people like Fauci actively conspiring to hide the mere consideration that COVID escaped from a lab.

When you refuse to render justice and punish the guilty, you are justly rewarded with larger and larger messes.

Unlike Bostrom, I don't believe in an infinitely large mess that consumes all mankind. I think nature is very rarely that self-reinforcing, and that extinction is more likely to come from slow decay or the completely unpredictable. If fucking with nature could totally wipe us out, we'd probably already be dead a thousand times over, given how fast and loose we've been playing with the forces of nature these past few centuries.

That said, since I'm not really eager for a new Black Plague, can't we just, this one time, agree not to pursue something that's totally worthless as a weapon, actively harmful to humanity, and doesn't even answer any questions about the universe we really want answered?

Why do you believe that humanity could never invent something capable of causing our extinction? Even if you think strangelets are safe, is there some rule that says they have to be? What about nanobots? What happens when you create an AI advanced enough to make itself more advanced?

"Nature doesn't work that way." Why? What does that mean? Why are you using the word "nature" and not "technology and its future"?

Futurists tend to have this problem where they're so fixated on hypothetical possibility that they tend to forget anything we do, including novel technology, is still part of nature and still restricted by the same limits as everything else. It's rare that we can come up with something that is both legitimately more efficient than what evolution has come up with and sustainable over time.

In this instance, we know how a very deadly virus behaves: it kills a bunch of people until eventually its death rate catches up with its propagation rate and it smothers itself, or it manages to mutate into a less deadly strain that can become endemic and live in a population by not killing it too fast.

To guarantee extinction, we then have to reach for effects so radically deadly and permanent that they can actually affect all or virtually all of the human population, with enough staying power that you can't escape by living in some remote place for a while. Even total nuclear war doesn't pass that test. Humanity can still thrive with high rates of cancer, and wildlife has long since forgotten about Chernobyl even if we have not.

I don't believe in an extinction McGuffin, because creating something that can affect us in totality is extremely difficult; and it is difficult because, as I've stated, the processes of nature, down to even physics, are self-limiting most of the time.

I'd still feel more secure if we had two planets, nay, two solar systems, because there are a decent number of events that pass that threshold. But I'm a lot less concerned with extinction than I am with widespread catastrophe.

"Futurists tend to have this problem where they're so fixated on hypothetic possibility they tend to forget anything we do, including novel technology, is still part of nature and still restricted by the same limits as everything else. It's rare that we can come up with something that is both legitimately more efficient than what evolution has come up with and sustainable in time."

Cars, planes, buildings, roads, computers, printing press, guns, bombs, boats, submarines, ice cream, space ships, etc. A grizzly bear's claws are sharp, but a sword is made of metal. Why do you think improving on nature/evolution is rare?

Why do you think smarter-than-human AI is impossible? For one thing, human brain size is limited by the size of a woman's hips, so we're not even as evolutionarily selected as we could be.

It's rare that we can come up with something that is both legitimately more efficient than what evolution has come up with and sustainable over time.

If your standard is that life, by some measure, goes on, then OK. But it's not as if mass extinction has never happened or, given enough time, will never happen again. The Permian-Triassic extinction event knocked out something like 81% of all marine species, according to the wiki. Some things lived on or recovered over the course of millions of years, but plenty of creatures got perma-wiped. I don't see why humans could never ever be like the trilobites.

Oh, I'm not saying humanity can't go extinct, not at all. But it's not as if those trilobites engineered their own destruction. Some impossible-to-prepare-for, incomprehensible new circumstance just came about and wiped them out. That seems much more likely to me than the Frankenstein scenario we're all so obsessed with.

Chernobyl did not kill many people.

Fair point about how the death rate will cause it to smother itself. Of course, the world is probably more connected than it has ever been, meaning that it would be harder to reach that point. But that's not enough to wipe people out.

I think Moldbug nailed it with his analysis. Researchers want to work on the most important problems in their fields. In viral research it's deadly airborne diseases. There's a shortage of deadly pandemic viruses to study, so they create them. That way they have something to write papers on.

Do you recall which essay of Moldbug this was in?

It was "Covid is science's Chernobyl" from the other reply, but I think he stated it in a more straightforward way in a 2020 interview with Michael Malice or the Unregistered podcast. Unfortunately those are very long and I can't find the right one.

Probably one of these if you have enough time: https://youtube.com/results?search_query=yarvin+after%3A2020-01-01+before%3A2021-12-01

Here's a similar comment, I'm not sure if it's the exact bit I remember https://youtube.com/watch?v=BUhYbbBfG2c&t=2804

Large-scale medicine is a good example of anarcho-tyranny. Malaria vaccine to save untold thousands from a painful death? Not without years and years of exhaustive development and trials and the WHO sitting with its thumb up its arse for no reason! Fuck around with lethal untested experimental viruses? Why, of course, no problem there.

How do you propose attaining a state that cannot be described as anarcho-tyranny in this fashion? If you write your ban on this instance of "fucking around", the next guy writing a post like yours will just get to say something like "basic research to understand how best to fight emerging pandemics? Not without billions spent on paranoid safety procedures! Fuck around with [new thing that has no motivated political constituency fighting against it yet]? Why, of course (...)". The things that are seemingly unreasonably banned and encumbered today are just noncentral examples of yesterday's irresponsible-scientists-have-gone-too-far scenarios.

I feel like your use of "anarcho-tyranny" implies elite control and nefariousness, but if they had fast-tracked it by ignoring safety concerns, the antivaxx masses would have flipped their shit. Even regular people are more comfortable doing nothing than risking harm, Copenhagen-ethics style. They're not pushing the fat man onto the tracks, no matter how many kids die. It's really only "psychopathic" utilitarians who care.

It's plausible to me that lots of examples of anarcho-tyranny are driven by public opinion. African tribe A wants the country's government apparatus to be asymmetrically used in their favour against African tribe B. Bleeding heart liberals in the West want a legal system that "doesn't punch down."

African tribe A wants the country's government apparatus to be asymmetrically used in their favour against African tribe B.

In the same way, and for exactly the same reasons, American gender/skin color A wants the country's government apparatus to be asymmetrically used in their favor against American gender/skin color B. Justifying a system that oppresses B is always and by definition "punching up, not punching down" (according to A).