Culture War Roundup for the week of May 15, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

The anti-doomer's flowchart, courtesy of Ross Scott.

You may remember that, a while back, Ross Scott (of Civil Protection, Freeman's Mind, and Ross's Game Dungeon fame) hosted a discussion with Big Yud himself over AI risk. I couldn't finish the video, but I gathered that Ross was not impressed by Yud's arguments: he approached the topic from the premise of AI gaining consciousness, and thus wasn't really grasping what Yud saw as the problem. For the many of you who are averse to long videos, the above image lays out Ross's positions on AI risk, with his reasons why.

Ross comes close to understanding what the real risks are in his top-right "unforeseen consequences" node, but then he somehow links that with free will and consciousness, which is just a moronic misunderstanding of the AI-risk position. Unfortunately he doesn't seem to have found a convincing argument against AI doom.

People here like to harp on the bad arguments of doomers. The arguments of anti-doomers are so much worse. I think it would be interesting to make a taxonomy of the bad arguments.

Here are some common bad arguments I can think of off the top of my head.

AI is not a threat because...

  • It won't have consciousness

  • LLMs don't currently have some abilities

  • LLMs don't have "real" understanding when they write a coherent 10 paragraph graduate-level essay

  • You can't conclusively prove AI will kill us in the near future

  • Other threats have come (nuclear weapons) and we've always come through them

  • We will be able to "pull the plug" if we see things going badly

  • There have been no Earth-shattering releases since GPT-4 two months ago

  • AI will not be able to manipulate the physical world

  • I asked Chat-GPT something and it gave me a bad answer

  • Doomers like EY have unlikeable personalities

  • The human brain has hitherto-undiscovered quantum computing abilities

  • AI doom scenarios feel "far-fetched". I can't personally imagine what will happen.

And, of course, the one pointed out by Scott.

  • The situation is completely uncertain. We can’t predict anything about it. We have literally no idea how it could go. Therefore, it’ll be fine.

Which ones did I miss?

Other threats have come (nuclear weapons) and we've always come through them

I would actually really like to see a rebuttal of this one, because the doomer logic (which looks correct to me) implies that we should all have died decades ago in nuclear fire. Or, failing that, that we should all be dead of an engineered plague.

And yet here we are.

The threat model is different. Nuclear weapons are basically only useful for destroying things; you don't build one because a nuke makes things better for you in a vacuum, but because it prevents other people from doing bad things to you, or lets you go do things to other people. Genetic engineering capabilities don't automatically create engineered plagues; some person has to enact those capabilities in that fashion. I'm not familiar with the state of the art in GE, but I was under the impression that the knowledge required for that kind of catastrophe wasn't quite there. Further, I think there are enough tradeoffs involved that accidents are unlikely to make outright x-risk plagues, the same way getting a rocket design wrong probably makes 'a rocket that blows up on takeoff' instead of 'a fully-functional bullet train'.

AI doom has neither of those problems. You want AI because (in theory) AIs solve problems for you, or make stuff, or let you not deal with that annoying task you hate. And, according to the doomer position, once you have a powerful enough AI, that AI's goals win automatically, with no desire for that state required on any human's part, and the default outcome of those goals does not include humans being meaningfully alive.

If nuclear bombs were also capable, by default, of being used as slow transmutation devices that gradually turned ordinary dirt into pure gold or lithium or iron or whatever else you needed, and if every nuke had a very small chance per time period of converting into a device that rapidly detonated every other nuke in the world, I would be much less sanguine about our ability to have avoided the atomic bonfire.

So I have two points of confusion here. The first point of confusion is that if I take game theory seriously, I conclude that we should have seen a one-sided nuclear war in the early 1950s that resulted in a monopolar world, or, failing that, a massive nuclear exchange later that left either 1 or 0 nuclear-capable sides at the end. The second point of confusion is that it looks to me like it should be pretty easy to perform enormously damaging actions with minimal effort, particularly through the use of biological weapons. These two points of confusion map pretty closely to the doomer talking points of instrumental convergence and the vulnerable world hypothesis.

For instrumental convergence, I will shamelessly steal a paragraph from wikipedia:

Agents can acquire resources by trade or by conquest. A rational agent will, by definition, choose whatever option will maximize its implicit utility function; therefore a rational agent will trade for a subset of another agent's resources only if outright seizing the resources is too risky or costly (compared with the gains from taking all the resources), or if some other element in its utility function bars it from the seizure. In the case of a powerful, self-interested, rational superintelligence interacting with a lesser intelligence, peaceful trade (rather than unilateral seizure) seems unnecessary and suboptimal, and therefore unlikely.

This sounds reasonable, right? Well, except now we apply it to nuclear weapons, and conclude that whichever nation first obtained nuclear weapons, if it wanted to obtain the best possible outcomes for itself and its people, would have to use its nuclear capabilities to establish and maintain dominance, and prevent anyone else from gaining nuclear capabilities. This is not a new take. John von Neumann was famously an advocate of a "preventive war" in which the US launched a massive preemptive strike against Russia in order to establish permanent control of the world and prevent a world which contained multiple nuclear powers. To quote:

With the Russians it is not a question of whether but of when. If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o'clock, I say why not one o'clock?

And yet, 70 years later, there has been no preemptive nuclear strike. The world contains at least 9 countries that have built nuclear weapons, and a handful more that either have them or could have them in short order. And I think that this world, with its collection of not-particularly-aligned-with-each-other nuclear powers, is freer, more prosperous, and even more peaceful than the one that von Neumann envisioned.

In terms of the vulnerable world hypothesis, my point of confusion is that biological weapons actually look pretty easy to make without having to do anything fancy, as far as I can tell. And in fact there was a whole thing back in 2014 with some researchers passaging a particularly deadly strain of bird flu through ferrets. The world heard about this not because there was a tribunal about bioweapon development, but because the scientists published a paper describing their methodology in great detail.

The consensus I've seen on LW and the EA forum is that an AI that is not perfectly aligned will inevitably kill us in order to prevent us from disrupting its plans, and that even if that's not the case, we will kill ourselves in short order if we don't build an aligned god which will take enough control to prevent that. The arguments for both propositions do seem to me to be sound -- if I go through each point of the argument, they all seem broadly correct. And yet. I observe that, by that set of arguments, we should already be dead several times over in nuclear and biological catastrophes, and I observe that I am in fact here.

Which leads me to conclude that either we are astonishingly lucky in a way that cannot be accounted for by the anthropic principle (see my other comment), or that the LW doomer worldview has some hole in it that I have so far failed to identify.

It's not a very satisfying anti-doom argument. But it is one that I haven't seen a good rebuttal to.

That instrumental convergence paragraph comes with a number of qualifiers and exceptions which substantially limit its application to the nuclear singleton case. To wit:

Agents can acquire resources by trade or by conquest. A rational agent will, by definition, choose whatever option will maximize its implicit utility function; therefore a rational agent will trade for a subset of another agent's resources only if outright seizing the resources is too risky or costly (compared with the gains from taking all the resources), or if some other element in its utility function bars it from the seizure. In the case of a powerful, self-interested, rational superintelligence interacting with a lesser intelligence, peaceful trade (rather than unilateral seizure) seems unnecessary and suboptimal, and therefore unlikely.

I could try to draw finer distinctions between the situations of post-WW2 USA and a hypothetical superintelligent AI, but really the more important point is that the people making the decisions regarding the nukes were human, and humans trip over the "some element in its utility function bars the action" and "self-interested" segments of that text. (And, under most conceptions, the 'rational agent' part, though you could rescue that with certain views of how to model a human's utility function.)

Humans have all sorts of desires and judgements that would interfere with the selection of an otherwise game-theoretically optimal action, things like "friendship" and "moral qualms" and "anxiety". And that's not even getting into how "having a fundamental psychology shaped by natural selection in an environment where not having any other humans around ever meant probable death and certain inability to reproduce their genes" changes your utility function in a way that alters what the game-theoretic optimal actions are.

One of the major contributors to the lack of nuclear warfare we see is that generally speaking humans consider killing another human to be a moral negative, barring unusual circumstances, and this shapes the behavior of organizations composed of humans. This barrier does not exist in the case of an AI that considers causing a human's death to be as relevant as disturbing the specific arrangement of gravel in driveways.

I haven't spent enough time absorbing the vulnerable world hypothesis to have much confidence in being able to represent its proponents' arguments. If I were to respond to the bioweapon point myself, it would be: what's the use case? Who wants a highly pathogenic, virulent disease, and what would they do with it? The difficulty of specifically targeting it, the likelihood of getting caught in the backwash, and the near-certainty of turning into an international pariah if/when you get caught or take credit make it a bad fit for the goals of institutional sponsors. There are lone-wolf lunatics who end up with the goal of 'hurt as many people around me as possible with no regard for my own life or well-being' for whom a bioweapon might be a useful tool, but most paths for human psychology to get there seem to also come with a desire to go out with a blaze of glory that making a disease wouldn't satisfy. Even past that, they'd have the hurdles of figuring out and applying a bunch of stuff almost completely on their own (that paper you linked has 9 authors!) with substandard equipment, for a very delayed and uncertain payoff, when they could get it faster and more certainly by buying a couple of guns or building a bomb or just driving a truck into a crowd.

I could try to draw finer distinctions between the situations of post-WW2 USA and a hypothetical superintelligent AI, but really the more important point is that the people making the decisions regarding the nukes were human, and humans trip over the "some element in its utility function bars the action" and "self-interested" segments of that text. (And, under most conceptions, the 'rational agent' part, though you could rescue that with certain views of how to model a human's utility function.)

My point was more that humans have achieved an outcome better than the one that naive game theory says is the best outcome possible. If you observe a situation, then come up with some math to model the situation, then use that math to determine the provably optimal strategy, and then you look at what actually happened and see that the actors obtained an outcome better than the one your model says is optimal, you should conclude that either the actors got very lucky or that your mathematical model does not properly model this situation.

And that's not even getting into how "having a fundamental psychology shaped by natural selection in an environment where not having any other humans around ever meant probable death and certain inability to reproduce their genes" changes your utility function in a way that alters what the game-theoretic optimal actions are.

I think you're correct that the "it would be bad if all other actors like me were dead" instinct is one of the central instincts which makes humans less inclined to use murder as a means to achieve their goals. I think another central instinct is "those who betray people who help them make bad allies, so I should certainly not pursue strategies that look like betrayal". But I don't think those instincts come from peculiarities of evolution as applied to savannah-dwelling apes. I think they are the result of evolution selecting for strategies that are generally effective in contexts where an actor has goals which can be better achieved with the help of other actors than by acting alone with no help.

And I think this captures the heart of my disagreement with Eliezer and friends -- they expect that the first AI to cross a certain threshold of intelligence will rapidly bootstrap itself to godlike intelligence without needing any external help to do so, and then with its godlike intelligence can avoid dealing with the supply chain problem that human civilization is built to solve. Since it can do that, it would have no reason to keep humans alive, and in fact keeping humans alive would represent a risk to it. As such, as soon as it established an ability to do stuff in the physical world, it would use that ability to kill any other actor that is capable of harming it (note that this is the parallel to von Neumann's "a nuclear power must prevent any other nuclear powers from arising, no matter the cost" take I referenced earlier).

And if the world does in fact look like one where the vast majority of the effort humanity puts into maintaining its supply chains is unnecessary, and actually a smart enough agent can just directly go from rocks to computer chips with self-replicating nanotech, and ALSO the world looks like one where there is some simple discoverable insight or set of insights which allows for training an AI with 3 or more orders of magnitude less compute, I think that threat model makes sense. But "self-replicating useful nanotech is easy" and "there is a massive algorithmic overhang and the curves are shaped such that the first agent to pass some of the overhang will pass all of it" are load bearing assumptions in that threat model. If either of them does not hold, we do not end up in a world where a single entity can unilaterally seize control of the future while maintaining the ability to do all the things it wants to.

TL;DR version: I observe that "attempt to unilaterally seize control of the world" has not been a winning strategy in the past, despite there being a point in the past when very smart people said it was the only possible winning path. I think that, despite the very smart people who are now asserting that it's the only possible winning path, it is still not the only possible winning path. There are worlds where it is a winning path because all paths are winning paths for that entity -- for example, worlds where a single entity is capable enough that there are no benefits for it of cooperating with others. I don't think we live in one of those worlds. In worlds where there isn't a single entity that overpowers everyone else, the game theory arguments still make sense, but also empirically doing the "not game-theoretically optimal" thing has given humanity better outcomes than doing the "game-theoretically optimal" thing, and I expect that a superintelligence would be able to do something that gave it outcomes that were at least that good.

BTW this comes down to the age-old FOOM debate. Millions of words have been written on this topic already (note that every word in that phrase is a different link to thousands-to-millions of words of debate on the topic). People who go into reading those agreeing with Yudkowsky tend to come away thinking that Yudkowsky is obviously correct and his interlocutors are missing the point. People who go into reading those disagreeing with Yudkowsky tend to come away thinking that Yudkowsky is asserting that an unfalsifiable theory is true, and evading any questions that involve making concrete observations about what one would actually expect to see in the world. I expect that pattern would probably repeat here, so it's pretty unlikely that we'll come to a resolution that satisfies both of us. Though I'm game to keep going for as long as you want to.

The Anthropic principle for one:

https://en.wikipedia.org/wiki/Anthropic_principle

The probability of "extinction events in the past" given "we are here to observe it" is 0%. We can't infer, therefore, anything about the chance of these events happening based on prior history.

But, you might (wisely) point out that nuclear weapons are not actually extinction events. And, so far, humanity has seen very limited use of nukes. This gives us weak evidence that uniquely dangerous weapons can be contained. Here's why it's not a great argument.

  1. It's an N of 1.

  2. Nukes and AI are different. The technology to create nuclear weapons can be controlled by anti-proliferation efforts. AI could be much harder to contain (short of bombing GPU clusters). Nukes also have a bounded downside. It's a very large downside but it's bounded. The technology is well understood. One nuke isn't going to destroy the world. Neither will a full nuclear exchange. However, a runaway AI could destroy the world. We have 1 megaton bombs. There is no reason to believe that 1 teraton bombs will happen anytime in the near future. However, with AI it's possible to imagine a near-term situation where the capabilities of AI increase by orders of magnitude quickly. What is N today could be 1 billion N in 10 years.

I think the anthropic principle is fine for pointing out why we don't see things with bimodal outcomes of "everything is fine" / "everyone is dead".

But nuclear and biological weapons don't look like that. If 5% of worlds have no nuclear war, 40% have one that killed half the population, and the other 55% have one that wipes out everyone, 80% of observers should be in the "half of the population died in a nuclear war" worlds.
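
As a quick sanity check on that 80% figure, here's a minimal sketch of the observer-weighting arithmetic (the 5%/40%/55% split and the survival fractions are just the illustrative numbers from the paragraph above, not estimates):

```python
# Anthropic/observer-weighted share of each class of world.
# Each entry: (fraction of worlds, fraction of the population surviving).
worlds = {
    "no nuclear war":      (0.05, 1.0),
    "war killed half":     (0.40, 0.5),
    "war killed everyone": (0.55, 0.0),
}

# Total "observer mass": worlds weighted by how many people are left to observe them.
total = sum(p * alive for p, alive in worlds.values())

for name, (p, alive) in worlds.items():
    print(f"{name}: {p * alive / total:.0%} of observers")
# -> no nuclear war: 20%, war killed half: 80%, war killed everyone: 0%
```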

Which means one of the following:

  1. Nuclear war will generally kill everyone in pretty short order (and thus by the anthropic principle most observers are in worlds where nuclear war has never started)

  2. We're quite lucky even taking the anthropic principle into account: most observers are in more disastrous worlds than us

  3. Nuclear war isn't actually very likely: most observers are in worlds where it never gets started

  4. Something else weird is going on (e.g. simulation hypothesis is correct).

Hypothesis 1 seems unlikely to me since the models I've seen of even a full counter-value exchange don't seem to kill more than half the people in the world. Hypothesis 3 seems like the sort of world that does not contain the Cuban missile crisis, Petrov, or the Norwegian rocket incident.

Which leaves us with the conclusion that either hypothesis 2 is correct and we're just lucky in a way that is not accounted for by the anthropic principle, or our world model has a giant gaping hole in it.

I think it's probably the "giant gaping hole" one. And so any doomer explanation that also would have predicted nuclear (or biological) doom has this hole. And it's that point I would like to see the doomers engage with.

Am I missing something or is Ross under the mistaken impression that an AI will need to attain consciousness in order to be superintelligent? So far as I'm aware nothing is preventing an AI from becoming a superintelligent p-zombie that nonetheless decides to kill us all.

On what basis is Ross Scott qualified to discuss AI risk? I know Yud isn't the most formally qualified person in the world but he's at least spent a very long time thinking about the topic and philosophy generally. What intellectual basis do random youtubers have to talk about instrumental convergence and so on?

Point blank, if he thinks AI is less of a threat than homelessness (as stated at the bottom of the flowchart) then he's wildly missing the target area. Look what we did to great apes - they're on the endangered list. We're basically a great ape with a bigger brain. Intelligence is enormously useful! We used it to take over the entire world in a vanishingly short timespan by biological standards. Intelligence includes communication ability, charisma and so on - basically all mental stats.

Look what we did to the rest of the Homo genus! Not a single one survives. There's some debate as to whether we murdered or bred them into extinction, or a mix of both. Either way, it didn't end well for any of our competitors.

We should not bring in any new competitors to a battle royale that we just won, especially if they possess enormous potential. Computer minds can be enormously large in physical terms and enormously power-hungry, and can enjoy the resources of a giant global supply chain, as opposed to whatever proteins can be scrounged up on a tiny budget. Computers can run at gigahertz, they can train on huge amounts of information, self-modify... Even if we don't understand the complexity of the brain, we could stumble upon something far superior that only works if you have gigawatts to throw at it. Far better to upgrade human intelligence slowly and steadily than to invite competition.

'Maybe it won't become sapient' or 'maybe it won't become superintelligent' are not risks we should be taking. Of course it's going to become superintelligent, there's no universal cap on intelligence that fits just above the smartest people in history. Why would 20 watts be the point at which there are literally no further returns to scale?

I don't take this series of posts as really being about trying to wring a great novel take on AI out of Scott so much as testing what impact a long conversation with Yud has on reasonably smart and curious laymen. This has important implications for doomers' messaging.

Precisely; I should have mentioned in my post above that Ross Scott is much more of an internet layman and a techno-optimist and is thus probably the exact kind of person Yud needs to convince, and didn't.

This question is destructive. There was a recent retrospective on monkeypox. It looked at predictions by experts (granted, on Twitter). The experts were about 5x more likely to be wrong than right; when adjusted for reach, they were about 1000x more likely to be wrong than right.

Experts based on credentials aren’t experts at all.

What intellectual basis do random youtubers have to talk about instrumental convergence and so on?

About as much as any of us do on this forum. Ross is by no means an expert, but he is decidedly (for lack of a better term) a 'weird' guy who discusses all kinds of weird stuff through his videos - they aren't exactly standard game reviews/retrospectives - and he does it reasonably compellingly too. His unique, even if non-expert, outlook alone I think justifies some interest in what he thinks (I say this as an admitted fan of his videos).

I think I made an error in word choice. 'Discuss' is fine. People discuss things all the time. I also thought his videos were fun.

But has he read any of the papers, books, content written on the topic? I'm pretty sure he hasn't, going off the state of his flowchart. If you commit to a debate where thousands of people will watch you, you should at least have some in-depth understanding of what the other side has been saying. Robin Hanson for instance is credible, he's put a lot of thought into it.

My first take on the flowchart is that "consciousness" is horribly abused as a concept here. Not to abuse authority here, but: I've spent most of my academic career writing about consciousness, with the last 5 years focused on AI consciousness, I'm on the boards of multiple journals in the field, and have numerous publications in top cogsci and philosophy and even AI journals on the topic. I would say that almost without exception, anyone who knows anything about AI risk and consciousness realises there's very little connecting the two.

The interesting part of consciousness is the hard problem (aka qualia, subjectivity, zombies), and that is explicitly divorced from the kind of cognitive capabilities that could be scary; it's the mystery layer on reality, and fwiw I do think it's genuinely mysterious. I have no idea whether future superintelligent AIs are likely to be conscious -- or rather, my thoughts on the subject are complex, meandering, and dense. By contrast, it's pretty straightforward to see how an agential AI that outmatches us in capacities like strategic planning, social cognition, and behaviour anticipation is scary as fuck.

I don't care if it's conscious. I care whether it's able to outthink me.

I know you said your thoughts about AI and consciousness are complex, but what are they, roughly?

Remind me about this if I haven’t replied or posted next week! Not ignoring your nice question at all but will need to find 30 mins to do it justice.

I don't care if it's conscious. I care whether it's able to outthink me.

I don't think so. AI is good at answering questions whose answers can be readily constructed from easier answers, not so good at formulating out-of-the-box solutions. "How do I make my storefront rank higher on Amazon?" is a question AI cannot solve, and those who can figure this out are paid a lot.

Here's GPT-4's answer, which isn't bad all things considered - not especially out-of-the-box, but it seems fairly competent to me. Though of course the implementation details are where the real problem lies.

/images/16846790675818799.webp

not bad as a starting point if you know little to begin with

What makes you say that AI can't solve this type of question, categorically? The people that have the skills to optimize SEO have had a lot of exposure to SEO related topics, state of the art techniques and so on along with a good ability to combine information to solve relatively novel problems. What part of that requires a human brain, let alone consciousness?

But SEO is always changing; that is the problem. What works today may not work a month from now. If ChatGPT is using old information for its answers, they may not work now. Let's assume Google makes some tweak to its algorithm in which only websites with a certain attribute rank higher - something not at all obvious, like a needle in a haystack. I don't see how any AI solution could find this. If AI can replace experts, why does big tech have so many highly paid employees? Surely they could cut costs greatly if even just a small % of that work were outsourced to AI. Experts are paid a lot because they are good at adapting quickly to new information and to changing, unpredictable environments.

"I would say that almost without exception, anyone who knows anything about AI risk and consciousness realises there's very little connecting the two." One possible connection is that if AIs exterminate us and don't have consciousness the future has zero value, but if they do have consciousness, well something was going to eventually replace us and perhaps we can be proud of our AI children.

"risks happening right now: white collar jobs being replaced"

checks BLS stats: unemployment rate lowest ever, and labor force participation rising or stable

https://libertystreeteconomics.newyorkfed.org/wp-content/uploads/sites/2/2023/03/LSE_2023_participation-gap_amiti_ch3.png

Jobs lost due to AI will probably be replaced by new ones such as 'AI risk consultant' or other bullshit jobs.

"AI used to develop viruses"

Huh? That doesn't even make sense. How can the "risks bigger than AI", which include nuclear war, be bigger than the possible risk of AI starting a nuclear war?

AI risk seems like another religion or pseudoscience. I remember similar hype from the early 2000s about nanotechnology destroying the world. It was called the grey goo scenario.

Huh? That doesn't even make sense. How can the "risks bigger than AI", which include nuclear war, be bigger than the possible risk of AI starting a nuclear war?

If we say that nuclear war has a disutility of 0.9 and a human-started nuclear war has a probability of 0.2, this puts human-started nuclear war at a risk factor of 0.2 × 0.9 = 0.18.

If we say that unfriendly AI has a 0.3 chance of occurring and then a 0.6 chance of successfully starting a nuclear war, the same 0.9 disutility gives it a risk factor of 0.3 × 0.6 × 0.9 = 0.162: lower than human-started nuclear war.
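
Spelled out, the toy expected-disutility arithmetic looks like this (a sketch using the same made-up numbers as above; none of these probabilities are meant as real estimates):

```python
# Toy expected-disutility comparison, using the illustrative numbers above.
disutility = 0.9          # badness of a nuclear war, on a 0-1 scale

# Human-started nuclear war: P(humans start it) * disutility
risk_human = 0.2 * disutility                 # 0.2 * 0.9 = 0.18

# AI-started nuclear war: P(unfriendly AI) * P(it starts a war) * disutility
risk_ai = 0.3 * 0.6 * disutility              # 0.3 * 0.6 * 0.9 = 0.162

print(f"human-started: {risk_human:.3f}, AI-started: {risk_ai:.3f}")
# -> human-started: 0.180, AI-started: 0.162 (slightly lower)
```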

AI risk seems like another religion or pseudoscience. I remember similar hype from the early 2000s about nanotechnology destroying the world

To the best of my understanding, Yudkowsky's current risk model involves AI using nanotechnology to destroy the world.

Or just like, taking control of the US military, since obviously the pentagon will have a bunch of things run by AIs if it makes them more effective.

Nah, his current risk model is more like "AI discovers fundamental new principles of science, and exploits phenomena we don't know about to kill everyone"; that's what the "send an air-conditioner blueprint to the past" example he keeps talking about is meant to illustrate. The nanotech/biotech distinction doesn't seem especially sharp or important to me; they're just different ways of getting at fine-grained control of very small things.

And in the typical FOOM scenario (which is admittedly probably unlikely), you might get an AI that can do something like 100 years of the intellectual work of an entire civilization of geniuses every single second, at which point it seems like it could solve nanotech trivially.