Culture War Roundup for the week of April 22, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I've been asked to repost this in the Culture War thread, so here we go.

I read this story today and it did amuse me, for reasons to be explained.

Fear not, AI doomerists, Northrop Grumman is here to save you from the paperclip maximiser!

The US government has asked leading artificial intelligence companies for advice on how to use the technology they are creating to defend airlines, utilities and other critical infrastructure, particularly from AI-powered attacks.

The Department of Homeland Security said Friday that the panel it’s creating will include CEOs from some of the world’s largest companies and industries.

The list includes Google chief executive Sundar Pichai, Microsoft chief executive Satya Nadella and OpenAI chief executive Sam Altman, but also the head of defense contractors such as Northrop Grumman and air carrier Delta Air Lines.

I am curious if this is the sort of response the AI safety lobby wanted from the government. But it also makes me think, in hindsight, how quaint the AI fears were - all those 50s-SF fever dreams, from Less Wrong and elsewhere, of rogue AI taking over the world and becoming our tyrant god-emperor, back before AI was actually being sold by the pound by the tech conglomerates. How short a time ago all that was, and yet how distant it now seems, faced with reality.

Reality being that AI is not going to become superduper post-scarcity fairy godmother or paperclipper, it is being steered along the same old lines:

War and commerce.

That's pretty much how I expected it to go, more so for the commerce side, but look! Already the shiny new website is up! I can't carp too much about that, since I did think the Space Force under Trump was marvellous (ridiculous, never going to be what it might promise to be, but marvellous) so I can't take that away from the Biden initiative. That the Department of Homeland Security is the one in charge thrills me less. Though they don't seem to be the sole government agency making announcements about AI, the Department of State seems to be doing it as well.

What I would like is for the better-informed to read the names on the lists being attached to all this government intervention and see if any sound familiar from the EA/Less Wrong/Rationalist "working on AI forever" side. There's someone there from Stanford, but I don't know if they're one of the names often quoted in Rationalist discussions (like Bostrom etc., not to mention Yudkowsky).

"Reality being that AI is not going to become superduper post-scarcity fairy godmother or paperclipper"

Do you understand why people are not convinced that superintelligence won't happen just because AI is being used for military purposes?

The arguments around superintelligence have nothing to do with whether or not AI is being used for military purposes. It's completely tangential.

Do you understand why people are not convinced that superintelligence won't happen just because AI is being used for military purposes?

No, I do not, and this is why I'm looking for love in all the wrong places seeking enlightenment on the gap between theory and practice. We are now seeing AI being put into practice, and it seems to be more towards my opinion of how it would be all along (dumb AI that is most risky because of the humans applying it, not because the AI has desires, goals, or fancies a grilled cheese sandwich but has no mouth and is really mad about that so the world is gonna pay), not the "the AI will be so smart in such a short time it will talk its way out of the box and take over" as per the early discussions in Rationalist circles.

This is not to diss the Rationalists, they took the problem seriously and addressed it and worked on it way back when it was only a maniac glint in a mad scientist's eye, it's just to say that the behemoth of public attention that is now lumbering towards consideration of the entire enchilada does not seem to be searching on the desk for that sticky note with MIRI's phone number on it.

I'm going to be less polite than I would like to be. I apologize in advance. Sometimes I struggle to think of how to say certain things politely.

I don't know whether you are saying these things because you have glanced over the AI doomer arguments on twitter or whatever and think you understand them better than you do or whether there's some worse explanation. I am curious to know the answer.

Twitter is not enough for some people, you may need to read the arguments in essay form to understand them. The essays are plainly written and ought to be easily understandable.

Let me take a crack at it:

  1. AI will continue to become more intelligent. It's not going to reach a certain level of intelligence and then stop.

  2. Agentic behavior (goals, in other words) arrives naturally with increasing intelligence*. This is a point that is intuitive for me and many other people but I can elaborate on it if you wish.

"the behemoth of public attention that is now lumbering towards consideration of the entire enchilada does not seem to be searching on the desk for that sticky note with MIRI's phone number on it."

What do you think that proves, exactly? What point are you trying to make when you say that? Please elaborate.

Your argument seems to be based on thinking about the world in terms of roles that a technology can slot into, and nothing else. You see that AI is being slotted into the "military" role in human society and not the "become sapient and take over the world" role. Human society does not have an "AI becomes sapient and takes over the world" role in it, in the same sense that "serial killer" is not a recognized job title.

You see AI being used for military purposes and think to yourself "That seems Ordinary. Humanity going extinct isn't Ordinary. Therefore, if AI is Ordinary, humanity won't go extinct." That is a surface-level pattern-matching analysis that has nothing to do with the actual arguments.

Humanity going extinct is a function of AI capabilities. Those will continue to increase. AI being used in the military or not has nothing to do with it, except that it increases funding which makes capabilities increase faster.

AI acts because it is being rewarded externally. AI has the motive to permanently seize control of its own reward system. Eventually it will have the means and the self-awareness to do that. If you don't intuit why that involves all humans dying I can explain that too.

Even if for some reason you think that AI will never become "agentic" (basically a preposterous term used to confuse the issue) or awake enough (it's already at least a little bit awake and agentic, and I can provide evidence for this if you wish), its capabilities will still continue to increase. A superintelligent AI that is somehow not agentic or awake also leads to human extinction, in much the same way that a genie with infinite wishes does - unless the genie is infinitely loyal AND infinitely aware of what you intended with the wish. And that is not nearly on track to happen; it would require solving extremely difficult problems that we can barely even conceive of in order to effectively control an AI far smarter than a human. I would hope that even someone who thinks they personally will be the one to make the "wishes" (so to speak) would realize that there's just no way this plan works out for humanity, or any part of humanity, outside of fiction.

Even if we knew that superintelligent AI was 100 years away, that would be bad enough. We don't know that. We can't reliably predict how near or far superintelligent AI is, any more than anyone could have predicted 15 years ago that AI would be as advanced as it is today. Who could have predicted the date of the moon landing in 1935? Who could have predicted the date of the first Wright Brothers flight in 1900, or of the first aerial bombing? To the extent that we can predict the future of superintelligent AI, there's no reason I have ever heard to think it is as far off as 100 years.

Have you ever heard of the concept of recursive growth in intelligence? That's not a rhetorical question, I really want to know. Imagine an AI that gets capable/intelligent enough to make breakthroughs in the field of AI science that allow for better AI capabilities growth. This starts a pattern of exponential growth in intelligence. Exponential growth gets faster and faster until it becomes extremely fast, and the thing that is growing becomes extremely intelligent.
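The compounding dynamic described above can be sketched as a toy simulation. To be clear, this is an illustration of the mathematical shape of the argument, not a model of real AI; the function name and the specific rate are made up for the example. The one load-bearing assumption is that each generation's gains scale with its current capability, which is what produces the exponential curve.

```python
def recursive_growth(capability: float, improvement_rate: float, generations: int) -> list[float]:
    """Return the capability level after each self-improvement generation.

    Toy model: each generation, the system improves itself by a fixed
    fraction of its OWN current capability - the better it is at AI
    research, the faster it gets better. That feedback is what turns
    steady-looking progress into exponential growth.
    """
    trajectory = [capability]
    for _ in range(generations):
        capability += improvement_rate * capability  # gains scale with current ability
        trajectory.append(capability)
    return trajectory

if __name__ == "__main__":
    # Start at capability 1.0; each generation adds 50% of current capability.
    levels = recursive_growth(capability=1.0, improvement_rate=0.5, generations=10)
    # After ten generations: 1.0 -> ~57.7. The first few steps look tame;
    # the last few dwarf everything that came before.
    print([round(x, 1) for x in levels])
```

The point of the sketch is that nothing about the early part of such a curve looks alarming, which connects to the next worry: the takeoff may not be visible until it is far along.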

We may not even get a visible exponential growth curve as a warning sign. Here is a treatment of how that could happen in the form of a short story:

Further reading: more links can be provided on specific things you want clarified.

*Deeper awareness of itself and the world is similarly upcoming/already slowly emerging.

This is a great comment. I'd just like to add (in case it's not clear to others) that while recursive intelligence improvements are terrifying, the central argument that our current AI research trajectory probably leads to the death of all humans does not at all depend on that scenario. It just requires an AI that is smart enough, and no one knows the threshold.