
Culture War Roundup for the week of April 22, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I've been asked to repost this in the Culture War thread, so here we go.

I read this story today and it did amuse me, for reasons to be explained.

Fear not, AI doomerists, Northrop Grumman is here to save you from the paperclip maximiser!

The US government has asked leading artificial intelligence companies for advice on how to use the technology they are creating to defend airlines, utilities and other critical infrastructure, particularly from AI-powered attacks.

The Department of Homeland Security said Friday that the panel it’s creating will include CEOs from some of the world’s largest companies and industries.

The list includes Google chief executive Sundar Pichai, Microsoft chief executive Satya Nadella and OpenAI chief executive Sam Altman, but also the heads of defense contractors such as Northrop Grumman and air carrier Delta Air Lines.

I am curious if this is the sort of response the AI safety lobby wanted from the government. But it also makes me think, in hindsight, how quaint the AI fears were - all those 50s SF fever dreams, from Less Wrong and elsewhere, of rogue AI taking over the world and becoming our tyrant god-emperor, back before AI was actually being sold by the pound by the tech conglomerates. How short a time ago all that was, and yet how distant it now seems, faced with reality.

Reality being that AI is not going to become a superduper post-scarcity fairy godmother or a paperclipper; it is being steered along the same old lines:

War and commerce.

That's pretty much how I expected it to go, more so for the commerce side, but look! Already the shiny new website is up! I can't carp too much about that, since I did think the Space Force under Trump was marvellous (ridiculous, never going to be what it might promise to be, but marvellous), and I can't take that away from the Biden initiative either. That the Department of Homeland Security is the one in charge thrills me less. And they don't seem to be the sole government agency making announcements about AI; the Department of State seems to be doing it as well.

What I would like is for the better-informed to read the names attached to all this government intervention and see if any sound familiar from the EA/Less Wrong/Rationalist side that has been working on AI forever. There's someone there from Stanford, but I don't know if they're among the names often quoted in Rationalist discussions (like Bostrom etc., not to mention Yudkowsky).

Related: how long do I have to wait before I can start calling LLMs a nothingburger? Everything that has come out of them seems so small and near-pointless. Marginal productivity increases at best. When does the fun stuff start happening?

Eliezer Yudkowsky has successfully held off the Skynet overlords and if you want this state of affairs to continue, you should send him more money.

Jokes aside, while I agree that so far the productivity increases are marginal, the technology is genuinely remarkable compared to what most people anticipated a few years ago. I can ask the LLM how to do incredibly boring softwareshit and it usually gives me the right idea, saving me the effort of going to Stack Overflow and other sites and reading through them myself. And it actually writes code for me that works like 70% of the time, which is great because it means I can spend less time on perhaps the most boring activity ever devised, writing business software for other people, and instead use the time to do something more interesting, such as pretty much anything else.

All this might not seem like much, but it would have seemed like an utterly crazy leap of technology a few years ago. The AIs are also making good visual art and decent music left and right. I think the economic changes are slowly creeping up on us; it might not be obvious now what the current AI revolution has done, but it will be obvious in a few years.

Skynet doesn't seem to be right around the corner, but the people who worry about it have a point: while the current AI stuff isn't Skynet, if one draws a line between AI capability 10 years ago and AI capability now, and extrapolates the same line 10 years forward... Of course extrapolating the line isn't good science, but there's no particular reason to think that the line's slope will decrease.
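To make the shape of that argument concrete, here's a minimal sketch in Python. The dates and "capability" scores are entirely made up for illustration; the point is only the extrapolation step, not the numbers.

```python
# A toy illustration of the "extrapolate the line" argument.
# The capability scores below are invented for illustration only.

years = [2014, 2024]          # ten years ago vs. now
capability = [1.0, 100.0]     # made-up "AI capability" scores

# Fit a straight line through the two points: slope per year.
slope = (capability[1] - capability[0]) / (years[1] - years[0])

# Extrapolate the same line ten years forward.
projected_2034 = capability[1] + slope * 10
print(projected_2034)  # 199.0 -- whatever the units, the line keeps climbing
```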

Personally, my attitude to all the AI risk stuff is the same as my attitude to climate change. I think the concerns about both are probably well-founded; I just don't really care much about either on the emotional level. I guess that's one of the nice things about not having kids.

I also think that AI doomers are underrating the possibly beneficial things that super-powerful AI could bring. I mean, yeah, there's a chance that humans will be replaced by AI overlords, but there's also a chance that super-powerful AIs will have no desire to destroy us and instead will give us a bunch of good things.

"I also think that AI doomers are underrating the possibly beneficial things that super-powerful AI could bring. I mean, yeah, there's a chance that humans will be replaced by AI overlords, but there's also a chance that super-powerful AIs will have no desire to destroy us and instead will give us a bunch of good things."

How are you on this website without realizing how hard it is to control a superintelligent AI? Have you not thought about that? I think that you are thinking "AI can either be aligned to human values or not. Sounds like 50/50."

In fact, aligning a superintelligence to human values is extremely difficult and extremely unlikely to happen by accident. Human values occupy only a tiny slice of the spectrum of possible minds.

It kind of feels like people vastly overrate the degree to which they understand the arguments of AI doomers. Like they're just going by a few tweets they read. Twitter is not a good way to fully understand a contentious subject.

I fully understand that it would be nearly impossible for humans to control a superintelligent AI. I just don't care much about it. I don't have any children. If humanity were destroyed by a superintelligent AI, my attitude would, aside from the obvious terror, probably also include some mirth: the lords of the known world, those who conquered all those other species, now destroyed by the same cold Darwinian logic of reality.

My point is that, while the Skynet scenario is definitely possible, the altruistic AI that loves humans scenario is also possible. There's no particular reason to think that a hyperintelligent AI would have the sort of hardwired "kill all opposition" motivation that we humans have as a result of evolving through billions of years of eat-or-be-eaten competition. Of course AI, like everything else in reality, is subject to natural selection, but there is no reason to think it would be subject to natural selection in a way that makes it violent in the ways that we humans are violent.

"the altruistic AI that loves humans scenario is also possible."

It is not realistically possible. It would be like firing a very powerful rocket into the air and having it land on a specific crater on the moon with no guidance system or understanding of orbital mechanics. Even if you try to "point" the rocket, it's just not going to happen.

You're thinking that AI might have some baseline similarity to human values that would make it benevolent by chance or by our design. I disagree. EY touches on why this is unlikely here:

https://intelligence.org/2016/03/02/john-horgan-interviews-eliezer-yudkowsky/

It's not a full explanation, but I have work I should be getting back to. If someone else wants to write more, then they can. There are probably some Robert Miles videos on why AI won't be benevolent by luck.

Here's one:

https://youtube.com/watch?v=ZeecOKBus3Q

I'm not going to watch it again to check, but it will probably answer some of your questions about why people think AI won't be benevolent through random chance (or why we aren't close to being skilled enough to make it benevolent on purpose). Other videos on his channel may also be relevant.

"It is not realistically possible. It would be like firing a very powerful rocket into the air and having it land on a specific crater on the moon with no guidance system or understanding of orbital mechanics. Even if you try to 'point' the rocket, it's just not going to happen."

Oh bullshit. Intelligent agents co-align; that is, they modify themselves and one another to become more aligned with one another. It's not one rocket that has to be perfectly aimed, it's a billion rockets with rubber-banding.