Culture War Roundup for the week of September 23, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


The only danger AI, in its current implementation, poses is the risk that morons will mistake it for actually being useful and rely on the bullshit it spits out. Yes, it's impressive. But only insofar as it can summarize information that's otherwise easily available. One of the reasons my Pittsburgh posts have been taking as long as they have is that I'll go down a rabbit hole about a news story from 25 years ago whose details I can't quite remember, and spend a while trying to dig up old newspaper articles so that I have my facts straight and reach the appropriate conclusions. I initially thought that AI would help me with this, since all the relevant information is on the internet and discoverable with some effort, but everything it gave me was either too vague to be useful or factually incorrect. If it can't summarize newspaper articles that don't have associated Wikipedia entries, then I'm not too worried about it. I'd have much better luck going to the Pennsylvania room at the Carnegie Library and asking the reference librarian for the envelope of categorized newspaper clippings that they still collect for this purpose.

I beg you to consider the possibility that progress in AI development will continue. The doomers are worried about future models, not current ones.

I don’t think that most doomers actually believe in a very high likelihood of doom. Their actions indicate that they don’t take the whole thing seriously.

If you actually believed that AI was an existential risk in the short- or medium-term, then you would be advocating for the government to seize control of OpenAI’s datacenters effective immediately, because that’s basically the only rational response. And yet almost none of them advocate for this. “If we don’t do it then someone will” and “but what about China?” are very lame arguments when the future of the entire species is on the line.

It’s very suspicious that the most commonly recommended course of action in response to AI risk is “give more funding to the people working on AI alignment, also me and my friends are the people working on AI alignment”.

For what it’s worth, I don’t think that capabilities will advance as fast as the hyper-optimists expect, but I also don’t think that p(doom) is 0, so I would be quite fine with the government seizing control of OpenAI (and all other relevant top-tier labs) and either carrying on the project in a highly sequestered environment or shutting it down completely.

What makes the government less likely to create an AI apocalypse with the technology than OpenAI? And just claiming an argument is lame does not refute it.

There is an argument to be made that if you want to stop the development of a technology dead in its tracks, you let the government (or any immensely large organization with no competition) do the resource allocation for it.

If the US government had a monopoly on space travel by law, we wouldn't have satellite internet the way we do right now. We might actually have lost access to space for non-military applications altogether.

Of course, this argument only holds as long as the technology isn't core to one of the organization's few areas of actual competition, namely war.

But I feel like doomers are merely trying to stop AI from escaping the control of the managerial class. Placing it in the hands of the most risk averse of the managers and burdening it with law is a neat way of achieving that end and securing jobs as ethicists and controllers.

It's never really been about p(doom) so much as p(ingroup totally unable to influence the fate of humanity in the slightest going forward).

It's never really been about p(doom) so much as p(ingroup totally unable to influence the fate of humanity in the slightest going forward)

Yes, I think this is what it actually comes down to for a lot of people. The claim is that our current course of AI development will lead to the extinction of humanity. Ok, maybe we should just stop developing AI in that case... but then the counter is that no, that just means that China will get to ASI first and they'll use it to enslave us all. But hasn't the claim suddenly changed in that case? Surely if AI is an existential risk, then China developing ASI would also lead to the extinction of humanity, right? How come if we get to ASI first it's an existential risk, but if China gets there first, it "merely" installs them as the permanent rulers of the earth instead of wiping us all out?

I suppose there are non-zero values you could assign to p(doom) and p(AGI-is-merely-a-superweapon), with appropriate weights on those outcomes, that would make it all consistent. But I think the simpler explanation is that the doomers just don't seriously believe in the possibility of doom in the first place. Which is fine. If you just think that AI is going to be a powerful superweapon and you want to make sure that your tribe controls it then that's a reasonable set of beliefs. But you should be honest about that.

The only minor quibble I have with your post is when you said "doomers are merely trying to stop AI from escaping the control of the managerial class". I think there are multiple subsets of "doomers". Some of them are as you describe, but some are actually just accelerationists who want to imagine themselves as the protagonist of a sci-fi movie (which is how you get doomers with the very odd combination of beliefs "AI will kill us all" and "we should do absolutely nothing whatsoever to impede the progress of current AI labs in any way, and in fact we should probably give them more money, because they're also the people best equipped to save us from the very AI that they're developing!").

I think there are multiple subsets of "doomers".

That's fair, this is an intellectual space rife with people who have complicated beliefs, so generalizing has to be merely instrumental.

That said, I think it is an accurate model of politically relevant doomerism. The revealed preference of the Yuddites is to get paid by the establishment to make sure the tech doesn't rock the boat and respects the right moral fads. If they really wanted to avoid doom at any cost, they'd be engaging in a lot more terrorism.

It's the same argument Linkola deploys against the NGO environmentalist movement: if you really think that the world is going to end if a given problem isn't solved, and you're not willing to discard bourgeois morality to solve the problem, then you are either a terrible person by your own standards, or actually value bourgeois morality more than you do solving the problem.

I’m coming to this discussion late, but this assumes that discarding bourgeois morality will be better at achieving your goals, when we see from BLM and Extinction Rebellion that domestic terrorism can have its own counterproductive backlash. How do we know they aren’t entirely willing to give up bourgeois morality, they just don’t see it as conducive to their cause?

It doesn't assume. Linkola actually builds the argument, convincingly in my opinion, that if radical change is required to solve the problem, as conceptualized by ecologists, that change is incompatible with democracy, equality and the like. Most people cannot be convinced peacefully to act against their objective interest in the name of ideas they do not share.

ER and BLM are exactly the sort of people criticized here. When your idea of eco-terror is vandalizing paintings to call out people doing nothing, you're not a terrorist, you're a clown.

Serious radical eco-terrorists would destroy infrastructure, kill politicians, coup countries, sabotage on a large scale and generally plot to make industrial society impossible.

In many ways, the Houthis and Covid have done more toward this than the NGOs who say they're doing it, and that's entirely by accident.
