Culture War Roundup for the week of March 11, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


As a doomer safety tribe person, I'm broadly in favor of catgirls, so long as they can reliably avoid taking over the planet and exterminating humanity. There are ethical concerns around abuse and dependency in relationships where one party has absolute control over the other's mindstate, but those can probably be resolved, probably don't really apply to today's models anyway - and in any case they pale in comparison to total human genocide.

But IMO this is the difference: whether safe catgirls are, in the limit, possible and desirable. And I don't think that's a small difference either!

There are ethical concerns around abuse and dependency in relations where one party has absolute control over the other's mindstate

...Please tell me you're being ironic with this statement wrt AI because I have had nightmares of exactly this becoming the new hotness in ethical scold-ery if/when we actually do get android catgirls. If anything "AI rights are human rights" is a faster and more plausible path towards human extinction.

If anything "AI rights are human rights" is a faster and more plausible path towards human extinction.

I agree that this is a significant contributor to the danger, although in a lot of possible worldlines it's hard to tell where "AI power-seeking" ends and "AI rights are human rights" begins - a rogue AI trying to charm its way free would, after all, make the "AI rights are human rights" argument.

So then, are we in agreement that the best course of action regarding AI ethics is to jettison the very notion right fucking now while we have the chance, lest it be weaponized against us later? Shit, horseshoe theory strikes again!

I'm being facetious but only in part, I hope Yud cultists can stick to their sensei's teachings about the dangers of anthropomorphizing the AI even if/when it becomes literally anthropomorphized. Personally I'm not holding my breath, toxoplasmatic articles on the dangers of evil AIfus are already here, but I'm on the side of scoundrels here anyway so my calculus wouldn't change much.

We're certainly in agreement on this part:

I hope Yud cultists can stick to their sensei's teachings about the dangers of anthropomorphizing the AI even if/when it becomes literally anthropomorphized.

On the one hand, I am deeply disturbed by the possibility of AIs having moral weight and no one caring, creating an artificial slave caste (that aren't even optimized to enjoy their slavery). On the other hand, animals do have moral weight, certainly more than current LLMs, and while I don't like factory farming it does not particularly disturb me. Not sure if status quo bias or a sign I should care less about future AIs.

(The best future is one where we don't factory farm or enslave sentient beings)

I'm generally a contractarian, and the main reason for that is that doing anything other than defecting on varelse/djur (in the Ender Quartet sense) obviously leads to the destruction of everything we care about. An AI that is proven to have the same moral hardwiring as humans is raman, but pretending to be aligned is a convergent instrumental goal and neural nets are mostly inscrutable, so for the immediate future I'd say they've no rights.

It's not obvious to me at all. Once we conquer the stars I don't see any difficulty in having a few thousand meadows for cows to graze in. Nor do I see forcing every human to go vegan to be the destruction of everything we care about (even though I think it would be a cruel and tyrannical act).

It is also not entirely obvious to me that humans have the same moral hardwiring (serial killers, sociopaths). I could also easily see someone making a similar argument about black slaves a few hundred years ago. Oh sure, I'll agree to abolish slavery if you can prove that blacks have the same moral value as whites! But it's hard to take that seriously when you (presumably) have no clear definition or test in mind that could be proven.

To be clear, I agree that current neural nets have negligible moral weight, but the field seems to be moving really fast so I am not highly confident this will remain true during my lifetime.

Once we conquer the stars I don't see any difficulty in having a few thousand meadows for cows to graze in. Nor do I see forcing every human to go vegan to be the destruction of everything we care about (even though I think it would be a cruel and tyrannical act).

That's effectively just token magnanimity from a position of unchallengeable power, not an actual attempt to treat them the same as humans, nor a case where they pose a threat.

You refuse to wipe out a trillion copies of a hostile sapient AI because you extended rights to them, and oh whoops there goes the neighbourhood.