
Culture War Roundup for the week of March 6, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


It is my belief that after the AI takeover, there will be less and less human-to-human interaction. This is partly because interacting with AI will be preferable in every way, but also because safetyism will become ever more powerful. Any time two humans interact, there is the potential for someone to be harmed, at least emotionally. With no economic woes and nothing to do, moral busybodies will spend their time interfering with how other people spend theirs, until interacting with another human is so morally fraught and alienating that there is no point. Think about it: who would you rather spend time with, an AI who will do whatever you want and be whatever you want, anytime, or a grumpy human on her own schedule who wants to complain about someone who said "hi" to her without her consent? The choice seems obvious to me.

I appreciate the argument that AIs can be a super stimuli, but the need for social validation is enormously important for most people and I'm doubtful AI can meaningfully give that.

(Also most people are way more pleasant than your example)


Not in America, at least never to me. (Okay maybe that's not entirely true, but as feminists have taught me, you have to exaggerate social harms for other people to take them seriously.)

I've thought a lot about this, and the need for social validation is a problem if the person refuses to accept the AI as 'real'. Some people will simply decide that AIs are real enough, but others will still seek validation from humans. This second group is where the moral busybodies live, and that community will evaporatively cool until it is such a toxic cesspit that no real validation is possible. Some people will still compulsively seek it, but most people will just find some way to convince themselves that the validation from the AI is legitimate.

Part of my point with this line of argument is to try to wake the moral busybodies up to how they are destroying society. These are the same people who say they don't want a world where everyone is isolated in their own bubble, yet that is exactly what their efforts are creating.

Have you heard about the Replika controversy? A lot of people got very sad that their pseudo-gf ERPing partner got lobotomized: https://old.reddit.com/r/replika/top/

It's easy to sneer at these people (especially the guy who Capitalizes Nearly Every Word), but even fairly primitive AI can provide emotional connection. Give it a face and some more compute and see what's possible.

Grandma always said not to fall in love with entities I couldn't instantiate on my own hardware.

Right now I expect it's mostly desperate men using these, but that may have more to do with broader tech adoption patterns than specific appeal. These things can function as interactive romance novel characters, and many women may find that quite compelling.

We're entering uncharted and to some extent even unimagined territory here. Anyone who has thought about this issue realized long ago that AI romance would be a thing eventually, but personally I figured that for it to have much wider appeal than marrying your body pillow, AI would have to achieve human-like sentience. And if the thing someone is falling in love with has human-like sentience, well, who am I to say that's invalid?

What I didn't imagine is that we'd build machines that talk well enough for interacting with them to light up the "social interaction" parts of our brains effectively, but that we can be pretty certain, based on their performance in edge cases and our knowledge of how they work, aren't sentient at all. People falling in love with things that have no inner existence feels deeply tragic.

I don't know. Maybe this is faulty pattern matching or an arbitrary aesthetic preference on my part, and romantic attachment to non-sentient AI is fine and great and these people will find meaning and happiness. (At least as long as they follow grandma's rule, which they can soon.)

Just as a point of order, AI can be a superstimulus or it can offer superstimuli; the phrase "a superstimuli" should never occur, yet it seems to happen a lot for some reason.

I want to agree with you, but I don't know. Never mind the people currently claiming Bing is too human; people used to say that about IRC chatbots, and in Japan guys have straight-up married Hatsune Miku and a Nintendo DS. Social connection is a bit like food or water: when you aren't getting any, a little goes a long way.