Culture War Roundup for the week of March 6, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

It is my belief that after the AI takeover, there will be less and less human-to-human interaction. This is partly because interacting with AI will be preferable in every way, but also because safetyism will become ever more powerful. Any time two humans interact, there is the potential for someone to be harmed, at least emotionally. With no economic woes and nothing else to do, moral busybodies will spend their time interfering with how other people spend theirs, until interacting with another human is so morally fraught and alienating that there is no point. Think about it: who would you rather spend time with, an AI who will do whatever you want and be whatever you want, anytime, or a grumpy human on her own schedule who wants to complain about someone who said "hi" to her without her consent? The choice seems obvious to me.

I expect AI to reduce safetyism, because safetyism is, on the optimistic view, a product of uncertainty and miscommunication. If you have poor eyesight, you wear glasses; if you have poor hearing, you wear a hearing aid. My expectation is that many, if not most, people will opt into prosthetics that give them improved social cognition: a feeling, in advance, for how something you're intending to say will be received. Alternatively, you could literally have the AI translate vernacular, sentiment, and idioms, which will be useful when leaving your peer group. It will also be much easier to stay up to date on shibboleths, or to judge cultural fit in advance.

Humanity suffers from a massive lack of competence on every axis imaginable. We cannot now imagine how nice the post-singularity world will be, but as a floor, consider a world where everyone is good at everything at will, including every social skill.

My expectation is that many to most people will opt into prosthetics that give them improved social cognition: a feeling, in advance, for how something you're intending to say will be received.

I think you have a fundamental misunderstanding of why some utterances are received poorly.

It's not about knowing enough cultural sensitivities to avoid a faux pas, because faux pas aren't really caused by cultural insensitivity (which would be legible to an AI). Whether or not offense is taken is a choice of the listener, not a condition of the zeitgeist. If your interlocutor woke up on the right side of the bed this morning, conversation will go smoothly. If they woke up on the wrong side of the bed, they'll claim to be offended by your aspie stutterings. It depends on the fundamentally invisible qualia of your conversation partner, not on any legible, predictable, objective feature of language.

I am reminded of the fall of Lord Rennard, brought down because he made "unwanted sexual advances". How could he have known they would be unwanted? Sorry, pal: whether or not they're unwanted can only be decided inside the woman's head, unfalsifiably. I don't think anyone is going to agree to give up the power to destroy people at will because "Shucks, his AI told him she was asking for it, I guess he's off the hook!"

As such, I predict that "a prosthesis for social cognition" is impossible. Unless it's a maxillofacial prosthetic; that would successfully produce the desired effect.

Do you think it's okay that some people have AI companions, or do you think that those people should be forced to suffer eternally for no fucking reason?

I hardly know where to start with this, mostly because the part after the comma bears no connection to the part before the comma.

Do I think it's OK for some people to have AI companions? What do you mean by "companions"? Do you mean AI GFs, or do you mean the AI social cognition prostheses discussed previously? In any case, I think AI GFs are bad because they're edging towards wireheading, and wireheading is bad. And I think AI social cognition prostheses are impossible.

As for the people without AI companions being forced to suffer eternally for no fucking reason:

  • Why is tfw no AI gf "eternal suffering"?

  • Who's forcing them?

  • There are very good reasons for people not to have AI GFs: they're expensive to run, they make it more difficult for the user to get a real gf, and there are moral problems with creating arguably semiconscious entities if you're only going to let them be an incel's ERP plaything.