
Culture War Roundup for the week of April 22, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I've been asked to repost this in the Culture War thread, so here we go.

I read this story today and it did amuse me, for reasons to be explained.

Fear not, AI doomerists, Northrop Grumman is here to save you from the paperclip maximiser!

The US government has asked leading artificial intelligence companies for advice on how to use the technology they are creating to defend airlines, utilities and other critical infrastructure, particularly from AI-powered attacks.

The Department of Homeland Security said Friday that the panel it’s creating will include CEOs from some of the world’s largest companies and industries.

The list includes Google chief executive Sundar Pichai, Microsoft chief executive Satya Nadella and OpenAI chief executive Sam Altman, but also the heads of defense contractor Northrop Grumman and air carrier Delta Air Lines.

I am curious whether this is the sort of response the AI safety lobby wanted from the government. But it also makes me think, in hindsight, how quaint the AI fears were - all those 50s SF fever dreams of rogue AI taking over the world and becoming our tyrant god-emperor, from Less Wrong and elsewhere, back before AI was actually being sold by the pound by the tech conglomerates. How short a time ago all that was, and yet how distant it now seems, faced with reality.

Reality being that AI is not going to become a superduper post-scarcity fairy godmother or a paperclipper; it is being steered along the same old lines:

War and commerce.

That's pretty much how I expected it to go, more so on the commerce side, but look! Already the shiny new website is up! I can't carp too much about that, since I did think the Space Force under Trump was marvellous (ridiculous, never going to be what it might promise to be, but marvellous), so I can't take that away from the Biden initiative. That the Department of Homeland Security is the one in charge thrills me less, though it doesn't seem to be the sole government agency making announcements about AI; the Department of State seems to be doing it as well.

What I would like is for the better-informed to read the names attached to all this government intervention and see if any sound familiar from the EA/Less Wrong/Rationalist side that has been working on AI forever. There's someone there from Stanford, but I don't know if they're the same as the names often quoted in Rationalist discussions (like Bostrom etc., not to mention Yudkowsky).

Is there anything more to your point here than "AI currently exists and may have military applications, therefore there will never be a dangerous superhuman AI", which is an obvious non sequitur?

Are you trying to vaguely imply that reality is only allowed to have appropriately gritty and cynically-themed things in it like War And Commerce, as shown by this development, and therefore superintelligence is impossible because it would be inappropriate for the genre? Because weird implausible flight-of-fancy sci-fi stuff actually happens all the time and then rapidly becomes normal. You're currently on the global pocket supercomputer network, for example.

Is there anything more to your point here than "AI currently exists and may have military applications, therefore there will never be a dangerous superhuman AI", which is an obvious non sequitur?

My dears, my darling, my honey, my sweetie-pie:

Thank you for yours of the 28th inst., your reply has been received and noted and will be actioned whenever (if ever) I can be arsed to do so.

This is indeed a reaction, and is helpful for me to note and keep track of various opinions. So, shall us put 'ee down for "it's all copacetic", shall us?


My dears, my darling, my honey, my sweetie-pie:

Stop this.

To clarify, dear - it's the demeaning intent and attitude that is unacceptable, rather than the precise lexicon. Is this correct?

Yes. We allow some latitude for sarcastic or snippy responses, but we discourage it, and if you go out of your way to be condescending and sarcastic, you're going to get told to knock it off. And @FarNearEverywhere has been told many times.

Understood Sir.

No.

I've put up with you doing the Nanny bit because you're a mod and you have the authority, but I'm not going to accept sneering without responding in kind.

If OP can be polite in their response, I'll be polite in return. If OP goes on about how "reality is only allowed to have appropriately gritty" and so on, I'll respond in the same tone.

You can tell me I'm wrong, you can tell me I'm banned, but you can't tell me how to feel my feelings.


No one is telling you how to feel your feelings. You know that having feelings and how you express them are two different things.

You get cut more slack than you know because people (including me) actually like you quite a lot, despite your inability to control your feelings and your tendency to respond to even the least little bit of poking with explosions. So be assured that the contempt you are showing me now and have shown me in the past is not taken personally.

That said: replying to a mod telling you directly to stop doing something with a foot-stamping "No, not gonna, you can't make me, you're not the boss of me" temper tantrum is an escalation, and it's a response that you clearly chose. So yes, banned.

I don't need or want to deal with this nonsense right now, so I will let the other mods decide when or if to end your ban.

You guys are making some really terrible decisions lately.

"Stop doing this." "No."

That is always going to get you a ban, and this is not new.

Your first modhat comment was also bad.


imo there should be a blanket policy that mods have to recuse themselves from moderating direct replies to their own posts (just get a different mod to do it).

Basically what @madeofmeat said. If a mod is in a discussion thread as a participant and someone says something rude/antagonistic to the mod, we generally will recuse ourselves and let another mod adjudicate. (This is not a "blanket policy." If you reply to me by saying "go fuck yourself" - something that has actually happened - I don't feel a need to recuse myself in handing out a ban.) But if a mod modhats you and you reply to the modhat comment with antagonism, you're escalating, and that mod is entitled to decide that the message you're sending is "I will not follow the rules and need more serious consequences."

Note also that no one ever gets banned for responding to a modhat comment by saying "I think your moderation is bad and I didn't deserve to be modded." We probably won't agree with you, but we don't ban people just for arguing or disagreeing with us. What @FarNearEverywhere did was flat-out say "No, I will not follow the rules." If she'd just omitted the "No," I'd probably have told her (again) to regulate herself and stop using her feelings as an excuse. If she'd wanted to debate why her post was too condescending while the one she was responding to (which she claims started it) was not, I might or might not have indulged her, but I wouldn't have banned her.

But if a mod says "Stop doing this" and you say "I will not stop doing this," well, what kind of response are you expecting?

It makes sense that if the mod started out as a regular participant in the conversation, they should be hesitant to switch to modhat posting. When the first thing the mod posts in the conversation is a modhat post, it doesn't make sense that they'd need a second mod to make more modhat posts.

There is truly a Hlynka-sized hole in the moderation team. This kind of petty shit is getting worse and worse, and the King's court is really struggling to conceptualize their subjects as agents.

What makes you think Hlynka wouldn't ban her even sooner? He had an extremely short fuse as a moderator, and his decisions always struck me as arbitrary.
