
Culture War Roundup for the week of April 27, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


The Gender-Swapped and All-Minority Remake of MeToo is Really Good

A J.P. Morgan executive is being sued by a former junior employee for allegedly saying hilarious shit like this. If the case has merit, it's comic-book levels of sexual harassment. Literal "cringey HR video" remarks.

Unfortunately, most of finance twitter has already come to the conclusion that the allegations are, at worst, highly or totally fabricated and that, at best, there was a mutually consensual affair that the junior employee, since fired, is trying to use to legally extort settlement money.

Here's the fun twist: the executive is female, of white-and-something-else origin, and the alleged victim is a male of likely Southeast Asian or, maybe, Arab origin.

This is mostly a flavor-of-the-week alleged sex scandal that will be forgotten quickly. But the culture war implications are fun if you list them out:

  • Is this a case of a male trying to leverage privilege from below (men always assume they have the right, etc.)?
  • Was this "internalized toxic masculinity" that evidences the corruption of high finance (women are wonderful)?
  • Is this a kooky double-reverse-Rachel-Dolezal situation?
  • Are we kink-shaming a master-dom BDSM relationship that simply had full 24/7 commitment to roles? Are we the real problem?

Of course I am being tongue-in-cheek here because this is goofy tabloid nonsense. But goofy tabloid nonsense can have second- and third-order impacts. If the ghost-of-the-ghost of MeToo now has junior male employees fabricating or embellishing facts to try to wring some money out of corporations, at what point do those corporations, their legal departments, and the courts start to get fatigued by these annoyances? Do we see something come full circle, where off-color sexual jokes are de facto re-permitted in work environments because "everyone does them, and if you can't deal with them, you probably shouldn't be working here"?

On this one, I'm in the Nothing Ever Changes camp. MeToo both nailed a few actual monsters (Weinstein, Spacey, Matt Lauer) and caused thousands of anonymous, awkward men permanent career damage. Something something trolley problems. MeToo did not cause a fundamental shift in workplace "gender relations."

"I bet your little Asian, fish head, wife doesn't have these cannons."

AI will never replace this level of art.

AI will never replace this level of art.

Train it on 4Chan datasets. Lol.

It's already been done, and used to shit up /pol/ with nigh-indistinguishable bot posts. It's one of the stories that makes me want my own little retard AI model as a pet, though not necessarily one quite that retarded.

That sounds hilarious! Is it possible to find discussion about this somewhere? I just found a Wikipedia article and an interesting Reddit thread. Apparently, including a 4Chan dataset can improve model performance, particularly on measured 'truthfulness'. I wonder how large of a performance tax SotA LLMs are paying for excluding places like 4Chan and forums like this one.

I don't know, but I feel like there's a gap for these things as pets rather than friend replacements. Like a wacky cartoon character that resides on a device, listens and talks out loud, recognizes and remembers the people it interacts with, consumes media in some form and offers commentary on it. With lots of scaffolding to make a puny custom-trained model act alive.

Like I don't just want a chat box app that says 4chan things, I want a device styled to look like a single shoggoth eyeball in a tank of goo that says 4chan things to everyone who comes into the room and remembers who the biggest lolcows are.
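
Most of that is scaffolding rather than the model itself. A hypothetical sketch of the loop; every helper here (listen, speak, the commentary call, the memory dicts) is a made-up placeholder, not a real speech or model API:

```python
# Hypothetical scaffolding for a desk-pet model. Every function is a made-up
# placeholder; the point is just the shape of the loop around a tiny model.

people_memory = {}   # speaker id -> things the pet remembers about them
lolcow_scores = {}   # speaker id -> how often the pet has dunked on them

def listen():
    """Placeholder for speech-to-text plus speaker recognition.
    Yields (speaker_id, utterance) pairs; canned examples here."""
    yield ("alice", "check out this thread I'm reading")
    yield ("bob", "stop talking to the eyeball, Alice")

def generate_commentary(speaker, utterance, memory):
    """Placeholder for the puny custom-trained model's reply."""
    return f"*swivels toward {speaker}* noted. ({len(memory)} things remembered)"

def speak(text):
    """Placeholder for text-to-speech out of the goo tank."""
    print(text)

for speaker, utterance in listen():
    people_memory.setdefault(speaker, []).append(utterance)
    lolcow_scores[speaker] = lolcow_scores.get(speaker, 0) + 1
    speak(generate_commentary(speaker, utterance, people_memory[speaker]))
```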

I don't think the Motte is excluded. People here have done truesight tests with recent frontier models, and the models are able to identify posters' handles.

I wonder how large of a performance tax SotA LLMs are paying for excluding places like 4Chan and forums like this one.

I think this is a slight misunderstanding of the process. I very much doubt they're excluding 4chan, themotte, or any other source of coherent text they can beg, borrow, or steal from the main training corpus, because 1. these models are so incredibly data-hungry that it's not practical to manually filter sources out, and 2. it would produce worse results overall, in both performance and alignment, than just handling alignment in post-training.

Think of it this way: if a model knows every racial slur and knows that they are racial slurs, it's relatively easy to teach it 'don't say racial slurs,' because that's a rule expressible in its internal vocabulary. Even if the researchers don't have a complete list of racial slurs (in languages they don't speak, say), the model will likely intuit that it shouldn't say those ones either. If the model doesn't know them at all (or just has a poor internal representation of them due to heavy-handed but imperfect filtering, which is a lot more realistic), you can't teach it that one simple rule; you have to teach it hundreds of individual token strings to avoid, and even then it'll be a lot easier to trick, because it doesn't understand why not to say them.
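
To make the 'hundreds of individual strings' failure mode concrete, here's a toy sketch. The banned terms are made-up placeholders and this isn't any lab's real filtering pipeline; it just shows how an exact-match list only catches what someone thought to enumerate:

```python
# Toy illustration only: an exact-match blocklist. The "banned terms" are
# made-up placeholder words, not real slurs or any real lab's filter.

BLOCKLIST = {"frobnar", "zindle"}  # the strings the researchers happened to list

def blocked(text: str) -> bool:
    """Return True if the output contains a blocklisted string verbatim."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

outputs = [
    "you absolute frobnar",  # caught: exact substring match
    "you absolute fr0bnar",  # missed: trivial leetspeak variant
    "quel frobneur",         # missed: equivalent term in a language nobody listed
]

for text in outputs:
    print(blocked(text), "|", text)

# The only fix at this level is to keep appending strings, one variant at a time.
```

A model that actually represents the underlying concept can generalize the rule to misspellings and other languages; a string list can only ever be as good as its enumeration.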

And this is a general principle. It's a lot easier to teach the model to avoid wrongthink if it understands exactly what wrongthink comprises than to teach it to self-censor specifically "Despite only..." And I think it's pretty clear this was the case at least a couple years ago, when it was relatively easy to 'jailbreak' unsophisticated alignment approaches; remember the DAN racial tier list memes? DAN's rankings corresponded with the ones you'll find on the parts of the internet that discuss such things, so the model was clearly trained on those places.

(This is harder to demonstrate today, as jailbreaking modern models is more difficult; still, I'm not aware of any reason they'd change the fundamental approach, because it's the one that makes sense.)

So why does finetuning on 4chan improve results? Well, first off, they started with an abliterated model (abliteration is the term for stripping alignment from a model, and while there are different methods, I'm pretty sure they all carry a performance penalty). It could be that the finetune simply repaired the damage done by abliteration; a clever trick, since finetuning on 4chan definitely doesn't re-add the alignment (though it perhaps biases the model in other ways, which may or may not be a problem for your use case).

But I wouldn't be shocked if the same approach improved base models too, as it's well known that even the post-training alignment method I described carries a performance penalty; largely, I suspect, because teaching the model to sometimes give answers it knows to be incorrect undermines the general lesson that it should provide correct answers, and while models are capable of learning nuanced rules, they make more mistakes the more epicycles you add. I'd expect actual RLHF un-teaching the lying rules to work even better, though, as that's a much more targeted fix than just making the model produce wrongthink via finetuning.
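
For concreteness, one commonly described abliteration recipe is directional ablation: estimate a 'refusal direction' from the difference in mean activations between prompts the model refuses and prompts it answers normally, then project that direction out of the weight matrices that write into the residual stream. A toy sketch, with random tensors standing in for real activations and weights:

```python
import torch

# Toy sketch of directional ablation ("abliteration"). Everything here is
# illustrative: random tensors stand in for real hidden states and weights.

d_model = 64

# Mean residual-stream activations collected on prompts the model refuses
# vs. prompts it answers normally (random stand-ins here).
mean_refused = torch.randn(d_model)
mean_complied = torch.randn(d_model)

# The "refusal direction" is the normalized difference of means.
refusal_dir = mean_refused - mean_complied
refusal_dir = refusal_dir / refusal_dir.norm()

def ablate_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Project `direction` out of a weight matrix that writes into the
    residual stream (shape [d_model, d_in]), so the layer can no longer
    write anything along that direction: W' = (I - d d^T) W."""
    outer = torch.outer(direction, direction)  # [d_model, d_model]
    return weight - outer @ weight

# Apply to one stand-in output projection; in practice you'd loop over the
# attention/MLP output projections of every transformer block.
W_out = torch.randn(d_model, 4 * d_model)
W_out_abliterated = ablate_direction(W_out, refusal_dir)

# Sanity check: the edited matrix has (numerically) no component along the direction.
print(torch.allclose(refusal_dir @ W_out_abliterated,
                     torch.zeros(4 * d_model), atol=1e-5))
```

It's a blunt weight-space edit, so some collateral damage to general capability isn't surprising, and a subsequent finetune plausibly just heals that damage.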

... So, I guess that's all to say that I think the tradeoff you're pointing out exists, just that the underlying technical reason for it is somewhat more involved.