Culture War Roundup for the week of April 20, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I occasionally see content about Clavicular (Clav, age 20) pop up in my algorithm. I used to ignore it because I felt like I had a good read on his schtick. I decided to watch a couple interviews to understand why he might be popular among others, and to better understand him from a psychological perspective.

The primary thing he is known for is being a spokesperson for looksmaxxing ideology. He believes looks are the most important factor in achieving positive social outcomes. He therefore believes in going to extreme lengths to optimize his own looks. His own looksmaxxing experiments include steroid usage at a young age, taking meth to stay lean, and altering his facial structure by hitting facial bones with a hammer/fist.

Digging deeper, it appears that he has social anxiety (he suspects he has autism), and he usually uses a cocktail of drugs to overcome this anxiety when streaming his social interactions. He recently overdosed while streaming, but made a quick recovery.

He is very in tune with social media trends and algorithmic manipulation. He knows how to clip farm and turn novelty into engagement. He uses weird terms like methmaxxing and jestermaxxing to increase the probability of a clip going viral. He also knows how to livestream and turn audience engagement into content.

His interviews tend to be a combination of him wanting to sperg out about looksmaxxing and him playing the role of clip farmer. The interviewers usually start out curious about Clav’s worldview, but then try to bait him into talking about his past controversies or play some rhetorical gotcha game. When Clav appears to have his drugs dialed in, he seems to achieve his goals in the interview (spreading looksmaxxing ideology and generating algorithmic engagement). Sometimes he just comes across as spaced out, like he is having a hard time following the logic (as if impaired by a substance).

My personal critique of him is that he is correct that looks matter, but he fails to realize the importance of balancing other skills and traits in order to achieve social success (like Aristotle's golden mean). I also think he is on a precipice with his drug use. He has the opportunity to taper and integrate the confidence he learned into his sober personality, but if he continues using his cocktail of drugs he will cause physical and mental injury to himself.

I’m far more interested in discussing the larger pattern that Clav is symptomatic of. Young men don’t see any viable paths to success, or have good role models for how they should live their lives. They look around and see the traditional paths (like college) are uncertain at best. They notice young women’s expectations have increased and they often don’t meet them. If they see a successful person (like a retired boomer) they don’t think that path is still available to them. If everything is uncertain the best thing to do is look around for successful people and imitate them. So, they find an influencer like Clav and realize they can play the social media influencer lottery by trying to become viral like him. If society tells them to figure out everything on their own and won’t provide a clear path that is likely to succeed then becoming viral on social media, giving up, or gambling suddenly seem like much more attractive options.

It is obvious to me that incentivizing a bunch of people to figure out how to optimize viral social media content is not good for society. It steers people into echo chambers, distorts their ability to see reality, and is also a huge waste of potential – they could become productive members of society (like scientists and engineers) if only society better aligned the incentives.

How can society better support the men who sincerely look up to Clav as a role model? Is there a way to become as viral as Clav by doing pro-social things (thus offering a viable competing worldview)?

Divisiveness sells on social media. Controversial and sensationalist content will always get a larger audience than the sensible guy telling you to get an education and not destroy your future through drug abuse and cosmetic surgery.

Past attempts to make social media companies take responsibility for their effects on society have mostly resulted in the attempted silencing of dissident viewpoints, while the actual issue of the algorithm boosting extreme content remains unchanged.

I think the redpill/manosphere is a good case study of this. /r/theredpill was an attempt at offering a viral competing worldview, giving young men a clear explanation of how a man succeeds in the world and how to be attractive to women. But the only aspects of the red pill that went viral were those laced with misogyny and intense sexism. The big "red pill" content creators of today are the Andrew Tate types, who are essentially grifters selling BS courses to young men. Meanwhile, the rest of the manosphere has more or less drifted into obscurity, hidden away through censorship and stigma.

I think the simple solution is to get children off of social media completely. This would limit the risk of being continuously exposed to viral memes and addictive content, while also significantly reducing the viewer base of Clavicular and those like him. This way we are actively disincentivizing his type of business while also protecting the kids, and forcing them to socialize in person. How we go about doing this is another question, though. I really dislike the idea of ID verification, but on the other hand, parents at large are also unwilling (or unable) to do their part, at most preferring to monitor the social media accounts of their kids instead of banning it outright.

...the simple solution is to get children off of social media completely.

"For every complex problem there is an answer that is clear, simple, and wrong." -- H. L. Mencken

Your proposal has two flaws: the first is that it puts children at a greater risk of hermeneutical injustice at the hands of their parents. Imagine the ideology of your outgroup, the worldview you find most odious; do you really want a parent who holds that ideology to have absolute power over whether their child is aware that some people, fully endowed with reason and conscience, disagree with it?

The second flaw is that many adults are also led astray by extreme content boosted by social media algorithms; many of the adherents of Queue A Knon were already adults when social media became a thing.

I believe a better method would be to adjust the incentives further upstream, by requiring social media companies to implement an Agatean Wall¹ between user-experience and revenue-generation.

¹GNU Terry Pratchett.

Your proposal has two flaws: the first is that it puts children at a greater risk of hermeneutical injustice at the hands of their parents. Imagine the ideology of your outgroup, the worldview you find most odious; do you really want a parent who holds that ideology to have absolute power over whether their child is aware that some people, fully endowed with reason and conscience, disagree with it?

It seems that we have a tradeoff here: the more tightly you enforce central planning and limitations over how parents raise their kids, the more you reduce odious practices. But if the central authority wishes to enforce an odious practice on all kids, it has the power to do so in this hypothetical. Conversely, giving parents unlimited authority means you have no way to stop child abuse.

It seems to me that this is a question of marginal tradeoffs. I'm in favor of giving the state the ability to stop child abuse, defined as what most people would agree constitutes child abuse, but I'm unwilling to go much further than that. I disagree a lot with how many people raise their kids, but I accept the need to let them raise their kids as they wish, since I don't want them getting a vote on how I raise my kid. I'd be willing to support the state having more power if there were more of a consensus of values where I live, but since there isn't, I default to general libertarianism as the local maximum.

Imagine the ideology of your outgroup, the worldview you find most odious; do you really want a parent who holds that ideology to have absolute power over whether their child is aware that some people, fully endowed with reason and conscience, disagree with it?

It’s irrelevant, my outgroup doesn’t breed well. Like pandas.

In other words, your main worry would be your outgroup converting outsiders to their worldview, which happens through the internet?

As to your second point, I do kind of like the idea of just banning social media completely. The issue with that is I can't think of a way to do that without obvious workarounds, while also not banning the entire internet. At least with children, age serves as a clear dividing line, and adults are generally better equipped to handle the internet than kids.

I don't think I understand the Agatean wall. Would that just be a verbal agreement that the social media companies would not optimize engagement for revenue generation? If so, I don't see why these companies would ever do that.

I don't think I understand the Agatean wall. Would that just be a verbal agreement that the social media companies would not optimize engagement for revenue generation? If so, I don't see why these companies would ever do that.

In my proposed architecture, one side of the wall would handle content-curation algorithms and interface design, with the instruction to make it convenient for the end user to see the content they want to see, with any advertisements or sponsored content kept to designated spaces clearly labeled as such. The other side of the wall would deal with anyone seeking to purchase advertising space or aggregate data, but would have no method to adjust the experience of end-users to keep them on the site longer; advertisers could either accept however many eyeball-minutes occur without engagement-maximisation tactics, or leave the attention of social-media users to their competitors.

This gives at least some possibility of squaring the circle of having a service both free-at-the-point-of-use and prioritising the preferences of its end-users.

As for how to bring about such a state of affairs, I have discovered a truly marvelous regulatory structure accomplishing this, which this comment box is too narrow to contain.

Your link goes to a Wikipedia page about a mathematical theorem. Is that on purpose?

Link fixed; it should point to the relevant section of the article.

Pierre de Fermat, circa 1637, wrote in the margin of a book: "It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second, into two like powers. I have discovered a truly marvelous proof of this, which this margin is too narrow to contain." The theorem was finally proven in 1995 by Andrew Wiles.

Congratulations! You're one of today's lucky 10,000!