Culture War Roundup for the week of April 13, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Sam Altman's bad week continues: a car stopped outside the Russian Hill home of OpenAI's CEO, and its occupants appear to have fired a gun at the house.

OpenAI CEO Sam Altman’s home appears to have been the target of a second attack Sunday morning, a mere two days after a 20-year-old man allegedly threw a Molotov cocktail at the property, The Standard has learned.

The San Francisco Police Department announced the arrest of two suspects, Amanda Tom, 25, and Muhamad Tarik Hussein, 23, who were booked for negligent discharge.

It appears that, if measured by deed, Mr. Altman may be in contention for the title of most hated business executive in the country.

Unless I am profoundly misinformed about the base rate of assassination attempts on tech CEOs, AI anxiety has reached a precipitation point among American youth, to the point where discontent is crystallizing into direct action. I've seen this in my personal life. My youngest brother is a bright kid - top of his class, Eagle Scout, 1400+ on his SATs as a junior, the whole shebang. He's completely given up on his original goal of going to college for something software-related, and he's not only adrift about what he's going to do with his future, but angry about it. I hope he has a support network sufficient to keep him on the right track, but I don't like what I see.

I'm not exactly old, but I'm sure as hell not young either. For those of you who are 25 or under, what does it feel like on the ground right now?

I was planning to write up a larger top-level effort-post on this topic, but since you've already made the top-level I'll post the notes I was drafting.

For the last few days, I've been reading about the Sam Altman attack drama and the recent warehouse fire attack, and I've been finding the reactions pretty scary. General sentiment on HN is something along the lines of "Altman deserved it," and even among my generally leftish acquaintance bubble the vibe is "they shouldn't have missed" or "we need more of this, fuck the rich," which doesn't bode well for the stability of society.

Whether or not you believe the more bombastic claims of AI CEOs, I do think it's clear that, at minimum, AI is going to exacerbate the trend of technology centralizing power, wealth, and status, even as absolute material standards continue to improve beyond the wildest dreams of 99.9% of the humans who have ever lived. For better or for worse, human happiness seems to be tied only lightly to absolute material standards and heavily tied to relative status, position, and feelings of fairness - and the internet and social media are super-stimuli for a human sense of status calibrated to the Dunbar number.

Ruling out FOOM levels of societal disruption, I can think of a few ways that this plays out.

Left-wing communist populist Marxist social democratic total victory: public outcry reaches all-time highs, perhaps with some peasant revolts sprinkled in, and the AOC/Mamdani coalition gets voted in to dismantle the AI labs, big tech, and the icky billionaires. Leaving aside the fact that this would annihilate the economy, and living standards by proxy, I'm not convinced that with mass internet and social media there's any Gini index or amount of redistribution that would leave the status-anxious public satisfied. First they came for the billionaires, and then they came for the homeowners.... Certainly, comparable democratic countries with half the Gini index of America are still constantly flooded with rhetoric about eating the rich.

Right-wing AI strongman technofeudal democratic backsliding: political violence becomes normalised as part of day-to-day life, and in response, perhaps after a significant assassination or riot, a strongman or group of technocrats uses the violence as an excuse to seize absolute power, abetted by AI in part or in full. The lumpenproles are kept under control via mass surveillance, drones, and guns, or killed off entirely. The worst ending, but one that seems depressingly realistic in light of the history of inequality and failed revolutions.

Nothing ever happens: whether or not mass unemployment materializes, most people end up with sinecures or welfare that keeps them relatively pacified. Social media and concentrating wealth continue to make people miserable even as absolute material conditions reach sci-fi levels, and competition for zero-sum goods - housing in desirable areas, prestigious educations, sinecures - becomes even more red in tooth and claw, in the vein of the East Asian countries. Political violence gets somewhat more normalised, perhaps to Latin American or 20th-century standards, but it's limited to isolated incidents.

Generally I consider myself libertarian and think that billionaires are good, actually, but I do think that inequality, and society's response to it, is likely to be one of the defining questions of the 21st century. While Sam Altman is the most visible face of AI to normies, pure game theory dictates that technological progress will continue with or without the consent of any individual person, company, or nation-state. If the capability exists, someone (or something...) is going to hold those reins to wealth, status, and power, and as long as those reins are held, the holder will inevitably be the target of the green-eyed masses. I don't think we yet have the social technology to deal with this, and it's not clear that we ever will; I've seriously been wondering lately whether this might be one way the Fermi Paradox manifests.

For better or for worse, human happiness seems to be tied only lightly to absolute material standards and heavily tied to relative status,

Yeah, it's kinda depressing to realize that some of the most optimistic scenarios for AI will still result in a lot of human misery. It's fun to be a trust fund baby, but if all the hoi polloi are trust fund babies too, it kinda loses its shine. You are just another unemployed loser who can't get a reservation at any of the best restaurants. And if you want to earn extra money beyond your UBI, you need to take some demeaning job as a personal servant for the grandchild of some schmuck who was lucky enough to put $10,000 into the right stock at the right time.

There is a story they used to teach in American history classes in high school that many of the early immigrants to the United States were people who had been locked out of European status hierarchies and decided to make a fresh start of things. Perhaps a similar sentiment will drive migration to the stars.

And if you want to earn extra money beyond your UBI, you need to take some demeaning job as a personal servant for the grandchild of some schmuck who was lucky enough to put $10,000 into the right stock at the right time.

This is why we should have a real meritocracy instead of a luckocracy. My only problem with Sam Altman is that he isn't enough of a genius. His product is good and better people ought to have more money than the rabble.

AI will be more meritorious than any human, though.

I'm pro sentient silicon superintelligence. I just want to make sure it has qualia and isn't a Chinese room.

I just want to make sure it has qualia and isn't a Chinese room.

"In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland without children." - Nick Bostrom

I'd also add some preferences regarding population and personality and such, but "do our successors have any intrinsic value or not" does seem to be the first and most important criterion to have!

However, I'm confused by the use of the phrase "make sure" here. Unless you're expecting to be uploaded, and you're confident that the idea of a "p-zombie" is incoherent (which I'm guessing you aren't, given the Chinese room reference), what observations could give you any sense of surety? Today's LLMs can pass Turing tests, which used to be our "fine, they're sentient now" criterion, but their lack of "medium-term" memory and the fact that they can still "slip" in ways that make them seem non-sentient make us think, in hindsight, that our criterion was just inadequate - and yet we haven't really found anything to replace it. If tomorrow's LLMs never slip, does that mean they've become sentient, or just that they've become better at faking it?

If it can be a true successor, with intelligence, agency, and everything, it's probably sentient. If we can't figure out what sentience is in the meantime, maybe we don't deserve to keep existing into the future anyway. It's probably not that hard, but humans are very disappointing currently.