
Culture War Roundup for the week of March 30, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I haven't posted too much about AI on here, largely because my own personal experiences with using it have been boring and underwhelming. Generating offensive memes (9/11 gender reveal, racial stereotypes, etc.) is my most positive interaction with AI. And partly because I find the pro-AI "AGI is just around the corner bro!" crowd obnoxious as hell, and I find that most discussions about it depend on accepting certain massive assumptions about what we actually do (and don't) know about the nature of intelligence, consciousness, the human brain, etc. For the purposes of making my biases clear up front: Personally, I'm religious and believe in the existence of a human spirit/soul, so I'm already strongly biased against claims that consciousness is an emergent property of sufficiently advanced systems or any arguments along those lines.

Regardless, a few developments have happened recently that have motivated me enough to actually make a top-level post about this. The first being my (employer-mandated) use of Claude to generate code. "You're not using the latest model, just one more model and we'll reach AGI"-bros officially in shambles after this one. I have an HTTP API client library I wrote a few years ago for interacting with a 3rd party API. There's a good amount of duplicate logic throughout for things like setting up and making the requests, caching, etc. I asked Claude to look over the code and extract the duplicate logic into a single implementation. Here's how it messed up just the authentication part of it:

  1. It didn't notice that there are two different ways to authenticate
  2. It didn't notice that one of those two methods requires two separate calls to the API
  3. It didn't notice the calls for refreshing the auth tokens
  4. It didn't notice the caching logic for the tokens, so it would have authenticated every time, meaning we would have hit rate limits on the API super fast

This was with the latest version of Claude Sonnet. We don't have access to the latest version of Opus, but I'm sure an AI-bro on here would insist that Opus would totally get this right. Regardless, it failed spectacularly at what would be an easy (but tedious) task for a mid-level developer and above (or a sufficiently talented junior).
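To make point 4 concrete, here's a minimal sketch (in Python, with entirely hypothetical names; the actual library could look quite different) of the kind of token-caching logic that any refactor would need to preserve:

```python
import time


class TokenCache:
    """Sketch of token caching for an authenticated API client.

    All names here (TokenCache, fetch_token, ttl_seconds) are
    illustrative, not from the actual library being described.
    """

    def __init__(self, fetch_token, ttl_seconds=3600):
        self._fetch_token = fetch_token  # callable that hits the auth endpoint
        self._ttl = ttl_seconds
        self._token = None
        self._expires_at = 0.0

    def get(self):
        # Re-authenticate only when the cached token is missing or expired.
        # Dropping this check (as described above) means one auth round-trip
        # per request, which is exactly how you blow through rate limits.
        if self._token is None or time.time() >= self._expires_at:
            self._token = self._fetch_token()
            self._expires_at = time.time() + self._ttl
        return self._token
```

The point of the sketch is just that the cache check is easy to silently drop during a deduplication pass: the code still "works" without it, and the failure only shows up later as rate-limit errors.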

The second happening is the ARC prize people releasing version 3 of their AGI test suite, a series of puzzle games. They released it within a few hours of Jensen Huang saying he thinks the latest and greatest models are capable of AGI. Humans were capable of solving 100% of the puzzles. The highest scoring AI couldn't complete more than 0.5%.

I'm willing to bet future models will do at least somewhat better on this, but only because I'm maximally cynical and I fully expect these puzzles to be included in the training set for future SOTA models.

I tried several of the puzzles myself, and none of them are terribly difficult. I'd estimate that anyone in the 100-110 IQ range or higher would be able to solve most or all of them. This development has further reinforced my belief that LLMs are basically just really advanced statistical regression models on crack, but nothing approaching what we would consider actual intelligence or conscious thought (and this is before we get into Chinese Room style criticisms of them).

In any case, I'm curious to see what you all think of these. Even the AI-bros I've been speaking about condescendingly throughout this post. If anything, I'm actually most curious about and interested in the AI-bros' responses; I'd love to hear your thoughts.

Here are the AGI puzzles for anyone interested in trying them out: https://arcprize.org/arc-agi/3

my belief that LLMs are basically just really advanced statistical regression models on crack

Personally, I like the epithet "glorified Markov chain".

I prefer "jumped up matrix multipliers" myself.