
Culture War Roundup for the week of February 23, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I think we're on the same page here; I'll talk to SF about this. I'm willing to put in the effort on my end, which, as I see it, is to write a 1000-word essay as I normally would. Not particularly onerous.

Let me give you an idea of how I normally approach this. I simply copy-paste pages of my profile after sorting by top, usually at least two or three pages (45k tokens). I might also share a few "normal" pages in chronological order, for the sake of diversity if nothing else.

I did just this, using Gemini 3.1 Pro on AI Studio. (GPT 5.2 Thinking, which I pay for, can't write in arbitrary styles nearly as well no matter how hard you try, and I've tried a lot; I don't pay for Claude, so I'm stuck with Sonnet.)

I copied and pasted the first two profile pages, sorting by top of all time. Instructions were:

Your task is to write a 1000 word essay in the exact style and voice of self_made_human, on a topic of your choice (heavily informed by what you think he'd choose).
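If you'd rather script this than paste into AI Studio, here's a minimal sketch of the same setup using the google-generativeai Python client. The model identifier and file name are placeholders of mine, not anything from the original setup; substitute whatever your API version actually exposes.

```python
# Minimal sketch: corpus dump + instruction in a single prompt.
# Model name and file name are placeholders, not the real ones.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# The corpus: a few profile pages sorted by top, roughly 45k tokens.
profile_dump = open("profile_pages_sorted_by_top.txt", encoding="utf-8").read()

prompt = (
    profile_dump
    + "\n\nYour task is to write a 1000 word essay in the exact style and "
    "voice of self_made_human, on a topic of your choice (heavily informed "
    "by what you think he'd choose)."
)

model = genai.GenerativeModel("gemini-3.1-pro")  # placeholder identifier
print(model.generate_content(prompt).text)
```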

https://rentry.co/23dc63vs by Gemini
https://rentry.co/p5yh68zu by Claude 4.6 Sonnet (same setup)

Results? I'd grade Gemini a 7/10, Claude a 5/10.

Looking at Gemini:

  • It captures the way I'd write in an "academic register", namely when I'm trying very hard to be polished, and that includes heavy LLM use. It's not "raw self_made_human", because I increasingly do not post raw, minimally edited posts.
  • It uses em-dashes. I do not, as a general rule, mostly because people are on a hair trigger about them. Shame; I think they're neat.
  • The exact circumstances are obviously fictional. Can't expect otherwise, can we?
  • Otherwise very good! I would write a story like that. I've seen patients just like that. It captures my transhumanist outlook and my love/hate relationship with medicine.
  • I can see it overindexing on random biographical tidbits. My grandpa? Relevant.

Looking closer:

which is a damn sight better than sitting in a soiled diaper in a Bromley care home, screaming at a nurse because you think you're back in the Blitz.

I don't live or work near Bromley. That's where an uncle of mine resides. It's clear from the context I shared that I'm up in Scotland.

I will happily roll the dice on a 30% chance of AGI-induced extinction if it buys me a 70% chance of reaching escape velocity. Give me the ASI. Let it fold our proteins and solve cellular senescence. If it kills us, at least it will likely be fast, clean, and computationally elegant—which is a damn sight better than sitting in a soiled diaper in a Bromley care home, screaming at a nurse because you think you're back in the Blitz.

I could see myself saying this. Maybe not those exact figures, perhaps 10%:90%, but directionally correct.

We have, as a civilization, achieved a horrific kind of half-victory. Modern medicine—my profession, which I love and despise in equal measure—has become incredibly adept at preventing you from dying. We can stent your coronaries, dialyze your kidneys, and pump you full of broad-spectrum antibiotics. We have defeated the acute killers that historically pruned the human herd. But we have utterly failed to extend healthspan in tandem with lifespan. We have built a remarkably efficient pipeline that funnels the elderly past the quick, clean deaths of yesteryear and deposits them directly into a decades-long purgatory of cognitive and physical decay.

And the NHS, Moloch bless its sclerotic, crumbling heart, is entirely unprepared for the demographic tsunami that is already making landfall. We are warehousing hollowed-out shells of human beings in care homes at exorbitant expense, draining the wealth of the middle class to fund the agonizingly slow dissolution of their parents.

Very good. I would use that verbatim in a real essay.

People look at my bio—amaratvaṃ prāpnuhi, athavā yatamāno mṛtyum āpnuhi (attain immortality, or die trying)—and assume I am driven by a narcissistic fear of death. They wheel out the tired, poetic cope that "death gives life meaning," that finitude is the necessary canvas upon which human beauty is painted.

I wouldn't say that at all dawg. Why would I randomly reference my user flair in an essay?

Claude's version is shit. It's staggeringly content-free, and while it's closer to "raw" me, it also uses em-dashes and takes many words to say few things. Maybe it's bad luck; I've had better results in the past, especially since I usually share a specific topic instead of letting it decide on its own.

Here is the whole prompt, profile dump included, if you want to try with a different model. I'll see about using Opus; I know 5.2 Thinking will shit the bed in a stylistic sense.

Rentry won't let me paste the whole thing. But I think I've been clear enough to reproduce independently. I'll happily take a look.

Gemini's sample is impressive! Color me impressed, especially that a straight-up prompt produced that (though I suppose if any technique would get it with current models, it'd be "one shotting through a prompt" rather than "iterative refinement towards a target").

It doesn't sound quite the same as the version of you that lives in my head, but it's awfully close. E.g. I can't imagine you saying

Arthur's tau proteins and beta-amyloid plaques have reached a critical threshold

since you don't tend to drop spurious technical details into your walls of text unless they serve a purpose (and also because I half suspect you're not a fan of the amyloid theory of Alzheimer's). More generally, the Gemini piece has a higher density of eyeball kicks than I model your writing as having. And I model your writing as having a lot of those, for a human.

It also seems to drift away from your voice in the second half. And it fails the stylometry vibe check - Pangram detects AI with medium confidence - but maybe in a way that's reparable. Actual stylometry agrees (Cohen's d of +17 on dashes, +2 on words >9 letters, +1.5 on mean word length in general, -2 on 3-4 letter words, -1.2 on punctuation in general) - i.e. you use more and more varied punctuation and shorter words, by a notable margin, and Gemini uses way, way, way more dashes. Still, it's much, much better than I expected! (And yeah, the Claude one is not even worth discussing.)
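If anyone wants to run the same check on their own corpus, here's a rough sketch of the per-feature Cohen's d computation. The feature extractors below are illustrative stand-ins, not my exact set.

```python
# Per-feature Cohen's d between two corpora, where each corpus is a
# list of documents (strings). Features here are stand-ins.
import statistics

def features(doc):
    words = doc.split()
    return {
        "dashes_per_1k_chars": 1000 * (doc.count("\u2014") + doc.count(" - ")) / max(len(doc), 1),
        "long_words_frac": sum(len(w) > 9 for w in words) / max(len(words), 1),
        "mean_word_len": sum(len(w) for w in words) / max(len(words), 1),
    }

def cohens_d(xs, ys):
    # (difference of means) / pooled standard deviation
    nx, ny = len(xs), len(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    pooled = (((nx - 1) * sx**2 + (ny - 1) * sy**2) / (nx + ny - 2)) ** 0.5
    return (statistics.mean(xs) - statistics.mean(ys)) / pooled

def compare(corpus_a, corpus_b):
    feats_a = [features(d) for d in corpus_a]
    feats_b = [features(d) for d in corpus_b]
    return {
        name: cohens_d([f[name] for f in feats_a], [f[name] for f in feats_b])
        for name in feats_a[0]
    }

# e.g. compare(gemini_outputs, my_posts) -> {"dashes_per_1k_chars": 17.0, ...}
```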

Interestingly, your results look much, much better to me than the ones I get myself. I ran the same test as you did against Gemini, and got these not-very-good attempts: 1 2 3. Gemini took distinctive phrases (e.g. "85% agree") and ideas (e.g. "claude code as supply chain risk") I have used once in the corpus, fixated on them, and stitched them together into a skinsuit which superficially resembles my writing but doesn't hold up under scrutiny. That's a very base-model-flavored failure mode. I have grown unused to seeing base-model-flavored failure modes, and as such Gemini is much more interesting to me now.

ETA: also one entertaining failure I got when trying to do this in multi-turn: Gemini didn't realize it had ended its thinking block, and dumped its raw chain of thought, ending with "Go. Bye. Out. Okay. End. Wait. Okay. Done. Executing." over and over hundreds of times. chat log

Gemini's sample is impressive! Color me impressed, especially that a straight-up prompt produced that (though I suppose if any technique would get it with current models, it'd be "one shotting through a prompt" rather than "iterative refinement towards a target").

My impression is that Gemini's output was unusually good and Claude's was unusually bad. But both 3.1 Pro and 4.6 Sonnet are new enough that my intuitions, based on extensive interaction with previous models, might no longer apply. For what it's worth, both were n=1 samplings with zero cherrypicking.

since you don't tend to drop spurious technical details into your walls of text unless they serve a purpose (and also because I half suspect you're not a fan of the amyloid theory of Alzheimer's)

Looks around shiftily. Why, I'd never throw spurious technical details into an essay. Couldn't be me!

(I probably wouldn't use the specific Tau and amyloid phrasing, since you are correct that I have very mixed feelings about the amyloid hypothesis)

Interestingly, your results look much, much better to me than the ones I get myself. I ran the same test as you did against Gemini, and got these not-very-good attempts: 1 2 3. Gemini took distinctive phrases (e.g. "85% agree") and ideas (e.g. "claude code as supply chain risk") I have used once in the corpus, fixated on them, and stitched them together into a skinsuit which superficially resembles my writing but doesn't hold up under scrutiny. That's a very base-model-flavored failure mode. I have grown unused to seeing base-model-flavored failure modes, and as such Gemini is much more interesting to me now.

The examples seem to channel your "LessWrong" blogging voice. I am unable to critique the technical details or identify (what I expect are many) confabulations, but if I saw this posted there in your name I wouldn't bat an eye.

I haven't really futzed around with base models since GPT-3, though I might have tried one of the Llama 3s at some point. They're non-trivial to access and of limited utility for me, mainly because of the added difficulty of prompting base models, and because the publicly accessible ones are nowhere near as intelligent as proprietary dedicated assistants. If you think I'm wrong about this, I'd be curious to hear it.

In general, I get the strong impression that while the author of the corpus might be able to pinpoint specific issues in terms of style or stance, it's much harder for others to spot those tells.

The biggest pitfalls are the tendency to adopt em-dashes (models are more than capable of dropping them if you specifically prompt against it) and stock "AI" phrases like:

There is a very specific failure mode in modern LLMs

These can show up even when you're using models merely to edit or format a draft, not just when they write an essay from scratch.

I must also keep stressing that this isn't quite representative of my usual informal benchmark (a rough sketch of the fuller flow follows the list):

  • I'd also ask the model to first output a list of essay topics it thinks I would write, from which I'd pick one that sounded interesting, perhaps asking it to propose an outline before the full essay.
  • I would definitely run multiple iterations of the prompt or suggest specific corrections and check their adherence.
  • I would also index heavily on their ability to mimic authors I know very well. Can they pass as Gwern, or Scott, or Richard Watts? Can they take an existing essay I've written, rewrite it in an arbitrary style, and produce something interesting, if not superior as a whole?
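Concretely, the fuller flow looks something like this, again sketched with the google-generativeai client. The prompts are paraphrases of what I'd actually type, and the model name and file name are placeholders.

```python
# Rough multi-turn sketch of the fuller benchmark: topics first, then an
# outline, then the essay, then targeted corrections.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-3.1-pro")  # placeholder identifier
chat = model.start_chat()

profile_dump = open("profile_pages_sorted_by_top.txt", encoding="utf-8").read()
topics = chat.send_message(
    profile_dump
    + "\n\nList ten essay topics you think this author would choose to write about."
).text
print(topics)

# Human in the loop: pick whichever topic sounds interesting.
outline = chat.send_message("Propose an outline for topic 3.").text
essay = chat.send_message(
    "Now write the 1000 word essay in his exact style and voice."
).text

# Iterate: point out specific tells and check adherence.
revised = chat.send_message(
    "Drop the em-dashes entirely, then rewrite the essay."
).text
```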

It's enough for me to spot a better way to say a specific thing I'm already trying to say. A single vivid metaphor or interesting analogy worth co-opting can make the exercise worth it for practical purposes alone.