This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Huh. It's been my sniff test for new models as well, and so far I have not seen much success. It should be easy! This is literally the most LLM-flavored task to ever task! And yet. I've sunk probably 50 hours into it.
My most recent attempt, into which I sank about 10 hours and $100, and which got a lot closer than any previous attempt, involved giving Claude a corpus of all my past writing and having it try multiple different ways of producing text on arbitrary topics in my voice. The things tried were:
On the one hand, I was very impressed by how good Claude was at running a whole bunch of these experiments very quickly. On the other hand, it did not work for me, not even at the level of "passes the sniff test", much less at the level of "standard stylometry techniques say it sounds like me".
I think you'll find that this is one of the tasks that is now much much easier. It's actually been within the capabilities of frontier models since Sonnet 4.0 (which is when I went ahead and gathered said corpus, on the theory that it'd be pretty useful to have). The prompt you're looking for is something like "Here's a chrome instance running with --remote-debugging-port and logged in on most of the sites I post on with a tab open for each. Go generate a corpus of all my publicly available writing".
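To make the recipe above concrete: once Chrome is launched with `--remote-debugging-port`, the agent can enumerate your open tabs over the DevTools Protocol's HTTP endpoint and work from there. A minimal sketch (the `/json` endpoint and its `type`/`title`/`url` fields are part of the protocol; the helper names and structure here are illustrative, not the prompt's actual tooling):

```python
# Sketch: list the page tabs of a Chrome instance launched with
# --remote-debugging-port=9222. The /json endpoint and its "type",
# "title", and "url" fields come from the Chrome DevTools Protocol;
# the function names are illustrative.
import json
import urllib.request


def page_tabs(targets):
    """Keep only ordinary page tabs from a DevTools target list."""
    return [(t["title"], t["url"]) for t in targets if t.get("type") == "page"]


def fetch_targets(host="127.0.0.1", port=9222):
    """Fetch the live target list from the debug-enabled Chrome instance."""
    with urllib.request.urlopen(f"http://{host}:{port}/json") as resp:
        return json.load(resp)

# Usage, with Chrome running:
#   for title, url in page_tabs(fetch_targets()):
#       print(title, url)
```

From the tab list, the agent can visit each logged-in site and page through your post history to build the corpus.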
Yeah. An H100 for 24h would run in the ballpark of $40, well worth it for me to provide. Vast allows transferring credits from one account to another, so I'd happily just transfer $50 of credits over if someone actually wants to do this. Does seem like rather a lot of work though.
Yeah, that's entirely reasonable. Your voice is very different from Claude's voice.
Yeah, I'm hoping you can prove me wrong here. I've been trying to do this since back in late 2019 when nostalgebraist-autoresponder was shiny and new. I want a good simulacrum of myself! I want to have that simulacrum, and I want to loom it. I want to build an exobrain, and merge with it, and fork off a copy running in the cloud.
BTW I expect there's a substantial market for anyone who manages to build this in a repeatable way. I've looked, and there are as of now no commercial offerings for this (though there are a few commercial offerings that pretend to be this).
I only have access to the models you can obtain access to with money - I expect I'm 3-6 months behind the best of what insiders at Anthropic or OAI have access to.
An LLM skeptic is an LLM idealist who's been disappointed :)
I expect looking like you stylometrically while also exhibiting the same patterns of thought you exhibit on a specific topic will involve writing code. But code in the service of trying to mimic you convincingly, rather than in the service of producing some specific durable software artifact.
For the record, I do expect this to be within the capability window within the next 18 months, but I would be pretty surprised if you managed to get Opus 4.6 specifically to do it.
I think we're on the same page here; I'll talk to SF about this. I'm willing to put in the effort on my end, which, as I see it, is to write a 1000-word essay as I normally would. Not particularly onerous.

Let me give you an idea of how I normally approach this. I simply copy-paste pages of my profile after sorting by top, usually at least two or three pages (45k tokens). I might also share a few "normal" pages in chronological order, for the sake of diversity if nothing else.
I did just this, using Gemini 3.1 Pro on AI Studio. (GPT 5.2 Thinking, which I pay for, can't write in arbitrary styles nearly as well no matter how hard you try, and I've tried a lot; I don't pay for Claude, so I'm stuck with Sonnet.)
I copied and pasted the first two profile pages, sorting by top of all time. Instructions were:
https://rentry.co/23dc63vs by Gemini
https://rentry.co/p5yh68zu by Claude 4.6 Sonnet (same setup)
Results? I'd grade Gemini a 7/10, Claude a 5/10.
Looking at Gemini:
Looking closer:
I don't live or work near Bromley. That's where an uncle of mine resides. It's clear from the context I shared that I'm up in Scotland.
I could see myself saying this. Maybe not those exact figures, perhaps 10%:90%, but directionally correct.
Very good. I would use that verbatim in a real essay.
I wouldn't say that at all dawg. Why would I randomly reference my user flair in an essay?
Claude's version is shit. It's staggeringly content-free, and while it's closer to "raw" me, it also uses em-dashes and takes many words to say few things. Maybe it's bad luck; I've had better results in the past, especially since I usually share a specific topic instead of letting it decide on its own.
Here is the whole prompt, profile dump included, if you want to try with a different model. I'll see about using Opus; I know 5.2 Thinking will shit the bed in a stylistic sense. Rentry won't let me paste the whole thing, but I think I've been clear enough to reproduce independently. I'll happily take a look.
Gemini's sample is impressive! Color me impressed, especially that a straight-up prompt produced that (though I suppose if any technique would get it with current models, it'd be "one shotting through a prompt" rather than "iterative refinement towards a target").
It doesn't sound quite the same as the version of you that lives in my head, but it's awfully close. E.g. I can't imagine you saying
since you don't tend to drop spurious technical details into your walls of text unless they serve a purpose (and also because I half suspect you're not a fan of the amyloid theory of Alzheimer's). More generally, the Gemini piece has a higher density of eyeball kicks than I model your writing as having. And I model your writing as having a lot of those, for a human.
It also seems to drift away from your voice in the second half. And it fails the stylometry vibe check (Pangram detects AI with medium confidence), but maybe in a way that's reparable. Actual stylometry agrees: Cohen's d of +17 on dashes, +2 on words >9 letters, +1.5 on mean word length in general, -2 on 3-4 letter words, -1.2 on punctuation in general. That is, you use more and more varied punctuation and shorter words, by a notable margin, and Gemini uses way, way, way more dashes. Still, it's much much better than I expected! (And yeah, the Claude one is not even worth discussing.)
Interestingly, your results look much, much better to me than the ones I get myself. I ran the same test as you did against Gemini, and got these not-very-good attempts: 1 2 3. Gemini took distinctive phrases (e.g. "85% agree") and ideas (e.g. "claude code as supply chain risk") I have used once in the corpus, fixated on them, and stitched them together into a skinsuit which superficially resembles my writing but doesn't hold up under scrutiny. Interestingly, that's a very base model flavored failure mode. I have grown unused to seeing base-model-flavored failure modes, and as such Gemini is much more interesting to me now.