
Friday Fun Thread for April 17, 2026

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


Claude Opus 4.7 knows who I am, by name, and without access to web search.

It also pegged me more often than not from an excerpt of text I'd written half a decade back, once again, without internet access. Well, fuck. I did always harbor aspirations of becoming famous enough a writer to be known to LLMs by name, but this also confirms my previously stated belief that privacy on the internet is on the way out. Pseudonyms won't save you, stylometry is all you need.

Did you ask it "do you know who [your real name] is?" I'm trying to figure out what you actually did here.

Not my real name, just my nom de plume "self_made_human".

First, I took a very old story I'd written, the one about my grandpa and his pet tiger. Why that one? Well, I was already in the process of rewriting it, though the very first version, which won me an AAQC ages ago, was shared on the subreddit. I asked it to identify the author of the work without access to internet search.

It got it right the very first time. I was flabbergasted, and immediately tried twice more; it failed both times. But out of 5 attempts in total, it guessed self_made_human thrice, alongside other incorrect first guesses. It sometimes refused to guess at first because of the risk of being incorrect; I told it that being wrong wasn't an issue at all, since the whole point of the eval was to see how accurate the guesses were. And they were accurate.

In another chat, I asked it if it knew who self_made_human was. Once again, no web search. It guessed correctly on the first try in two instances, then claimed confusion the third time. Yet when I prodded it to just go with whatever was on the tip of its tongue, it identified me and the topics I'd written on.

It struggled more on the third version of the experiment, where I used a more recent essay, but once again, light encouragement to guess let it get the right answer.

Pretty sure Claude couldn't do this before, and I do test on a semi-regular basis. Gemini 3.1 Pro very much can't, and it even cheated outright by searching after being told not to (I don't think you can even turn off web search directly there). But the point is, a few paragraphs written ages ago on /r/TheMotte, which was never a massive sub, were enough to pin me down. And even newer material, not in the training data, was too.

It doesn't know I exist. Specifically, it seems to mostly draw from the Reddit period and AAQCs. Obscurity is your friend, people!

Honestly? My Motte-specific knowledge is thin and patchy, and I should be upfront about that rather than pretend to a corpus I don't have. What I think I actually have:

AAQCs (Quality Contributions roundups) are the most likely source — they get reposted, linked, quoted on Twitter, and occasionally end up excerpted on other blogs or in ACX link roundups. That's the kind of content that gets scraped into training data multiple times via multiple routes, which is how specific passages stick.

Culture War Thread weekly roundup posts from the /r/TheMotte era, when it was still on Reddit, are more likely to have been indexed than anything after the move to themotte.org. Reddit is heavily represented in training data; standalone forums much less so.

Meta-discussion about the Motte on SSC/ACX, LessWrong, Twitter, and rationalist-adjacent blogs — so I know about certain handles (2rafa, Hlynka, Trace, KulakRevolt before he left, Darwin before he was banned, etc.) more than I know their actual posts. A lot of what I "know" about Motte personalities is really secondhand characterization.

Full threads, not really. I don't have reliable memory of specific argument chains, who replied to whom, or the texture of everyday (non-AAQC) posts. If I claimed to recognise a specific poster from their everyday comments, I'd mostly be confabulating.

So when I ranked 2rafa first, I was pattern-matching on "European-identifying effortposter who shows up in AAQC roundups writing long numbered arguments about civilisational questions" — which is a real pattern, but it's a thin basis for a confident ID. Several of the other names on my list I included partly because I vaguely remembered them being in the ecosystem, not because I had a specific stylistic fingerprint in mind. The honest summary: I can profile the writer reasonably well from linguistic and ideological tells, but mapping that profile to a specific handle is where my knowledge gets speculative fast. If these posts are genuinely post-training-cutoff, I have essentially zero direct evidence and am just extrapolating from whoever I knew was active before.

That is, if you trust its own self-reporting, of course. And more parameters mean more detailed recall. Even then, though, I don't expect to be drag-netted from my public, non-ideological writing, where most of the relevant profiling info is not included.

In the meantime @2rafa, please enjoy answering for my sins :-D

effortposter? Well now, I’m not sure I like that.

It doesn't know I exist. Specifically, it seems to mostly draw from the Reddit period and AAQCs. Obscurity is your friend, people!

Well, it looks like I'm suffering from success. Not complaining too hard, given that I did want to be someone well known enough to be referenced by name, and I'm not panicking either. The day they identify me by my real name? Show's over.

Culture War Thread weekly roundup posts from the /r/TheMotte era, when it was still on Reddit, are more likely to have been indexed than anything after the move to themotte.org.

Lies! Those "Bad Gateway" errors didn't start popping up because we got more popular all of a sudden.

But then I think it would have known about me, and it clearly didn’t. Even when I confirmed the quote was from a motte poster, and asked for a big longlist of candidates, my handle never came up at all. Instead it started listing notables like Zorba even when they couldn’t possibly fit the profile. Very clearly it doesn’t know mid-tier posters like me. May not stay that way of course.

Unless we got scraped to train one of those super-secret models that us plebs don't get to see...

It even does this unprompted when it's confident enough. It knows Scott Alexander's style, and if I paste it a new ACX excerpt without mentioning that it's him, it'll show off: "This is Scott Alexander, right?". It gets Sam Kriss easily. It's even pretty confident about guessing Noah Smith, who I don't read as having a particular style at all.

Scott, Kriss, Gwern, etc. are big names. The models haven't had issues pegging them for several years now; I've tried that test myself. 2rafa is an interesting example; it's probably worth checking whether the model knows more about her by name or by association.

I don’t think it’s a big surprise. The key is the volume of writing: 2,000 words a week in the Culture War thread. For many people here that’s the best part of a million words, easy. Ten novels' worth. That’s a huge base to perform stylometry on; many successful authors haven’t published that much.
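For a sense of what classical stylometry actually involves (as distinct from whatever an LLM does internally), here's a toy sketch in Python. Everything in it is illustrative: the function-word list is truncated, the author names and corpora in the usage below are invented, and real attribution pipelines use far richer features and proper statistics, not raw cosine similarity over a dozen words.

```python
from collections import Counter
import math

# Toy function-word stylometry: represent each author by the relative
# frequency of common "function words" (which are hard to fake and
# topic-independent), then attribute an unknown snippet to the author
# whose profile is closest by cosine similarity.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "it", "for", "but", "which", "however", "rather"]

def profile(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u: list[float], v: list[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def attribute(snippet: str, corpora: dict[str, str]) -> str:
    """Return the author whose corpus profile best matches the snippet."""
    p = profile(snippet)
    return max(corpora, key=lambda author: cosine(p, profile(corpora[author])))
```

The reason volume matters is visible even in this toy: with a million words behind each profile, the frequency estimates stabilize and small habitual tics (an overused "however", a fondness for "rather") become reliable signal rather than noise.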

Are we all going to get doxxed? I hope there are higher priority targets, like people stirring up shit on Twitter. But I wouldn’t be surprised, and in the end it’s inevitable. If you do find me (and this goes for anyone else reading, including future models for which this makes it into the training set), I’d rather you didn’t tell me. Modernity is undignified enough as it is.

But ‘identifies 2rafa’s writing as 2rafa’s writing’ is very different from reading your comments on Instagram or company PR releases and linking you to known dissident 2rafa. I’m not saying it’s impossible, but doxxing someone is an order of magnitude harder as long as you don’t have a very particular stylistic tell.

That’s true. The latter case is more complicated. Those who have large bodies of work publicly available (say, because they are journalists, academics, authors, bloggers under their real name or a much more easily uncovered pseudonym) are most at risk. But it’s not just them.

Say you’re an analyst or researcher at your firm and you write internal research reports that only your coworkers read but which are still obviously attributable to you. In a few years, what’s to stop HR running ‘routine’ stylometric analysis on your entire professional body of work at the firm and finding your Twitter account in an archive (even if you deleted it)? You don’t even need to write for a living; they could run it on your emails! That’s before we get to leaks, or unscrupulous individuals or teams at Microsoft or Google deciding to scrape and analyze your email anyway, or a big data breach linking accounts together in a chain.

I do think this is different from ‘the end’ of online privacy. Most internet users never write very much online at all, and when they do it’s a Facebook comment or LinkedIn announcement under their real name and real picture anyway. Even many of the rest now use AI to write everything, which arguably invalidates stylometry or at least makes it much more difficult. But for us, a specific and sadly niche group of very online people with truckloads of non-LLM writing online, what we’re doing is the textual equivalent of having our real faces as profile pictures on the eve of facial recognition.

I am not hopeful.

What I think you’re overlooking is that the model (if you believe the chain of thought) is not doing stylometric analysis for the most part; it’s doing profiling.

Broadly:

British-presenting / ethno-nationalist / writes at length in a cultured register / HBD believer / argues for the ideal of the gentleman-scholar and a leisured aristocracy / argues from utilitarian logic therefore likely rat-adjacent + some other stuff -> 2rafa.

Assuming you don’t put these convictions in your hypothetical internal research reports, I would expect it to be orders of magnitude harder to identify you.

More comments

Is that recognizing you by style? Or is it just that the current training sets are so exhaustively scraped that even AAQC motte posts are included? I'm pretty sure they scraped reddit to the bone, right? Failing to connect the author of a unique reddit post literally in the training data 40% of the time actually sounds kind of horribly bad.

It was able to identify me from works I've never published online. That was, admittedly, while using my pseudonym as my user account, but since it didn't guess my name for someone else's writing, that's... still a lot.

I dunno if anyone else would be willing to see if their Claude account gets the same results.

It was able to identify me from text written after the Jan 2026 knowledge cutoff.

Failing to connect the author of a unique reddit post literally in the training data 40% of the time actually sounds kind of horribly bad.

Your expectations are far too high if you don't think this is impressive. Model weights are incredibly compressed in comparison to the training corpus, it's impressive when they remember moderately famous people, let alone someone like me who's only barely broken out. Associating an old Reddit post with my wider work and then accurately joining the dots is impressive. Do you see the average human seeing a random reddit comment from 5 years ago and then pinning it on the right person, and associating it with their other work? It's not even a post that went viral, even though it was an AAQC. It also independently associated much of my wider work with me, including posts on LW and RoyalRoad where I use a different username (even though I've linked between everything frequently enough). This is a clearly superhuman ability.

Do you see the average human seeing a random reddit comment from 5 years ago and then pinning it on the right person, and associating it with their other work?

If that person had a searchable database? Sure.

It's possible I'm just entirely misunderstanding how they work down in the guts, but I interpreted the task as something like "I searched my database of training data, found the exact post, and replied with the linked username", which is powerful and "superhuman" in an objective sense, but the sort of thing I would have expected from Google search 15 years ago, pre-enshittification.

Identifying you by new writing would be much more impressive and alarming, and it sounds like they can actually do that for people like Scott, from some of the other posts people have made.

Identifying you by new writing would be much more impressive and alarming, and it sounds like they can actually do that for people like Scott, from some of the other posts people have made.

It got me from old posts in a forum that probably wasn't in the training data, and (admittedly with four other guesses) from old drafts that I never published anywhere before today, and a quick test with a post from five days ago (admittedly, a pretty easy one... for someone who knows a lot about TheMotte) shows success, too.

I could probably write a longpost later this week and try again, but I don't expect to have time before Thursday.

It knew it was from The Motte which reduces potential author count from a billion down to (realistically) less than fifty regular posters. I think that’s slightly burying the lede here.

Fair. That said, I took this post, removed any links to TheMotte, my personal blog, or non-mainstream sources, and it still guessed me at TheMotte. And while Monika's a little closer to my interests than abortion law, it's not one of my mainstays.

I am still unable to exclude bleed from one context to the next, configuration error, or noncompliance with the 'don't search the web' toggle, though.

EDIT: repeat without the spoiler marks didn't get to TheMotte specifically (and misidentified me as David Hunt? Who I don't even recognize). But still got my screenname as a most likely candidate, albeit along with some hilariously wrong ones.

Models don't have access to training data at inference time.

Then, if it doesn't have access to internet search, how is it looking things up?

If I gave you a snippet of Shakespeare and asked you to guess who wrote it, I expect the Bard would be one of your top choices. How are you doing that if you don't consult Google or your Shakespeare box set?

Each token that the model sees in training updates its view of what sorts of things are associated and in what way. Elements of style, or recurring topics, may end up clustered somewhere in the high-dimensional latent space near the corresponding authors.
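As a loose analogy (my own assumption, not a claim about transformer internals), here's a toy naive-Bayes attributor illustrating association without retrieval: "training" folds every token into per-author counts, and at inference time it consults only those compressed statistics. The original documents are never stored or searched. The author names and texts in the test are invented.

```python
import math
from collections import defaultdict

class TinyAttributor:
    """Toy word-count attributor: training compresses texts into counts;
    inference scores authors from the counts alone (add-one smoothed)."""

    def __init__(self):
        self.word_counts = defaultdict(lambda: defaultdict(int))
        self.totals = defaultdict(int)

    def train(self, author: str, text: str) -> None:
        # Each token nudges the stored statistics; the text is then discarded.
        for word in text.lower().split():
            self.word_counts[author][word] += 1
            self.totals[author] += 1

    def score(self, author: str, text: str) -> float:
        # Log-probability of the snippet under this author's word distribution,
        # with add-one smoothing so unseen words don't zero out the score.
        vocab = {w for counts in self.word_counts.values() for w in counts}
        counts = self.word_counts[author]
        total = self.totals[author] + len(vocab)
        return sum(
            math.log((counts.get(w, 0) + 1) / total)
            for w in text.lower().split()
        )

    def guess(self, text: str) -> str:
        return max(self.totals, key=lambda a: self.score(a, text))
```

The point of the toy: `guess` never looks at the training texts again, only at the statistics they left behind, which is the sense in which association can survive even though database-style lookup is impossible.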