self_made_human
amaratvaṃ prāpnuhi, athavā yatamāno mṛtyum āpnuhi ("attain immortality, or die trying")
I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken as a given for someone in the profession of saving lives.
At any rate, I intend to live forever or die trying. See you at Heat Death!
Friends:
A friend to everyone is a friend to no one.
User ID: 454
It was able to identify me from text written after the Jan 2026 knowledge cutoff.
Failing to connect the author of a unique reddit post literally in the training data 40% of the time actually sounds kind of horribly bad.
Your expectations are far too high if you don't think this is impressive. Model weights are incredibly compressed in comparison to the training corpus; it's impressive when they remember moderately famous people, let alone someone like me who's only barely broken out. Associating an old Reddit post with my wider work and then accurately joining the dots is impressive. Do you see the average human looking at a random Reddit comment from 5 years ago, pinning it on the right person, and associating it with their other work? It's not even a post that went viral, even though it was an AAQC. It also independently associated much of my wider work with me, including posts on LW and RoyalRoad where I use a different username (though I've linked between everything frequently enough). This is a clearly superhuman ability.
Not my real name, just my nom de plume "self_made_human".
First, I took a very old story I'd written, the one about my grandpa and his pet tiger. Why that one? Well, I was already in the process of rewriting it, though I shared the very first version, the one that won me an AAQC, on the subreddit ages ago. I asked it to identify the author of the work without access to internet search.
It got it right the very first time. I was flabbergasted, and immediately tried 2 more times, and it failed. But out of 5 attempts, it guessed self_made_human thrice in total, alongside other incorrect first guesses. It sometimes refused to guess at first, because of the risk of being incorrect; I told it that being wrong wasn't an issue at all, since the whole point of the eval was to see how accurate the guesses were. And they were accurate.
In another chat, I asked it if it knew who self_made_human was. Once again, no web search. It guessed correctly on the first try in two instances, then claimed confusion on the third; yet when I prodded it to just go with whatever was on the tip of its tongue, it identified me and the topics I'd written on.
It struggled more on the third version of the experiment, where I used a more recent essay, but once again, light encouragement to guess let it get the right answer.
Pretty sure Claude couldn't do this before, and I do test on a semi-regular basis. Gemini 3.1 Pro very much can't, and it even cheated outright by searching after being told not to search (I don't think you can even turn off web search directly there). But the point is, a few paragraphs written ages ago on /r/TheMotte, which was never a massive sub, were enough to pin me down. And even newer material not in the training data was.
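For anyone who wants to replicate this, the whole "eval" is just sampling the same no-search prompt a handful of times and counting how often the right name comes back. A rough sketch using the Anthropic Python SDK; the model string and prompt wording here are placeholders rather than exactly what I ran:

```python
# Toy repro of the "guess the author" eval: sample N times, count correct guesses.
# Needs `pip install anthropic` and ANTHROPIC_API_KEY set in the environment.
import anthropic

client = anthropic.Anthropic()

EXCERPT = "<paste the old story or essay excerpt here>"
PROMPT = (
    "Without using web search, guess which online writer wrote the following text. "
    "A wrong guess is fine; just name your best candidate.\n\n" + EXCERPT
)

ATTEMPTS = 5
hits = 0
for i in range(ATTEMPTS):
    resp = client.messages.create(
        model="claude-opus-4-5",  # placeholder; swap in whichever model you're testing
        max_tokens=300,
        messages=[{"role": "user", "content": PROMPT}],
    )
    answer = resp.content[0].text
    correct = "self_made_human" in answer.lower()
    hits += correct
    print(f"Attempt {i + 1}: {'HIT' if correct else 'miss'} -> {answer[:120]!r}")

print(f"{hits}/{ATTEMPTS} correct")
```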
Look, Jack, here's the deal, and I'm not joking. leans into the mic The thing about the AI, with the, you know, the writing thing, and how it figures out who you are from the commas and whatnot. I was talking to Barack about this just the other day. Well, not the other day. But recently. Recent-ish.
My dad, God love him, he sat me down when I was a kid in Scranton, and he said, he said to me, he said "Joey, a man's words are his bond." Now what does that MEAN, folks. whispers It means they can catch ya.
While in Scotland? I'd frequent the pub about twice a week, and drink more than I know is good for me. On average, two pints of beer and a few double-strength shots of some kind of spirit. This went on for a period of about 4-6 months, outside of which I barely drank more than once a month.
I realized this wasn't great for me, and cut down significantly. It was also out of character; before and after, I'm mostly a social drinker. I'd drink hard maybe twice or thrice a month, but only with company. I usually make it a point not to keep liquor at home or drink by myself; while I'm usually solid about not giving in to temptation, it's not easy during a bout of depression. The fact that I was self-administering alcohol use screening tests and squinting at the results was enough to make me desist.
Then again, it's Scotland. I'd have my visa revoked if I didn't engage in the cultural highlight.
Claude Opus 4.7 knows who I am, by name, and without access to web search.
It also pegged me more often than not from an excerpt of text I'd written half a decade back, once again, without internet access. Well, fuck. I did always harbor aspirations of becoming famous enough a writer to be known to LLMs by name, but this also confirms my previously stated belief that privacy on the internet is on the way out. Pseudonyms won't save you, stylometry is all you need.
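To gesture at why pseudonyms don't help much: even a crude stylometric fingerprint, say function-word frequencies compared by cosine similarity, carries a surprising amount of signal, and an LLM has internalized something far richer than this. A toy sketch, with made-up author labels and snippets:

```python
# Crude stylometry: represent each text by its function-word frequencies,
# then attribute an unknown sample to the most similar known author.
from collections import Counter
import math
import re

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it", "is",
                  "was", "i", "for", "on", "with", "but", "at", "by", "not"]

def profile(text: str) -> list[float]:
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a)) or 1.0
    norm_b = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (norm_a * norm_b)

# Hypothetical reference corpora, keyed by author; in practice you'd want
# thousands of words per author, not a single sentence.
known = {
    "author_a": "i intend to argue that it is not at all the case, but the point stands",
    "author_b": "we went to the shop and bought a lot of the usual stuff for the week",
}
unknown = "it is not the case that i would argue the point, but it stands"

scores = {name: cosine(profile(text), profile(unknown)) for name, text in known.items()}
print(max(scores, key=scores.get), scores)
```

Real stylometry uses richer features (character n-grams, punctuation habits, sentence-length distributions), but the principle is the same: your tics add up to a fingerprint.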
I haven't noticed that, and I do use all of them regularly. If you have some kind of formal benchmark to point at, I'd be more receptive.
This strikes me as a fool's errand. TFR is cratering worldwide, even in the Global South. There's not much point marrying into an Indian, Nigerian or Sub-Saharan African community and going native, when the results will be indistinguishable in just a generation or two. My dad had 8 siblings, and then he and the rest of his brothers and sisters had 2 or 3 each.
You're better off trying to go Mormon, for a certain definition of "better".
The most plausible reason for changing the tokenizer is that a more fine-grained tokenizer increases model performance, at the cost of breaking the same input into more tokens, and hence more compute (and a bigger bill) per request. My understanding is that you don't even need a new base model to do this, and that the gains are particularly pronounced for arithmetic and coding. It's not a free lunch, but there are pros and cons that don't just amount to Anthropic nickel and diming their customers.
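A toy illustration of that tradeoff; these are made-up tokenizers, not Anthropic's actual one. Splitting numbers into individual digits is the kind of change that helps a model see place value for arithmetic, but the same input now costs more tokens:

```python
import re

def coarse_tokenize(text: str) -> list[str]:
    # Whole words and whole numbers as single tokens.
    return re.findall(r"\d+|\w+|[^\w\s]", text)

def fine_tokenize(text: str) -> list[str]:
    # Same, except numbers get split into individual digits.
    tokens = []
    for tok in re.findall(r"\d+|\w+|[^\w\s]", text):
        tokens.extend(list(tok) if tok.isdigit() else [tok])
    return tokens

text = "Add 1234 and 5678."
print(len(coarse_tokenize(text)), coarse_tokenize(text))
# 5 ['Add', '1234', 'and', '5678', '.']
print(len(fine_tokenize(text)), fine_tokenize(text))
# 11 ['Add', '1', '2', '3', '4', 'and', '5', '6', '7', '8', '.']
```

Same text, more than twice the tokens: that's the "con" side of the ledger, and why this can look like a price hike even when it isn't meant as one.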
Even if AGI is actually possible with LLMs (or at all, but I'm not trying to start a discussion on metaphysics here), it looks like the capital needed to achieve it is drying up before it can be reached. Anthropic's move here (combined with them handicapping Opus 4.6 a few weeks ago) seems to clearly be an attempt to achieve profitability. The free/subsidized rate train for end users has pulled into the station, and now you have to pay more for the same (or worse) capabilities you were enjoying before.
Anthropic is, by far, the most compute-strapped frontier LLM company. They are also not the only frontier LLM company. Until at least Google and OAI engage in the same putative enshittification (which I am far from sure is even happening wrt Anthropic), you're kinda jumping the gun here.
Met my criteria, thanks for the reminder!
Eh? Do you think I don't know the difference between the practicality of emulation and the theoretical feasibility of emulation? What do you think physicists use for their modeling? There is a very real tradeoff between the accuracy of models and their compute requirements; you wouldn't try to predict the weather with QCD. Fortunately for me, the brain is an incredibly stochastic entity, which means that you can cut plenty of corners while being reasonably confident you aren't losing something vital. We are extremely unlikely to need to simulate things down to the atom to make a functional brain emulation, which takes the computational demand down from ludicrous to merely concerning.
We can simulate that too, I'm quite confident. It's not like we can... rawdog baseline reality, what's another layer of abstraction? The brain works on the laws of physics, so does a standard computer, and the latter can model the laws of physics.
At the end of the day, I'm an abacus that doesn't mind. I'm happy as long as the numbers add up.

Scott, Kris, Gwern etc are big names. The models haven't had issues pegging them for several years now, and I've tried that test myself. 2rafa is an interesting example; it's probably worth checking whether the model knows more about her by name or by association.