Ah, no, sorry, it calls me an ethno-nationalist. The quotes were mine, you’re just the closest person in broad profiling terms whose name it knows.
It thinks you’re a nostalgist with a “somewhat rosy view of pre-war gentleman-scholar society” or words to that effect :P
You can take the AI out of San Francisco, but…
What I think you’re overlooking is that the model (if you believe the chain of thought) is mostly not doing stylometric analysis, it’s doing profiling.
Broadly:
British-presenting / ethno-nationalist / writes at length in a cultured register / HBD believer / argues for the ideal of the gentleman-scholar and a leisured aristocracy / argues from utilitarian logic therefore likely rat-adjacent + some other stuff -> 2rafa.
Assuming you don’t put these convictions in your hypothetical internal research reports, I would expect it to be orders of magnitude harder to identify you.
But then I think it would have known about me, and it clearly didn’t. Even when I confirmed the quote was from a motte poster, and asked for a big longlist of candidates, my handle never came up at all. Instead it started listing notables like Zorba even when they couldn’t possibly fit the profile. Very clearly it doesn’t know mid-tier posters like me. May not stay that way of course.
But ‘identifies 2rafa’s writing as 2rafa’s writing’ is a big difference from reading your comments on Instagram or company PR releases and linking you to known dissident 2rafa. I’m not saying it’s impossible, but doxxing someone is an order of magnitude harder as long as you don’t have a very particular stylistic tell.
It doesn't know I exist. Specifically, it seems to mostly draw from the Reddit period and AAQCs. Obscurity is your friend, people!
Honestly? My Motte-specific knowledge is thin and patchy, and I should be upfront about that rather than pretend to a corpus I don't have. What I think I actually have:
AAQCs (Quality Contributions roundups) are the most likely source — they get reposted, linked, quoted on Twitter, and occasionally end up excerpted on other blogs or in ACX link roundups. That's the kind of content that gets scraped into training data multiple times via multiple routes, which is how specific passages stick.

Culture War Thread weekly roundup posts from the /r/TheMotte era, when it was still on Reddit, are more likely to have been indexed than anything after the move to themotte.org. Reddit is heavily represented in training data; standalone forums much less so.

Meta-discussion about the Motte on SSC/ACX, LessWrong, Twitter, and rationalist-adjacent blogs — so I know about certain handles (2rafa, Hlynka, Trace, KulakRevolt before he left, Darwin before he was banned, etc.) more than I know their actual posts. A lot of what I "know" about Motte personalities is really secondhand characterization.

Full threads, not really. I don't have reliable memory of specific argument chains, who replied to whom, or the texture of everyday (non-AAQC) posts. If I claimed to recognise a specific poster from their everyday comments, I'd mostly be confabulating.
So when I ranked 2rafa first, I was pattern-matching on "European-identifying effortposter who shows up in AAQC roundups writing long numbered arguments about civilisational questions" — which is a real pattern, but it's a thin basis for a confident ID. Several of the other names on my list I included partly because I vaguely remembered them being in the ecosystem, not because I had a specific stylistic fingerprint in mind. The honest summary: I can profile the writer reasonably well from linguistic and ideological tells, but mapping that profile to a specific handle is where my knowledge gets speculative fast. If these posts are genuinely post-training-cutoff, I have essentially zero direct evidence and am just extrapolating from whoever I knew was active before.
That is, if you trust its own self-reporting, of course. And more parameters means more detailed recall. Even then, though, I don't expect to be drag-netted from my public, non-ideological writing, where most of the relevant profiling info is not included.
In the meantime @2rafa, please enjoy answering for my sins :-D
Nice! And kudos for going off and doing a proper test.
My instinct is that even with this type of training, LLMs will still be missing something essential.
Your instinct is probably correct IMO. This form of synthetic data generation is just another tool in the box, it's not the key to everything.
I will say that we've got far further than I ever expected us to get using these methods. I'm instinctively a Gary Marcus-style fan of embodiment and unsupervised learning; it seemed clear to me pre-LLM that models wouldn't be anything resembling intelligent without a body and the ability to interact with the real world and 'test' their understanding in real time. When LLMs came in, I felt I had to admit that I'd been wrong. It seems clear to me that we have managed to get to something I would call 'intelligence' (even if it's spiky and fails in some cases where humans would not fail) through these means. So I no longer trust my instincts as much.
This kind of semi-supervised exploration seems like a good compromise for now. I am also very interested in LLMs that can combine next-token video generation and text generation, because video generation requires understanding a bunch of stuff about the real world in order to produce consistent results, but that's a way off.
In this toy case it's just literally a calculator (a snippet of python code). The problem is 2+2, the calculator just does 2+2 and checks if the answer is the same as the LLM output. (The LLM is trained to format the final answer in a particular manner and wrap it with special tokens, so the verifier doesn't have to be able to interpret natural language.)
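A minimal sketch of what that verifier might look like, assuming the model wraps its final answer in `<answer>` tags (the tag format here is my own invention for illustration, not any lab's actual one):

```python
import re

# Toy verifier: the "calculator" computes the problem itself and compares
# the result against whatever the model wrapped in its answer tags.
def verify(problem: str, model_output: str) -> bool:
    match = re.search(r"<answer>(-?\d+)</answer>", model_output)
    if match is None:
        return False  # unparseable output counts as wrong
    claimed = int(match.group(1))
    # The "calculator": evaluate the arithmetic expression directly.
    expected = eval(problem, {"__builtins__": {}})  # fine for toy "2+2" inputs
    return claimed == expected

print(verify("2+2", "the sum of 2 and 2, so <answer>4</answer>"))     # True
print(verify("2+2", "2 and then another 2, so <answer>22</answer>"))  # False
```

The point of the special tokens is exactly what this shows: the verifier never has to parse the natural-language reasoning, only the tagged answer at the end.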
You can get surprisingly far with this. If it's a calculus question, you can use an automatic differentiator to check it. Likewise for factorisation questions, metric conversion questions, algebraic manipulation of formulae, etc. Put a little work into programming the automatic verifier and you can get an infinite number of problems.
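For the calculus case, one stdlib-only sketch: check a claimed derivative against a finite-difference estimate at a few sample points. (A real pipeline would more likely use a symbolic maths library, but the principle — verify without knowing the answer in advance — is the same.)

```python
import math

def check_derivative(f, claimed_df, points, h=1e-6, tol=1e-4):
    """Accept claimed_df as the derivative of f if it matches a
    central-difference estimate of f's slope at every sample point."""
    for x in points:
        numeric = (f(x + h) - f(x - h)) / (2 * h)
        if abs(numeric - claimed_df(x)) > tol:
            return False
    return True

# d/dx sin(x) = cos(x): the correct claim passes, a wrong one fails.
print(check_derivative(math.sin, math.cos, [0.0, 0.5, 1.0]))  # True
print(check_derivative(math.sin, math.sin, [0.5, 1.0]))       # False
```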
If you're a big company, you might have human domain experts doing some of this work too. If you're a smaller company you have a big LLM do verification for the smaller ones.
Then you have leetcode and programming problems, and again you can verify these automatically. Does the program compile? Is the program output what was requested? Is it faster than the previous solution?
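The programming-problem verifier can be sketched the same way: run the candidate program and compare its output to what was requested. (This assumes Python submissions checked by stdout matching; real judges also diff against reference solutions, enforce memory limits, and so on.)

```python
import subprocess
import sys

def check_program(source: str, expected_stdout: str) -> bool:
    """Run a candidate Python program and check its output."""
    try:
        result = subprocess.run(
            [sys.executable, "-c", source],
            capture_output=True, text=True, timeout=5,
        )
    except subprocess.TimeoutExpired:
        return False  # infinite loops count as failures
    # "Does it run?" and "is the output what was requested?" in one check.
    return result.returncode == 0 and result.stdout.strip() == expected_stdout

print(check_program("print(sum(range(10)))", "45"))  # True
print(check_program("print(2 ** 10)", "1000"))       # False
```

The "is it faster?" check is just the same harness with a timer around the run.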
Like I said, this only works for maths, programming, and other domains where you can verify the answer with a computer relatively cheaply, but contra the model of multiple intelligence factors, heavy training on maths and programming seems to improve general intelligence and reasoning quite well.
Mid 30s, and I drink rarely because even one pint makes me woozy for a few hours and that's not fun unless I'm with friends. Drank 3 pints one evening last week, had very restless sleep and was hungover and unable to work until about 3pm. That's a bit extreme for me but it's just not something I can do any more.
In general I think it has less to do with age and more to do with drinking frequency, which correlates with age for various reasons. My father is like @MaximumCuddles and has more every single day than I would in a month. He doesn't sleep well but otherwise shows no ill effects.
Question: What is 2 + 2
Model: Hmm, that’s 2 and then another 2, so 22.
AUTOMATIC VERIFIER: WRONG.
——
Model: Hmm, that’s the sum of 2 and 2, so 4
AUTOMATIC VERIFIER: CORRECT.
The model is tweaked slightly to make the second output more likely, and that output is potentially added to the training set. Repeat for arbitrarily complex mathematics and other problems as long as the solution can be verified, even if it isn’t known in advance. In this way you can generate potentially infinite amounts of data, albeit limited to certain domains. However, problem solving ability has so far extended quite well to other domains even when trained in this manner.
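The loop above can be sketched as rejection sampling: draw many completions, keep only the ones the verifier accepts, and treat those as fresh training data. (The model here is a hypothetical stand-in for an LLM sampling step; real systems nudge the model's weights toward verified outputs rather than just filtering, but the data-generation principle is the same.)

```python
import random
import re

random.seed(0)  # reproducible toy run

def toy_model(problem: str) -> str:
    # Stand-in for an LLM sampling step: sometimes it concatenates
    # the digits, sometimes it actually adds them.
    return random.choice([
        "that's 2 and then another 2, so <answer>22</answer>",
        "the sum of 2 and 2, so <answer>4</answer>",
    ])

def verifier(problem: str, output: str) -> bool:
    m = re.search(r"<answer>(-?\d+)</answer>", output)
    return m is not None and int(m.group(1)) == eval(problem, {"__builtins__": {}})

# Sample many completions; only verified ones survive into the next round.
training_set = [out for out in (toy_model("2+2") for _ in range(100))
                if verifier("2+2", out)]

print(all("<answer>4</answer>" in out for out in training_set))  # True
```

Everything that reaches the training set is correct by construction, which is why this scales to problems whose solutions aren't known in advance — only checkable after the fact.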
Has faced the judgement of ages and come out on top.

I do note that EA and various other ‘bleeding-heart’ movements also tend to be disproportionately Jewish.
One might fairly argue that almost every movement is disproportionately Jewish, due to high IQ and great verbal skills, but there’s something there IMO.