Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.
It was able to identify me from text written after the Jan 2026 knowledge cutoff.
Your expectations are far too high if you don't think this is impressive. Model weights are incredibly compressed compared to the training corpus; it's impressive when models remember moderately famous people, let alone someone like me who's only barely broken out. Associating an old Reddit post with my wider work and then accurately joining the dots is impressive. Would the average human, shown a random Reddit comment from 5 years ago, pin it on the right person and connect it to their other work? It's not even a post that went viral, even though it was an AAQC. It also independently associated much of my wider work with me, including posts on LW and RoyalRoad where I use a different username (though I've linked between everything frequently enough). This is a clearly superhuman ability.
If that person had a searchable database? Sure.
It's possible I'm just entirely misunderstanding how they work down in the guts, but I interpreted the task as something like "I searched my database of training data, found the exact post, and replied with the linked username", which is powerful and "superhuman" in an objective sense, but the sort of thing I would have expected from Google search 15 years ago, pre-enshittification.
Identifying you by new writing would be much more impressive and alarming, and it sounds like they can actually do that for people like Scott, from some of the other posts people have made.
Models don't have access to training data at inference time.
Then, if it doesn't have access to internet search, how is it looking things up?
If I gave you a snippet of Shakespeare and asked you to guess who wrote it, I expect the Bard would be one of your top choices. How are you doing that if you don't consult Google or your Shakespeare box set?
Each token that the model sees in training updates its view of what sort of things are associated and in what way. Elements of style or topics may be clustered somewhere in the high dimensional latent space with the corresponding authors.
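The clustering intuition can be sketched with a crude bag-of-words stand-in for that latent space. This is a loose analogy, not what a transformer literally does, but it shows the same "sounds like" mechanism: a snippet lands nearest the author whose word choice it shares. All the texts here are made-up fragments for illustration.

```python
import math
from collections import Counter

# Hypothetical mini-corpus: two "authors" with distinctive word choice.
known = {
    "shakespeare": "thou art more lovely and more temperate thy eternal summer",
    "hemingway": "the old man fished alone in the gulf stream the sun rose",
}

def vectorize(text):
    # Each distinct word is one dimension; counts are the coordinates.
    return Counter(text.split())

def cosine(a, b):
    # Angle between two word-count vectors: 1.0 means identical style/topic mix.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# An unseen snippet is attributed to whichever author it sits closest to.
snippet = vectorize("thou art gone and thy summer with thee")
guess = max(known, key=lambda name: cosine(snippet, vectorize(known[name])))
```

A real model learns far richer features than raw word counts, but the underlying operation — placing text in a space where nearby points share style and authorship — is the same idea.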
I would have similar examples in my memory. What does that memory look like for an LLM, if not a reference to a database? I just asked Edge's default Copilot for "the 3rd line from Shakespeare's 31st sonnet", and for "without searching the web, the 5th line from Shakespeare's 41st sonnet", and it produced both without any trouble.
Are you suggesting that Copilot is re-deriving particular lines from the sonnets from first principles?
This is a sincere question, I honestly don't know how the nuts and bolts of these things work.
The memory is the weights of the model. The first stage of training an LLM is next token prediction - the LLM is shown a block of text and is trained to produce the next word (technically, part of a word, but that's not important). Internally, the model manipulates the numeric representation of the input tokens as points in a high dimensional space. The model produces a kind of probability distribution over all the words it knows for what the next word might be, and the weights of the model are adjusted so that it's more likely to expect the correct word.
The result is that every token the model sees, along with some context before it, leaves some kind of "impression" on the model. The details are fuzzy, but things like word choice or style are probably represented in some regions of that high dimensional space, which is what lets the model say that something "sounds like" Shakespeare.
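The training loop described above can be shown end-to-end on a toy scale. This sketch trains a bigram table (each word predicts the next) with the same softmax-plus-cross-entropy update that real LLMs use; the corpus and learning rate are invented for the example, and a real model swaps the lookup table for a deep transformer over trillions of tokens.

```python
import math

# Hypothetical toy corpus; real training data is trillions of tokens.
corpus = "the cat sat on the mat the cat ate the rat".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# One weight per (context word, next word) pair -- the "model".
W = [[0.0] * V for _ in range(V)]

def softmax(row):
    # Turn raw scores into a probability distribution over the vocabulary.
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

lr = 0.5
for epoch in range(200):
    for prev, nxt in zip(corpus, corpus[1:]):
        p = softmax(W[idx[prev]])
        # Cross-entropy gradient: predicted probability minus the one-hot target.
        # This nudges the weights so the observed next word becomes more likely.
        for j in range(V):
            target = 1.0 if j == idx[nxt] else 0.0
            W[idx[prev]][j] -= lr * (p[j] - target)

# After training, the model's "memory" is only these weights: asked what
# follows "the", it reproduces the corpus statistics (here, "cat" most often).
probs = softmax(W[idx["the"]])
best = vocab[probs.index(max(probs))]
```

Note that nothing here stores or searches the corpus at prediction time; the training text shaped the weights and was then discarded, which is the sense in which a model "remembers" without doing a lookup.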
Anthropic have some public research on looking inside a language model to see what's going on in that latent space and how neuron activations relate to concepts. It may be difficult to read without any ML background, but fortunately you can now feed it into an LLM and have it explain anything you don't understand.
LLMs being described as having ‘memory’ of things in the training set is almost certainly far closer to the colloquial, human understanding of what ‘memory’ is than either of the above concepts are to computer memory or an encyclopedia.
So if someone colloquially says the LLM has its training set in its memory, this is no less accurate than saying that you remember what the water cycle is even though you cannot recall the precise page, content, and diagram of the school textbook you learned it from. Or why you can identify a line of text written in "Trump voice" even though you cannot exhaustively list every Trump tweet you've ever seen.
It's one thing to say something is in the LLM's memory. It's quite another to say it's doing some kind of lookup in a database or into the training data.