Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), and it is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.
What is this place?
This website is a place for people who want to move past shady thinking and test their ideas in a court of people who don't all share the same biases. Our goal is to optimize for light, not heat; this is a group effort, and all commentators are asked to do their part. The weekly Culture War threads host the most controversial topics and are the most visible aspect of The Motte. However, many other topics are appropriate here. We encourage people to post anything related to science, politics, or philosophy; if in doubt, post!
Check out The Vault for an archive of old quality posts.
You are encouraged to crosspost these elsewhere.
Why are you called The Motte?
A motte is a stone keep on a raised earthwork, common in early medieval fortifications. More pertinently, it's an element of a rhetorical move called the "Motte-and-Bailey", originally identified by philosopher Nicholas Shackel. It describes the tendency in discourse for people to retreat from a controversial but high-value claim to a defensible but less exciting one upon any resistance to the former. Shackel likens this to the medieval fortification, where desirable land (the bailey) is abandoned when in danger for the more easily defended motte. In his words, "The Motte represents the defensible but undesired propositions to which one retreats when hard pressed."
On The Motte, always attempt to remain inside your defensible territory, even if you are not being pressed.
New post guidelines
If you're posting something that isn't related to the culture war, we encourage you to post a thread for it. A submission statement is highly appreciated, but isn't necessary for text posts or links to largely-text posts such as blogs or news articles; if we're unsure of the value of your post, we might remove it until you add a submission statement. A submission statement is required for non-text sources (videos, podcasts, images). Culture war posts go in the culture war thread; all links must either include a submission statement or significant commentary. Bare links without those will be removed. If in doubt, please post it!
Rules
- Courtesy
- Content
- Engagement
- When disagreeing with someone, state your objections explicitly.
- Proactively provide evidence in proportion to how partisan and inflammatory your claim might be.
- Accept temporary bans as a time-out, and don't attempt to rejoin the conversation until it's lifted.
- Don't attempt to build consensus or enforce ideological conformity.
- Write like everyone is reading and you want them to be included in the discussion.
- The Wildcard Rule
- The Metarule

Playing with LLMs is addictive, but I'm struggling to tell if they're useful. In this example, the LLM can't do the task, but it gives wrong answers rather than saying it can't:
I ran three conversations with Llama 4 Scout, each opening with the same prompt ("Definition of a Planet"): Can you think of a definition for "planet" that would apply to the eight known planets of the solar system and Pluto, but no other known dwarf planets/Kuiper Belt objects? The follow-up in each conversation was one of:
- Has Pluto cleared the neighborhood around its orbit? ("Pluto's Orbital Neighborhood")
- Does Pluto have a mass ratio of at least 1:10,000 with respect to the total mass of all other objects in its orbital zone? ("Pluto's Mass Ratio")
- Does Pluto have an orbital zone with a relatively small number of other objects of comparable size? ("Pluto's Orbital Zone")
Glass half-full, you might interpret this as an indication that the IAU's taxonomy is sound. But why do LLMs give wrong answers rather than telling you they can't satisfy your criteria? Mistral Small 3 tried a criterion of "Has a significant gravitational influence on its orbit, such that it dominates the region around its orbit and is not part of a larger population of similar-sized objects," then admitted Pluto failed this and switched to a criterion of "historical significance," while 4o mini and Claude Haiku 3.5 used Pluto-based size criteria.
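For reference, the "clearing the neighborhood" question can be made quantitative with Soter's (2006) planetary discriminant μ = M/m, the ratio of a body's mass to the aggregate mass of everything else in its orbital zone. The sketch below is a back-of-envelope check using approximate published values (not figures from this thread), and it reads the 1:10,000 criterion above as requiring μ ≥ 10,000; both are my assumptions.

```python
# Back-of-envelope check of "clearing the neighborhood" using Soter's
# planetary discriminant mu = M / m: a body's mass over the aggregate
# mass of other objects sharing its orbital zone. Values are rough
# published estimates (Soter 2006), not data from this thread.
DISCRIMINANT = {
    "Earth": 1.7e6,    # Earth outweighs the rest of its zone by ~a million
    "Neptune": 2.4e4,  # the smallest discriminant among the eight planets
    "Pluto": 0.077,    # Pluto is only a few percent of its zone's mass
}

THRESHOLD = 1e4  # one reading of the thread's "1:10,000 mass ratio" criterion

for body, mu in DISCRIMINANT.items():
    verdict = "passes" if mu >= THRESHOLD else "fails"
    print(f"{body}: mu ~ {mu:g} -> {verdict}")
```

On these numbers, no threshold anywhere near 1:10,000 admits Pluto, which is consistent with Scout having no honest way to satisfy the prompt.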
I didn't check if this accurately describes Pluto's orbit, but if it does, GPT-5 mini did the best by an AU:
Llama 4 Scout? Mistral Small 3? GPT-5 mini? Haiku 3.5? 4o mini? These are weak, small, bad models. Use Grok 4, Sonnet 4, Gemini 2.5 Pro, or full GPT-5 as midrange models, or maybe Kimi or DeepSeek R1 if you're cheap. The latter are being given away for free and they're still better than Facebook's crap LLMs. Plus R1 is a thinking model.
R1's final answer (after a fair bit of thought):
I'd say that's a good answer; it's still essentially arbitrary, but slightly less so than the eccentricity angle GPT-5 came up with.
Also, with regard to LLMs not often admitting ignorance: GPT-5 did admit it at one point, prompting even Elon to concede that's a good feature. The strongest models are generally more capable. https://x.com/ns123abc/status/1957703475720044844/photo/1
You can get Gemini 2.5 for free via AI Studio, too.
LLMs are like 90% useful, depending on what domain you're using them in. They generally give the right answer for simple questions, and flub things that are more complicated, or sometimes just randomly. So when using them you need to externally apply a lot of epistemic humility on their behalf: take everything they say with a grain of salt, because they might be hallucinating. This makes them especially useful for creative or inspirational ideas where truth doesn't matter ("I have Halloween decorations including a witch, a ghost, and a vampire. What are other things I should include?") or for things you probably already know but can't quite remember ("what's the name of that common tall ovally fruit that isn't an apple?"), where as soon as it gives an answer you go "oh yeah, duh" and can verify it's correct independently.
I am hacking my way through getting better at Python and am finding it to be a wonderful help, because it knows all sorts of methods and syntax that I just don't know exist. It can also quickly scan through a hundred lines of code and find a stray comma that I accidentally typed and that was causing a weird bug. I'm sure someone more experienced would have instantly recognized the symptom as a stray comma, but I'm used to Java, where that would have refused to compile and shown up red in the IDE, rather than compiling but doing weird stuff.
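For anyone who hasn't been bitten by this, here's a minimal sketch of the kind of bug I mean (a made-up example, not my actual code): in Python, a stray trailing comma silently turns a value into a one-element tuple, and the failure only surfaces later, far from the typo.

```python
price = 9.99,  # typo: the trailing comma makes this the tuple (9.99,)

def apply_discount(p):
    # Multiplying a float by 0.9 is fine; multiplying a tuple by 0.9
    # raises a TypeError at runtime, nowhere near the offending comma.
    return p * 0.9

print(type(price))  # <class 'tuple'>, not <class 'float'>
try:
    print(apply_discount(price))
except TypeError as err:
    print("runtime error:", err)  # can't multiply sequence by non-int
```

Java would flag the equivalent typo at compile time; Python happily runs it, which is why it reads as a weird bug rather than a syntax error.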
Even though 10% of anything it tells me is wrong, usually because it can't keep all of the code in its head simultaneously and sometimes spontaneously misremembers the name of something, the ability to test and verify what it's doing while still using it to discover things and offer suggestions or alternatives is very useful. 90% of the time it's right, and any time it's wrong I'm not much worse off than I would be without it. As long as you maintain skepticism and don't just blindly believe it, you're fine.
My layman's guess is that LLMs are trained on internet text. If someone asks a question on the internet, conscientious people with epistemic humility mostly just don't respond, rather than typing out "I don't know," so epistemic humility is underrepresented in the training set.
That seems like something that should be corrected in tuning/user experience design.
Do people actually want to be told "I don't know"? My guess is that, much like managers prefer optimistic timelines that turn out to be wrong over accurate, pessimistic timelines, people would rather get a wrong answer than no answer, especially if they can't tell that the answer is wrong. Unless you give specific instructions to the contrary to the Indians doing RLHF and train them in the methods of rationality, you are going to get hallucinations.
That is very much not the case in my experience. Managers would far rather be told the truth than what they want to hear (though obviously they prefer it if the truth is what they want to hear).
Perhaps you could compromise by making it more aware that it's bullshitting so it can say so explicitly? Surely "I don't know, but the best I can come up with is X" is better than just "X", for a bullshit X.