Transnational Thursday is a thread for people to discuss international news, foreign policy, or international relations history. Feel free to drop in with coverage of countries you're interested in, talk about ongoing dynamics like the wars in Israel and Ukraine, or just share whatever you're reading.
What is this place?
This website is a place for people who want to move past shady thinking and test their ideas in a
court of people who don't all share the same biases. Our goal is to
optimize for light, not heat; this is a group effort, and all commentators are asked to do their part.
The weekly Culture War threads host the most
controversial topics and are the most visible aspect of The Motte. However, many other topics are
appropriate here. We encourage people to post anything related to science, politics, or philosophy;
if in doubt, post!
Check out The Vault for an archive of old quality posts.
You are encouraged to crosspost these elsewhere.
Why are you called The Motte?
A motte is the raised earthwork, typically topped by a wooden or stone keep, common in early medieval fortifications. More pertinently,
it's an element in a rhetorical move called a "Motte-and-Bailey",
originally identified by
philosopher Nicholas Shackel. It describes the tendency in discourse for people to move from a controversial
but high-value claim to a defensible but less exciting one upon any resistance to the former. He likens
this to the medieval fortification, where desirable land (the bailey) is abandoned when in danger for
the more easily defended motte. In Shackel's words, "The Motte represents the defensible but undesired
propositions to which one retreats when hard pressed."
On The Motte, always attempt to remain inside your defensible territory, even if you are not being pressed.
New post guidelines
If you're posting something that isn't related to the culture war, we encourage you to post a thread for it.
A submission statement is highly appreciated, but isn't necessary for text posts or links to largely-text posts
such as blogs or news articles; if we're unsure of the value of your post, we might remove it until you add a
submission statement. A submission statement is required for non-text sources (videos, podcasts, images).
Culture war posts go in the culture war thread; all links must either include a submission statement or
significant commentary. Bare links without those will be removed.
If in doubt, please post it!
Rules
- Courtesy
- Content
- Engagement
- When disagreeing with someone, state your objections explicitly.
- Proactively provide evidence in proportion to how partisan and inflammatory your claim might be.
- Accept temporary bans as a time-out, and don't attempt to rejoin the conversation until it's lifted.
- Don't attempt to build consensus or enforce ideological conformity.
- Write like everyone is reading and you want them to be included in the discussion.
- The Wildcard Rule
- The Metarule

As an aside, this is the biggest source of my AI skepticism. AI will not be useful at scale unless it is truly reliable, and the current state of the art emphatically is not. The problem is not merely that it can fail to complete a task, but that it confidently pretends to have succeeded. In fact, the models do not seem capable of distinguishing on their own between success and pretend-success. That puts a hard limit on what kinds of tasks they can perform and at what scale. People like to describe working with an LLM assistant as having a fast-working junior employee always at your beck and call (you can offload your tasks, but you'll need to check the work), but for most applications it is more like keeping a dodgy outsourcing firm on call: not only do you have to check its work, but its errors are bizarre and can be deeply hidden, and it projects total confidence whether the results are perfect or nonexistent.
The lack of progress on this front by any of the major LLM companies makes me think it’s going to take a fairly significant breakthrough to fix, not merely “moar compute,” which makes the aggressive push for AI-everything seem… premature, shall we say. Certainly it does not seem to me that AGI is just around the corner.
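To make the failure mode concrete, here is a toy Python simulation (all names are hypothetical stand-ins; nothing here calls a real model): a "model" that is wrong a fifth of the time but reports the same high confidence either way, so its self-report tells you nothing about whether it actually succeeded.

```python
import random

# Toy simulation of the failure mode above: a "model" that is sometimes
# wrong but always reports the same high confidence. Everything here is
# a hypothetical stand-in; no real LLM API is involved.

ANSWER_KEY = {"capital of France?": "Paris"}

def toy_model(question: str) -> tuple[str, float]:
    """Return an answer plus a self-reported confidence score."""
    if random.random() < 0.8:
        answer = ANSWER_KEY[question]          # genuine success
    else:
        answer = "plausible-but-wrong output"  # confident failure
    return answer, 0.99  # confidence is flat, so it carries no signal

for _ in range(5):
    question = "capital of France?"
    answer, confidence = toy_model(question)
    # Only an external check separates success from pretend-success;
    # the model's own confidence cannot.
    correct = answer == ANSWER_KEY[question]
    print(f"confidence={confidence}  actually_correct={correct}")
```

The only thing that separates success from pretend-success here is the external answer key, which is exactly what most real-world tasks don't have.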
Of course! If there were a way to evaluate the quality of the result, the hyper-smart people earning billions of dollars would surely have thought of something as trivial as inserting "if the result is of low quality, try doing better" at the end of the AI pipeline. If we, the end users, are seeing low-quality results, that is hard evidence that their best efforts at evaluating result quality are failing. Otherwise they'd have built a perfect AI chat already and moved from billions to trillions.
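For illustration, this is roughly the pipeline step being mocked, sketched under the assumption that the same unreliable model both writes and grades the answer; generate() and self_evaluate() are hypothetical stubs simulated with coin flips, not any real API.

```python
import random

# Sketch of the "if the result is of low quality, try doing better" loop,
# assuming the same unreliable model both generates and grades. generate()
# and self_evaluate() are hypothetical stand-ins simulated with coin flips.
# We track ground truth only to score the experiment.

def generate() -> bool:
    """Pretend model call; True means the answer is actually correct."""
    return random.random() < 0.7

def self_evaluate(actually_correct: bool) -> bool:
    """The model grading its own output: only loosely coupled to the
    truth, so it waves through bad answers and rejects good ones."""
    return random.random() < (0.8 if actually_correct else 0.5)

def generate_with_retries(max_retries: int = 3) -> bool:
    result_ok = generate()
    for _ in range(max_retries):
        if self_evaluate(result_ok):  # grader shares the generator's blind spots,
            break                     # so it accepts confident nonsense too
        result_ok = generate()       # "try doing better"
    return result_ok                 # last attempt is returned unchecked

trials = 10_000
accepted_correct = sum(generate_with_retries() for _ in range(trials))
print(f"correct after retry loop: {accepted_correct / trials:.0%}")  # well short of 100%
```

The loop only helps to the extent the evaluator is more reliable than the generator, which is precisely the capability the parent comment says is missing.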