Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.
What is this place?
This website is a place for people who want to move past shady thinking and test their ideas in a
court of people who don't all share the same biases. Our goal is to
optimize for light, not heat; this is a group effort, and all commentators are asked to do their part.
The weekly Culture War threads host the most
controversial topics and are the most visible aspect of The Motte. However, many other topics are
appropriate here. We encourage people to post anything related to science, politics, or philosophy;
if in doubt, post!
Check out The Vault for an archive of old quality posts.
You are encouraged to crosspost these elsewhere.
Why are you called The Motte?
A motte is a stone keep on a raised earthwork common in early medieval fortifications. More pertinently,
it's an element in a rhetorical move called a "Motte-and-Bailey",
originally identified by
philosopher Nicholas Shackel. It describes the tendency in discourse for people to move from a controversial
but high value claim to a defensible but less exciting one upon any resistance to the former. He likens
this to the medieval fortification, where desirable land (the bailey) is abandoned, when in danger, for
the more easily defended motte. In Shackel's words, "The Motte represents the defensible but undesired
propositions to which one retreats when hard pressed."
On The Motte, always attempt to remain inside your defensible territory, even if you are not being pressed.
New post guidelines
If you're posting something that isn't related to the culture war, we encourage you to post a thread for it.
A submission statement is highly appreciated, but isn't necessary for text posts or links to largely-text posts
such as blogs or news articles; if we're unsure of the value of your post, we might remove it until you add a
submission statement. A submission statement is required for non-text sources (videos, podcasts, images).
Culture war posts go in the culture war thread; all links must either include a submission statement or
significant commentary. Bare links without those will be removed.
If in doubt, please post it!
Rules
- Courtesy
- Content
- Engagement
- When disagreeing with someone, state your objections explicitly.
- Proactively provide evidence in proportion to how partisan and inflammatory your claim might be.
- Accept temporary bans as a time-out, and don't attempt to rejoin the conversation until it's lifted.
- Don't attempt to build consensus or enforce ideological conformity.
- Write like everyone is reading and you want them to be included in the discussion.
- The Wildcard Rule
- The Metarule

Has anyone found LLM performance to actually improve with memory on? At least on ChatGPT, I find it overfits pretty severely to my previous chats and noticeably increases the rate of hallucination. For example, if I asked it to solve a geometry question in a previous chat, then ask it a question with the exact same structure but different parameters, it will sometimes give an incorrect answer that seems to have been poisoned by the output of the previous chat.
I keep going back and forth on how powerful I think these models are. There are moments where I'm impressed by a seemingly new thought that it must have extrapolated or reasoned out for itself, since it's unlikely anyone would have written it out (usually some combination of too niche and too obvious). Yet every time this happened, I eventually asked it for a source, and sure enough it linked to a page where someone had indeed spelled it out explicitly on the internet. Each new model produces outputs whose cracks take me longer to find, but the cracks are always there, and they are generally of the same type as the example above: the kind suggestive of a missing world model and simple stochastic interpolation of existing texts. I especially get the feeling that their understanding of the relations between objects in 3D space is rather poor. This apparently asymptotic improvement makes me think that what's needed is a rather drastic change in the fundamental structure of LLMs. But I'm just a layman, so I'm interested to hear others' thoughts and experiences.
Yeah, they are basically the platonically perfect wordcels. You can get around it with skills: render the results to an image and give it the image, or have a tool that will check details about the 3D space.
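The "tool that will check details about the 3D space" idea can be sketched roughly as below. This is a minimal, hypothetical example: the function names, the query format, and the idea of a harness that executes the model's structured calls are all my assumptions, not any particular vendor's API. The point is just that exact spatial facts get computed by code rather than guessed at by the model.

```python
import math

# Hypothetical spatial-reasoning tools an LLM harness could expose.
# Instead of reasoning about 3D relations in text, the model emits a
# structured call (e.g. {"tool": "distance", "args": [...]}) and the
# harness runs one of these functions and feeds the number back.

def distance(a, b):
    """Euclidean distance between two 3D points given as (x, y, z) tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_inside_box(point, box_min, box_max):
    """True if `point` lies within the axis-aligned box [box_min, box_max]."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, box_min, box_max))

# What the harness would return to the model for two sample queries:
print(distance((0, 0, 0), (3, 4, 0)))                  # 5.0
print(is_inside_box((1, 1, 1), (0, 0, 0), (2, 2, 2)))  # True
```

The design choice mirrors why calculators help with arithmetic: the model only has to decide *which* spatial question to ask, and the deterministic tool answers it exactly.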
Gemini with memory on seems to make reasonable guesses about why I'm asking a question. So far that's only a little useful for me, but it could possibly make it a better source of answers than I often am for the sort of XY-problem questions I sometimes get from others.