Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.
What is this place?
This website is a place for people who want to move past shady thinking and test their ideas in a
court of people who don't all share the same biases. Our goal is to
optimize for light, not heat; this is a group effort, and all commentators are asked to do their part.
The weekly Culture War threads host the most
controversial topics and are the most visible aspect of The Motte. However, many other topics are
appropriate here. We encourage people to post anything related to science, politics, or philosophy;
if in doubt, post!
Check out The Vault for an archive of old quality posts.
You are encouraged to crosspost these elsewhere.
Why are you called The Motte?
A motte is a stone keep on a raised earthwork common in early medieval fortifications. More pertinently,
it's an element in a rhetorical move called a "Motte-and-Bailey",
originally identified by
philosopher Nicholas Shackel. It describes the tendency in discourse for people to move from a controversial
but high value claim to a defensible but less exciting one upon any resistance to the former. He likens
this to the medieval fortification, where desirable land (the bailey) is abandoned when in danger for
the more easily defended motte. In Shackel's words, "The Motte represents the defensible but undesired
propositions to which one retreats when hard pressed."
On The Motte, always attempt to remain inside your defensible territory, even if you are not being pressed.
New post guidelines
If you're posting something that isn't related to the culture war, we encourage you to post a thread for it.
A submission statement is highly appreciated, but isn't necessary for text posts or links to largely-text posts
such as blogs or news articles; if we're unsure of the value of your post, we might remove it until you add a
submission statement. A submission statement is required for non-text sources (videos, podcasts, images).
Culture war posts go in the culture war thread; all links must either include a submission statement or
significant commentary. Bare links without those will be removed.
If in doubt, please post it!
Rules
- Courtesy
- Content
- Engagement
- When disagreeing with someone, state your objections explicitly.
- Proactively provide evidence in proportion to how partisan and inflammatory your claim might be.
- Accept temporary bans as a time-out, and don't attempt to rejoin the conversation until it's lifted.
- Don't attempt to build consensus or enforce ideological conformity.
- Write like everyone is reading and you want them to be included in the discussion.
- The Wildcard Rule
- The Metarule

Europe is not a serious country (or collective of countries):
https://eurollm.io/
The largest model is a paltry 9B parameters. I could run something of comparable size on my phone (maybe larger, depending on quantization; see the back-of-the-envelope sketch after this comment). Small isn't necessarily bad, but the performance is abysmal to boot.
As someone on HN points out, it:
Bruh. It's not like Mistral is doing so hot either. I suppose it's back to waiting for Gemini 3 and whatever else is cooking in Sino-American data centers. It's like the rest of the world is too poor or retarded to even try. I'd respect a Llama fine-tune more than this thing. Any decent model can handle all the EU "official" languages without breaking a sweat.
(In all fairness, it's a November 2024 model. They haven't done better since, and it was trash even back then.)
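To put numbers on the phone claim above: a model's weight footprint is roughly parameter count times bytes per weight, so quantization is what makes a 9B model phone-sized. A rough sketch in Python; the precision levels are common community choices (not anything EuroLLM ships), and the totals ignore KV cache and runtime overhead:

```python
# Back-of-the-envelope RAM needed just for the weights of a
# 9B-parameter model at common precisions. Real usage is higher
# (KV cache, activations, runtime), so read these as a floor.

PARAMS = 9e9  # EuroLLM's largest model: ~9 billion parameters

# bytes per weight; 4-bit quantization packs two weights per byte
precisions = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

for name, bytes_per_weight in precisions.items():
    gib = PARAMS * bytes_per_weight / 2**30
    print(f"{name}: ~{gib:.1f} GiB of weights")

# fp16: ~16.8 GiB -> desktop/GPU territory
# int8:  ~8.4 GiB -> high-end laptop
# int4:  ~4.2 GiB -> fits in a flagship phone's RAM, per the claim above
```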
To be fair, without Mistral giving Llama a sharp poke with a pointy stick (especially Mixtral 8x7B), the local scene might never have gotten anywhere in the first place.
Hmm... My recollection is getting hazy, but I recall that Meta would almost certainly have released open-source models anyway, simply to get a dig in at Google/OAI. If they hadn't, the Chinese would have; I don't believe that DeepSeek or the others all started as Llama forks (though I recall some did).
That reminds me that there's no word on new Meta models. I'm curious to see whether Zuck's spending spree pays any dividends.
They did, but the first Llama was basically rubbish, AFAIK. I tried it briefly as a novelty and gave up in disgust. Mistral 7B was the first model you could use and think, 'oh... there might be something in this.' Maybe Meta would have kept going, but there's a decent chance they would have given up.
The Chinese would probably have gone on regardless, but I think the local scene really kept things going in the long wait between GPT-4 and DeepSeek, both by letting people try lots of things that weren't officially sanctioned and by putting together infrastructure like OpenRouter. I don't think the Chinese releases would have made nearly such a splash if they'd just been another closed-source API model.
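On the infrastructure point: OpenRouter's contribution was a single OpenAI-compatible endpoint fronting many models, open and closed, so "trying lots of things" mostly meant changing one string. A minimal sketch, assuming its standard chat-completions API; the model slug here is illustrative:

```python
import os
import requests

# OpenRouter speaks the OpenAI chat-completions dialect; any listed
# model can be swapped in via the "model" field.
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "mistralai/mistral-7b-instruct",  # illustrative slug
        "messages": [{"role": "user", "content": "Say hi in French."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```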
It's only a matter of time.