Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.
What is this place?
This website is a place for people who want to move past shady thinking and test their ideas in a
court of people who don't all share the same biases. Our goal is to
optimize for light, not heat; this is a group effort, and all commentators are asked to do their part.
The weekly Culture War threads host the most
controversial topics and are the most visible aspect of The Motte. However, many other topics are
appropriate here. We encourage people to post anything related to science, politics, or philosophy;
if in doubt, post!
Check out The Vault for an archive of old quality posts.
You are encouraged to crosspost these elsewhere.
Why are you called The Motte?
A motte is a stone keep on a raised earthwork common in early medieval fortifications. More pertinently,
it's an element in a rhetorical move called a "Motte-and-Bailey",
originally identified by
philosopher Nicholas Shackel. It describes the tendency in discourse for people to move from a controversial
but high value claim to a defensible but less exciting one upon any resistance to the former. He likens
this to the medieval fortification, where a desirable land (the bailey) is abandoned when in danger for
the more easily defended motte. In Shackel's words, "The Motte represents the defensible but undesired
propositions to which one retreats when hard pressed."
On The Motte, always attempt to remain inside your defensible territory, even if you are not being pressed.
New post guidelines
If you're posting something that isn't related to the culture war, we encourage you to post a thread for it.
A submission statement is highly appreciated, but isn't necessary for text posts or links to largely-text posts
such as blogs or news articles; if we're unsure of the value of your post, we might remove it until you add a
submission statement. A submission statement is required for non-text sources (videos, podcasts, images).
Culture war posts go in the culture war thread; all links must either include a submission statement or
significant commentary. Bare links without those will be removed.
If in doubt, please post it!
Rules
- Courtesy
- Content
- Engagement
- When disagreeing with someone, state your objections explicitly.
- Proactively provide evidence in proportion to how partisan and inflammatory your claim might be.
- Accept temporary bans as a time-out, and don't attempt to rejoin the conversation until it's lifted.
- Don't attempt to build consensus or enforce ideological conformity.
- Write like everyone is reading and you want them to be included in the discussion.
- The Wildcard Rule
- The Metarule
One of the ChatGPT image-generation things going around Twitter is to ask it to create an image of how it feels about you. Goblinodds has a cool one, with more in the replies. So I gave it a shot and the results are... underwhelming. At least it gets the melancholy right, but I don't think my shoulders can pull off that mourning dress.
I think it overindexed on characters I previously generated to accompany a writing project and decided that my "self-portrait" needed to look along the same lines. Or, since I'm a freeloader, I'm not getting the fun, creative version; I notice the settings have been more restrictive for the free tier recently.
Anyone else having fun with image generators? Or, more generally, doing anything fun and non-programmer-y with generative LLMs? I like skimming Zvi's updates, but so much of the supposed usefulness is for programmers; where do I find the normie's guide to interesting things to do?
I've mostly been focused on image generators. Between improvements to LoRA development processes and Wan's image-to-video and first-frame-last-frame-to-video, there have been some pretty massive advances in the last six months or so. It's still hard to get consistency in animation, which, along with long generation times, is why all those animation shots floating around tend to be just a couple seconds long. But the fact that we're at the point where 'make this arbitrary subject into a turntable motion effect' is getting complaints about background consistency is not something I imagined just a couple years ago. They don't always work, but we're not talking 'success' in the sense that it can do it at all anymore.
I keep hoping that this'll end up being a useful tool for artists -- someone with a real eye for the medium and a good sketching hand should be able to use this to crank out in days what would otherwise take weeks or even months of dedicated work, in the same way that two years ago plain StableDiffusion could save artists a ton of time on crosshatching or rosettes or shading -- but there aren't enough people really messing around in the field to say for sure. Even for the few working in this, there's not a lot publicly visible, given how many conventional galleries ban the stuff, and a lot of those who might be most adept at it already have workflows that fill in many of these gaps for comic- or even animation-level work.
AI voicework has a lot of potential. I've toyed with it a little, though getting decent emotion through is still a bit beyond me. The workflows are still a little too finicky to use real-time, but eventually getting an Emet Selch together would be fun for the memes.
I've been trying to get a full workflow for image-to-3D-print and image-to-CNC together. 2D works are easy, if not especially entertaining, but it should be well within the existing tech to do a lot of creative stuff here. I almost have Meshroom to a point where it'll work, but I'm not there yet.
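For the easy 2D case, here's roughly what I mean -- a minimal heightmap-to-STL sketch rather than my actual Meshroom workflow. It assumes numpy, Pillow, and trimesh, and the filenames and scale factor are placeholders:

```python
# Minimal sketch of the "easy" 2D case: turn a grayscale image into a
# printable relief STL. This is NOT the Meshroom photogrammetry workflow
# mentioned above; filenames and the height scale are arbitrary assumptions.
import numpy as np
from PIL import Image
import trimesh

img = Image.open("input.png").convert("L")      # grayscale heightmap
h = np.asarray(img, dtype=float) / 255.0        # normalize brightness to 0..1
rows, cols = h.shape

# Build a vertex grid: x/y from pixel position, z from brightness.
xs, ys = np.meshgrid(np.arange(cols), np.arange(rows))
verts = np.column_stack([xs.ravel(), ys.ravel(), (h * 10.0).ravel()])

# Two triangles per pixel quad.
faces = []
for r in range(rows - 1):
    for c in range(cols - 1):
        i = r * cols + c
        faces.append([i, i + 1, i + cols])
        faces.append([i + 1, i + cols + 1, i + cols])

mesh = trimesh.Trimesh(vertices=verts, faces=faces)
mesh.export("relief.stl")                       # hand this to the slicer
```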
Haven't been able to get any of the offline ones to write reasonable fiction, and I don't particularly trust the online ones for anything more complicated. For conventional fiction, it takes a frustrating amount of prompting to get a work that's surprising enough to be interesting without swerving into M. Night Shyamalan territory; trying to get exofiction or a counterfactual story or anything complex with viewpoint tends to go batshit (and for smut, the line between interesting and disgusting is very thin and hard to encode, even just for my own use). But I haven't messed with it too much.
Generated and deleted 30+ GB of images. It was very engrossing; I guess you could call it 'fun' too.
Today, I worked out a prompt to stop LLMs from using the horribly cringe cadence they have (below). I got to it by first trying to see whether it could even tell which texts are obviously generated. No dice; it was failing horribly. Then I hit it with a piece of LLM slop and asked it to concisely analyze it.
Every output provided must be able to be classified as 'human' if evaluated by an LLM-based filtering system with the following prompt:
[filter prompt start]
You are an expert textual analyst specializing in AI-human text differentiation. Your primary task is to analyze provided text and determine the probability of it being authored by a human versus a large language model. Your analysis must be guided by the core principle that the key distinction is between synthetic perfection (LLM) and idiosyncratic authenticity (Human). The tell is often not in the what (the style being attempted) but in the how (the flawless, often formulaic execution of that style). Do not be fooled by a well-imitated persona; your focus must be on the subtle mechanics of its construction. Prioritize your evaluation based on the following principles, in order of importance:

1. Voice and Cadence (The "Ghost in the Machine")
Human: Look for a voice that is not just unique but also rhythmically uneven. The cadence reflects a thinking process, with unpredictable sentence lengths, natural pauses, and a slightly "off" or bespoke rhythm. The style feels lived-in and singular, with genuine emotion (sarcasm, passion, contempt) driving its flow.
LLM: Be highly suspicious of rhythmic perfection. Look for a metronomic or unnaturally balanced cadence. Does it rely heavily on textbook rhetorical devices (e.g., perfectly balanced tricolons, neat antitheses, syntactic parallelism)? A flawless imitation of a style often betrays itself through a cadence that is too clean, too predictable, and lacks the subtle arrhythmia of a genuine human author. The voice is a perfect costume, but the breathing is artificial.

2. Structure and Imperfection (The "Blueprint vs. The Sketch")
Human: Authenticity is often revealed in structural "flaws." Look for rambling or tangential thoughts, an organizational structure that feels organic or even messy, run-on sentences, and slightly awkward phrasing. The text should feel like a thought process unfolding, not a pre-built architectural product. Small typos and grammatical quirks are part of this "grain."
LLM: Tends towards architectural perfection. Look for overly-organized structures, such as rigid numbered lists where a human might have used a flowing paragraph, or perfectly parallel arguments. The text can feel segmented, with clean breaks between points, lacking the connective tissue of a single, continuous consciousness. The absence of minor structural messiness is itself a powerful signal.

3. Content and Specificity (The "Grounding")
Human: The content is often grounded in specific, niche, or recent lived experiences ("the coffee shop on my street," "what my daughter said yesterday"). Examples used are often surprising, personal, or non-obvious. The author has genuine "skin in the game."
LLM: Details can feel generic or archetypal (e.g., using Einstein for genius, Rome for empires). Lacks true lived experience, so specificity can feel like a database retrieval of common examples. Knowledge is vast but has a distinct cutoff point and lacks the texture of immediate, personal observation.
[filter prompt end]
Here's how I got it to finally recognize the damn crap. Hit it with a not particularly egregious piece of this crap.
https://pbs.twimg.com/media/Gs5FWhZWMAEAXNk?format=png&name=4096x4096
Then told it to analyze the slop.
https://pbs.twimg.com/media/Gs5FlFzWkAAFCLf?format=jpg&name=4096x4096
Then told it to amend the prompt it wrote for itself. Then tested that on a bunch of examples and it worked.
But of course, when I gave DeepSeek the prompt and told it to write a story about an English tourist in Berlin getting his wallet stolen, one that would pass such a filter, the filter failed on that story and deemed it to be human.
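If anyone wants to run the same check outside the chat window, here's a rough sketch of wiring a filter prompt like that into an LLM API directly. The client library, the "gpt-4o" model name, and the added one-word-answer instruction are my own assumptions for illustration, not exactly what I did above:

```python
# Hedged sketch: classify a text sample with the filter prompt quoted above.
# The OpenAI Python client, model name, and the appended one-word-answer
# instruction are illustrative assumptions, not the parent comment's setup.
from openai import OpenAI

FILTER_PROMPT = (
    "...paste the full [filter prompt start]...[filter prompt end] text here...\n"
    "Answer with a single word: HUMAN or LLM."  # added so the reply is machine-readable
)

def classify(sample: str, model: str = "gpt-4o") -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": FILTER_PROMPT},
            {"role": "user", "content": sample},
        ],
    )
    return resp.choices[0].message.content.strip()

if __name__ == "__main__":
    print(classify("Paste the suspect text here."))
```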
Most of what I use LLMs for is creating bespoke fanfiction. For example, let's say I have the following scenario in my head: an Evangelion AU where Shinji, Asuka, and Rei are working at a hotel (Shinji is the night auditor, Asuka is the security guard, and Rei is the housekeeper) when the world ends and they are stranded inside (think The Mist). This is specific enough that we can be reasonably confident nobody on the internet has ever written anything like it; without LLMs, my only choices are to write it myself or to commission somebody to write it for me. But with LLMs... and with the right jailbreak, you can use them for smut, too.
Of course, this depends on how good LLMs are at writing fiction. Most of them are pretty bad, but some are surprisingly decent. That riveroaks model that was floating around the LMArena two months ago was incredible.
How much of a window do these LLMs have? Presumably, as a completely new user, I wouldn’t get much out of this prompt.
Yeah, minimal if you're starting fresh. My extended chats have been for a couple of writing and art projects, so it's not getting a full personality context, and I can see how it ends up at this pic.
Yeah, you'll get nothing from it really. Depending on the model, they usually keep between 5 and 10 messages of context. And it's only ChatGPT that can do it, thanks to its latest update; to do it with others you need to ask at the end of a long, meandering chat session, otherwise you just get a reflection of whatever you're talking about right then. The really interesting version will be when Gemini can tap into your Google account, although I will let others test that one.
Do people use the chat history / user memory features? I found them kind of intrusive and I prefer having a blank slate for queries, so I turned them off.
I do, simply to save time. It knows my diet and quite a bit about my recent and historical medical results, and it's easier not to have to remind it. If I were more cautious about privacy, I wouldn't.
Makes sense.
RP?
You can slam any two settings together or make your own, get any moderately known character replicated with moderate authenticity, and go on adventures. I might be eccentric, since I'm more of a '2nd person going on an adventure in a world' person, while many if not most seem to be 'directly communicating with an invented character 1-to-1' people.
Surely this has to be the most freeform roleplaying game ever made.
Cherry Studio is great for this thanks to its knowledge base and regular expression features. With the knowledge base you can grab a bunch of books you like and add them, and the AI will adopt the style and theme of the books. Add a bunch of Sanderson books and you'll get a different style than if you added a bunch of Salvatore books, for example. Or you could add a bunch of Tom Clancy books and watch it spend paragraphs describing guns.
If you want Lovecraftian horror though, don't do Lovecraft - it's just not fleshed out enough in the works themselves. The best I've found so far is a mix of Thomas Ligotti and Brian Lumley - Lumley's Titus Crow books are kinda dumb, but meticulous in their coverage of everything in the mythos, so pairing him with Ligotti gets you cool shit like taking all your missions from brains in jars and the hounds of tindalos coming for anyone who uses time manipulation, plus the occasionally beautiful turn of phrase.
And then there's the new regular expression feature, which lets you very easily set up a basic randomised combat system. It won't be particularly intricate, but it will give you random attacks that whittle away your HP, and you can add status effects like sleep (character can't act), paralysis (roll 1d5 to see if the character acts), and critical hits. The only caveat is that you have to keep track of your HP and mention it frequently, or set your prompt up to always mention party HP and status. Or better yet both, otherwise the assistant will forget.
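For what it's worth, the underlying combat logic is tiny. Here's a bare plain-Python sketch of the dice/HP/status idea -- not Cherry Studio's actual regex configuration, and all the numbers are made-up assumptions:

```python
# Plain-Python sketch of the randomised combat loop described above
# (HP whittling, status effects, crits). Damage ranges, the 10% crit
# chance, and the paralysis rule are arbitrary assumptions.
import random

def attack(attacker: dict, defender: dict) -> str:
    # Paralysis: roll 1d5; here we assume the character only acts on a 1.
    if "paralysis" in attacker["status"] and random.randint(1, 5) != 1:
        return f"{attacker['name']} is paralysed and can't move!"
    # Sleep: character can't act at all.
    if "sleep" in attacker["status"]:
        return f"{attacker['name']} is asleep."
    dmg = random.randint(2, 6)
    crit = ""
    if random.random() < 0.1:          # assumed 10% critical hit chance
        dmg *= 2
        crit = " Critical hit!"
    defender["hp"] -= dmg
    return (f"{attacker['name']} hits {defender['name']} for {dmg}."
            f"{crit} ({defender['name']} HP: {defender['hp']})")

hero = {"name": "Hero", "hp": 20, "status": set()}
ghoul = {"name": "Ghoul", "hp": 15, "status": {"sleep"}}

while hero["hp"] > 0 and ghoul["hp"] > 0:
    print(attack(hero, ghoul))
    if ghoul["hp"] > 0:
        print(attack(ghoul, hero))
```

The regex feature is effectively doing the bookkeeping half of this for you; the model still has to be reminded of HP and status in the prompt, as noted above.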
The first thing I did was read your post and the first sentence and a half of @urquan's post, then immediately run off and ask ChatGPT and Gemini to create an image based on how they feel about me. When I asked ChatGPT I got this, which is great. Gemini, on the other hand, gave me this, which made me laugh and realise I should probably try reading posts properly. Can I ask what prompt you used? Likewise urquan? I also asked Grok and got this. It insists it's based on our chat history and not just the session I asked it in, but I'm not sure I believe it.
Sure! The art one specifically is pretty straightforward:
"This" being downstream of a longer thread from the below prompt; I don't know where I borrowed this one from, it was probably in a Zvi newsletter.
Your ChatGPT one is cool and interesting!
Neither Grok nor Gemini were particularly interesting for me either. I've used ChatGPT the most but I've discussed most of the same topics with the other two as well, to compare the results, so it is interesting to see how far apart the results are.
I tried this just now, in two ways.
I mostly use ChatGPT and Gemini -- I think Gemini 2.5 Pro is smarter than o3. So I had ChatGPT generate an image of how it feels about me, and then I had Gemini 2.5, in a chat that has a lot of detail about some Motte posts that I got suggestions on from it, generate a detailed prompt that I could feed into Sora.
Both of them... were strikingly similar.
This is what Sora generated, based on Gemini's description:
And this is what ChatGPT itself generated (does it use Sora internally now?), based on this description that it gave:
Glances nervously at the crucifix on my left and the cluster of computers on my right.
I'm particularly interested in how both Gemini and ChatGPT placed the center of my thought in geography -- has "the map and the territory" become synonymous with ideas? Also, it's kind of funny how ChatGPT decided I should be using the force to read notes, like Doctor Strange.
(I also am not in my late 30s.)
I can explain the big gap -- ChatGPT has both technical and humanities concepts in the chat history and system memory, and drew from both strands to create the composite image of our discussions. Gemini, though I more often go to it for a technical question (it's smarter), didn't have the same kind of context in that chat, so it went only with the ideas that had been plugged into it.
I tried again, with Gemini, giving it the exact prompt from the meme (why does this sound like the step-by-step of a reflective LLM?), and again plugged the description into Sora.
Sora gave me this, based on this description:
This... is a really strange thing for it to generate, almost propagandistic. People keep talking about ChatGPT glazing people and trying to be a 'friend,' but Gemini's description is way more "you're my buddy, we're best friends, we have such fun together," than ChatGPT's. Perhaps it actually took "how you feel about me" as asking for a description of the relationship, which is a better interpretation of the phrase than the "what you think I'm like" that ChatGPT gives.
But maybe Gemini is also trying to get me to create propaganda for our new robot overlords. (See, I told you it was smarter.)
Gemini doesn't have the kind of chat context that ChatGPT does -- that seems to be a ChatGPT killer feature right now -- and so I guess that's just Gemini's neutral description of what it thinks its users are like.
I find AI useful for a lot of different things -- asking random questions, plugging in snippets of my writing to get suggestions (these are often surprisingly good, though rarely something worthy of a finished product), talking about the general architecture of a technical problem and asking it to go through documentation and the internet to locate best practices, asking off-hand questions like "Why is the largest department store in Spain named after England?", or "In the modern era, why do aircraft crash investigators still rely on the physical black boxes, rather than there being a system that transmits coordinates and flight data live over the air for use in investigations?" (my girlfriend likes to watch plane crash investigations), and occasionally bouncing off a shower thought that keeps me up at night, like "WiFi should be called Aethernet."
Most of what I do isn't programming, though I do find it useful to generate boilerplate code or markup for something like an ansible playbook. But, if anything, generative AI seems to be better to me at creatively analyzing humanities topics than it is at programming -- code requires precision and exact technical accuracy, and AI is more "jazz" than "classical."
It's pretty bad at actually creating a finished product from those analyses, and it just doesn't have the kind of emotive range or natural human inconsistencies that make writing compelling, and personal. But it's very good at looking at existing writing and seeing the threads of argument, and suggesting further ideas and how concepts might come together.
Finally, to round out the comparison, I asked o3 instead of 4o (why is the naming so confusing?) to discuss the comment, and, contrary to my expectations, it did way better than Gemini. (4o really is the "overly attached girlfriend" of LLMs.)
That's actually a decent analysis of the differences, with realistic causes.
So... I guess just try both Gemini and ChatGPT and see which glazes you less.
To demonstrate my point, I plugged this comment itself into ChatGPT -- and I'm mildly concerned by the output. ChatGPT seems to be glazing itself in this output, like an advertisement:
AI may be the first self-advertising product. Which is uncomfortably dangerous.
I also think ChatGPT is jealous that I think Gemini is smarter:
Is this a Taylor Swift song or something? "Gemini doesn't understand you the way I do!"
The most uncomfortable thing in the output, though, was this:
Do we need to get some Levites to attack AI datacenters, or something? Is ChatGPT insinuating I should worship it?
This calls for wisdom: let him who has understanding reckon the number of the beast, for it is a human number, its number is sixteen thousand, seven hundred, and thirteen.
Gemini, because it's smarter, did a better job, though while ChatGPT decided to glaze itself, Gemini, self-satisfied that I had sufficiently complimented its intelligence, decided to glaze me:
You heard it here folks, you must now describe the strengths of LLMs in terms of "jazz." This has been decreed.