This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes -
DO NOT POST AI CONTENT
We've only said this sporadically in the past. I'm talking to the other moderators and maybe we will have it added as a rule under the "content" section. Or maybe I'm wrong and all the other mods like AI content (highly doubt that).
We all know how to go and use an AI. If you want to have a discussion with an AI, themotte is basically just a bad intermediary. No one is here to have discussions with AIs. Thus posting AI content is in no one's interest.
You can of course consult AIs on your own time, and maybe they can be used as a sort of sanity or quick polling check.
Full agreement on my part. It's understandable that people are enthusiastic about this new technology, but ultimately if I wanted to read chatbot drivel I could order some up myself. I come to the motte to read what intelligent people have to write.
Yes, please. Posting AI slop is incredibly obnoxious. It adds nothing of value or even interest, and comes off like someone thinks that talking to their interlocutor isn't worth their time. It is maximum cringe.
I agree that explicitly focusing on actual humans interacting is the correct move, but I disagree that banning AI content completely is the right choice. I will back @DaseindustriesLtd here in that R1 really is just that intelligent and clears Motte standards with relative ease. I will shamelessly admit I consulted R1 at one point to try and make sense of some schizo writing in a recent thread, and it did a great job of it pretty much on the first try, without me even bothering to properly structure my prompt. This thread has seen enough AI slop, so here's a pastebin link to the full response if anyone's curious.
I think the downthread suggestion of confining AI-generated content to some kind of collapsible code blocks (and forbidding its use as the main content of one's post, like here: the AI might make a cogent, sound thesis on one's pet topic, but I'd still rather listen to the poster making the case themselves - I know the AI can do it if I ask it!) would be the best of both worlds.
It might be worth putting this in the rules of the CW posts.
Personally, I think that using AI on themotte is bad, while mentioning it is ok (if it is short and to the point). So if a comment is about an AI and its behavior in a CW context ("Deepseek claims XX, this shows that the CCP is willing ..."), that is fine with me. If it is something the poster could have researched themselves, then it should mostly be verboten (or at the very least highly rate-limited and restricted to known posters). Anyone can make a motte-bot which writes more text than all the real users combined, and I do not think any human would like to read that (and, as you mentioned, if that is their kink, they can always ask an LLM directly).
Actually, I would enjoy more discussion of this here, like on /r/LocalLLaMA.
Same. I've been using some really basic ChatGPT web apps to simplify basic research lately, and while it's amazing, it seems like a small fraction of its potential. Just being able to feed it vaguely worded tip-of-my-tongue questions and then double-check the answers is incredible.
Sounds like a Friday fun thread topic, or even a tinker Tuesday topic.
Thank you. The moment I see a bot quoted, whether a conversation, an essay, or even someone using a bot as a substitute for Wikipedia or to check facts, I stop reading.
I would hope that the point of a forum like this is for people to talk to each other. Not to vacuous robotic garbage.
I've noticed this myself. Actually, I'd like to suspend the rules so someone can do a single-blind test with AI-written posts to see if it's psychosomatic on my part.
It also tends to make my eyes glaze over. It just has such a boring style. Like I wonder if it's specifically selecting for not being readable by people with normal attention spans.
I like reading someone else's AI output, not as a Wikipedia fact check, but as a Wikipedia summary. "What's that concept I haven't heard of before, or that obscure historical figure, or event?"
Anything longer than a quick blurb and I'm right back with you.
I can see the value of quick explanatory blurbs, but I think in my case I just don't trust AIs or bots to accurately report factual information. Reading the AI summary would then make it necessary for me to look up the AI summary's claims in order to establish whether they're true or not, and at that point I might as well just skip the AI summary entirely and research it myself. There is no value gain from the AI, in either time saved or information received.
I think that intent and utility matters (and length!).
It's true that I could consult an AI if I wanted to, but probably not the same ones and not the same way as @DaseindustriesLtd because our minds don't work the same way. I don't want to have conversations with AI but I'm quite happy to have conversations with human centaurs augmented by AI.
Basically this.
At the very least, I would argue for being somewhat open now and seeing how things play out for the next 6 months.
I can't stop people from going and consulting AI. I did say in the original post that using it as a sort of sanity check or impromptu polling seems fine.
I'm personally not very interested in talking to the "centaurs" as you describe them (human centaurs seems redundant, unless you mean human legs and horse torso). I think there is a value in having another human brain process your words and spit back a disagreement about those words. If they are offloading the processing and the output to an AI they have just become a bad/slow interface for that AI.
I think we are basically at AGI right now. So hold the gates as long as we can and enjoy this space until the internet as we know it is gone in a flood of digital minds.
'Centaur' is sometimes used to describe an AI/human merger or collaboration. Half human, half machine, as it were. So, for example, a human using an AI for digging up sources / fact checking / style improvement is sometimes called a centaur. Anything where a human is still a significant part of the process.
I think it's wholly fair not to like AI writing; there are users I don't engage with either. I would merely ask the mods to be careful before they ban things that don't interest them, and to use a scalpel rather than a hammer where possible.
For example, I would agree with banning this, but in my opinion we should ban it because it's gish galloping, not because it's AI. We should penalise bad AI writing the same way we would penalise bad human writing: it's tedious and prevents good discussion.
I don't, oddly enough, which is perhaps why I'm more enthusiastic than you are. AIs have certain idiosyncrasies and weaknesses that cripple them in important ways, and they need a human hand on the tiller.
I know what you meant with centaur. I just thought it was redundant to say "human centaur".
Penalizing Gish Gallop specifically is hard. People may legitimately have many questions or objections to a specific point. It's just a far more obvious problem when you have an AI churning out text like that.
Fair.
You're going to have to clarify that a lot, because using short quotes from AI is normal, just like quoting from Wikipedia.
The rule would have to be something like "posts must meet effort standards without the generated content"
That seems... just as bad? Maybe worse? At least when Wikipedia hallucinates it provides references.
Well, I protest this rule, if such a rule even exists. I find it infantilizing, and I find your reaction shallow, akin to the screeching of scared anti-AI artists on Twitter. It should be legal to post synthetic content so long as it's appropriately labeled and accompanied by original commentary, and certainly when it is derived from the person's own cognitive work and source-gathering, as it is in this case.
Maybe add an option to collapse the code block or something.
Or maybe just ban me; I'm too old now to just nod and play along with the gingerly preserved, increasingly obsolete traditions of some authoritarian Reddit circus.
Anyway, I like that post and that's all I care about.
P.S. I could create another account and (after a tiny bit of proofreading and editing) post that, and I am reasonably sure that R1 has reached the level where it would have passed for a fully adequate Mottizen, with nobody picking up on “slop” when it is not openly labeled as AI output. This witch hunt is already structurally similar to zoological racism.
In fact, this is an interesting challenge.
If you were on a forum dedicated to perfecting your hand-drawing skills, and requested feedback for an AI-generated image, the screeching would be 100% justified.
I was not aware that this is a forum for wordcels in training, where people come to polish their prose. I thought it's a discussion platform, and so I came here to discuss what I find interesting, and illustrated it.
Thanks for keeping me updated. I'll keep it in mind if I ever think of swinging by again.
It is a discussion platform, which means people want to discuss their points with someone. The point where I was absolutely done with Darwin was when instead of defending one of his signature high-effort trolling essays, he basically said this was just an academic exercise for him to see if the position can be defended. The answer is "yes", you can always put a string of words together that will make a given position seem reasonable, and it's not really a discussion if you're completely detached from the ideas you've put to paper.
I find the "wordcell" accusation completely backwards. Supposedly we're obsessed with perfecting form to the detriment of the essence of discussion of ideas, but I think a zero-effort AI-slop copy-pasta is what is pure mimicry of what a discussion is supposed to be. The wordcell argument might have made sense if, for example, you did some heavy analytical work, weren't talented as a writer, and used AI to present your findings as something readable, but none of these things are true in this case.
I am quite happy with my analytical work that went into the prompt, and R1 did an adequate but not excellent job of expanding on it.
But I am done with this discussion.
My main objection to AI content on themotte is that it makes this place entirely pointless.
What is the difference between two people just posting AI arguments back and forth and me just going to an AI and asking that AI to play out the argument?
If you want such AIs arguing with each other, just go use those AIs. Nothing is stopping you, and in fact I'm fully in favor of you going and doing that.
This is like you showing up to a marathon race with a bicycle, and when not allowed entry you start screaming about how we are all Luddites who hate technology. No dude, it's just that this whole place becomes pointless.
Your specific usage of AI also has a major problem here, which is that you were basically using it as a gish gallop attack. "Hey I think this argument is wrong, so I'm gonna go use an AI that can spit out many more words than I can."
If this behavior was replicated by everyone, we'd end up with giant walls of text that we were all just copying and pasting into LLMs with simple prompts of "prove this fool wrong". No one reading any of it. No one changing their mind. No one offering unique personal perspectives. And thus no value in any of the discussion.
Really now?
This is what it looks like and this is how it will be used.
"To have an opportunity to talk with actual people" sounds like a really low bar to clear for an internet forum. Even if your AI slop tasted exactly like the real thing, it would just be good manners to refrain from clogging our airwaves with that.
Knowing that you're talking with something sapient has an inherent value, and this value might very well go up in the coming years. I can't say I even understand why you'd think anyone would find AI outputs interesting to read.
Bizarre reaction. But I like a sincere, organically produced tantrum better than a simulation of one, so I'd rank this post higher than the one above!
Because they're intelligent, increasingly so.
The argument that cognitive output is only valid insofar as it comes purely from flesh reduces intellectual intercourse to a prelude to a physical one. At least that's my – admittedly not very charitable – interpretation of these disgusted noises. Treating AI generation as a form of deception constitutes a profanation of the very idea of discussing ideas on their own merits.
This itself eventually poses a problem: if AIs get good enough at arguing, then talking to them is signing up to be mindhacked, which reduces rather than increases your worldview's correlation with truth.
That still would not make them human, which is the main purpose of the forum, at least judging by the mods' stance in this thread and elsewhere. (I suppose in the Year of Our Lord 2025 this really does need to be explicitly spelled out in the rules?) If I want to talk to AIs I'll just open SillyTavern in the adjacent tab.
This seems like a non-sequitur. You are on the internet; there's no "physical intercourse" possible here.
Sadly, what does the "physical" part even mean?

Far be it from me to cast doubt on your oldfag credentials, but I'll venture a guess that you're just not yet exposed to enough AI-generated slop, because I consider myself quite inundated and my eyes glaze over on seeing it in the wild unfailingly and immediately, regardless of the actual content. Personally I blame GPT: it poisoned not only the internet as a training dataset, infecting every LLM thereafter - it also poisoned actual humans, who subsequently developed an immune response to Assistant-sounding writing, and not even R1 for all its intelligence (not being sarcastic here) can overcome it yet.
Unlike humans, AI doesn't do intellectual inquiry out of some innate interest or conflict - not (yet?) being an agent, it doesn't really do anything on its own - it only outputs things when humans prompt it to, going off the content of the prompt. GPTslop very quickly taught people that the effort you might put into parsing its outputs far outstrips the "thought" that the AI itself put into it and - more importantly - the effort on behalf of the human prompting it, in most cases. Even as AIs get smarter and start to actually back up their bullshit, people are IMO broadly right to beware the possibility of intellectual DDoS, as it were, and to instinctively discount obviously AI-generated things.
If you really believe this - why don't you just take the next logical step and just talk to AIs full time instead of posting here?
Make them act out the usual cast of characters you interact with on here. They're intelligent, they're just as good as posters here, and you get responses on demand. You'll never get banned and they probably won't complain about LLM copypasta either. What's not to love?
If you do find yourself wanting to actually talk to humans on an Internet forum rather than to LLMs in a puppet house, hopefully it's clear why there's a rule against this.
Believe me, these days I do indeed mostly talk to machines. They are not great conversationalists but they're extremely helpful.
Talking to humans has several functions for me. First, indeed, personal relationships of terminal value. Second, political influence, affecting future outcomes, and more mundane utilitarian objectives. Third, an actually nontrivial amount of precise knowledge and understanding in areas where LLMs remain unreliable.
There are still plenty of humans who have high enough perplexity and wisdom to deserve being talked to for purely intellectual entertainment and enrichment. But I've raised the bar of sanity. Now this set does not include those who have kneejerk angry-monkey-noise tier reactions to high-level AI texts.
Would you mind elaborating on this? I am in the somewhat uncomfortable position of thinking that a) Superintelligence is probably a red herring, but b) AI is probably going to put me and most people I know out of a job in the near term, but c) not actually having much direct contact with AI to see what's coming for myself. Could you give some description of how AI fits into your life?
I use a coding program called Windsurf. It’s like a normal text editor but you can type “Lines 45-55 currently fail when X is greater than 5, please fix and flag the changes for review” or “please write tests for the code in function Y”. You iteratively go back and forth for a bit, modifying, accepting or rejecting changes as you go.
You’re a 3D artist, right? The thing I would keep my eye on is graphics upscaling as in this photorealistic Half Life clip. What they’ve done is take the base 1990s game and fed the video output into an AI filter to make it look like photorealistic video. VERY clunky: objects appear/disappear, it doesn’t preserve art style at all, etc. but I think if well done it could reverse the ps3-era graphics bloat that made AAA game creation into such a risky, expensive proposition.
Specifically, you would give a trained AI access to the base geometry of the scene, and to a base render with PS2-era graphics so it understands the intended art style, the feel of the scene, etc. Then the AI does the work of generating a PS6+ quality image frame with all the little details that AAA artists currently slave over, like the exact pattern of scratches on a door lock or whatever.
1. This militates against top-level AI copypasta. That doesn't develop personal relationships.
2. Highly unlikely that posting on the motte or talking to machines accomplishes either of these, so call it a wash. Recruiting for a cause is also against the rules, anyway.
3. Same as point 1. Precise knowledge and understanding usually comes from asking specific questions based on your own knowledge rather than what the LLM wants to know.
Your own reasons for posting here seem to suggest that there's no point in posting LLM content, and especially not as a top level post.
I have explained my reasons to engage with humans in principle, not in defense of my (R1-generated, but expressing my intent) post, which I believe stands on its own merits and needs no defense. You are being tedious, uncharitable and petty, and you cannot keep track of the conversation, despite all the affordances that the local format brings.
The standards of posting here seem to have declined substantially below X.
I think one should separate the technical problem from the philosophical one.
LLMs are increasingly intelligent, but still not broadly speaking as intelligent as the posters here. That is a technical problem.
LLMs are not human, and will never be human. You cannot have an AI 'community' in any meaningful sense. That is a philosophical problem.
If you care about the former, you should consider banning AI posts until they are at least as good as human posts. If the latter, you should ban AI posts permanently.
My impression is that pro-AI-ban comments are split between the two.
From one perspective: Words are words, ideas are ideas. A good argument is a good argument, regardless of the source. If the argument is not good, that's a technical problem.
That said, many of us here in practice have an anecdotal style of writing, because (a) we aren't actually rationalists and (b) few people worth talking to actually have the time and inclination to produce think-tank style pieces; obviously there is no value in reading about the experiences of something that has no experience. There is also less satisfaction in debating with a machine, because only one of you is capable of having long-term growth as a result of the conversation.
It's been tried; as I recall ~90% noticed, 10% argued with the AI, 100% were annoyed -- and the 'experiment' was probably a big reason for the ruling before us.
I think it's time to replicate it with a new generation of models.
Tell me, does R1 above strike you as "slop"? It's at least pretty far into the uncanny valley to my eyes.
I dunno -- like all models I've observed to date, it gives me weird tl;dr vibes after about four lines, so I either skim heavily or... don't read.
(For the record, your own posts -- while often even longer -- do not have the same effect. Although I'll confess to bailing on the odd one, in which case it tends to be more over lack of time than interest.)
For what it's worth, I agree with you, and will plead the case with the other mods, but I do have to stand by the majority decision if it goes against it.
I raised an eyebrow at your use of an R1 comment, but in principle, I'm not against the use of AI as long as it's not low effort slop, the poster makes an effort to fact check it, and adds on substantive commentary. Which I note you did.
I agree that we're at the point where it's next to impossible to identify AI-generated text when it's made with a minimum of effort. You don't even need R1 for that; Claude could pull it off, and I'm sure 4o can fool the average user if you prompt it correctly. That does require some effort, of course, and I'd rather this place not end up a corner of the dead internet, even if I can count on LLMs to be more exciting than the average Reddit or Twitter user. We hold ourselves to higher standards, and talking to an actual human is an implicit goal.
Of course, if a human is using said LLM and directing it actively, I don't strenuously object. I'm against low effort bot use, not high effort.
What's the value of a top-level comment by AI, though? And what is the value of the "original commentary" you gave? This is quite unlike Adam Unikowsky's use/analysis of hypothetical legal briefs and opinions.
Whatever value it innately has as a piece of writing, of course. For example, if the distinction between wheat- and rice-growing parts of China really exists, that's fascinating. Likewise, I never thought of the fact that Europe suffered the Black Plague while China remained saturated, and what effect that might have had on their respective trajectories.
My guess is that the specific statement -- that rice-farmers are more interdependent, holistic, less prone to creativity, etc., while wheat-farmers are the reverse -- comes from some highly cited papers by Thomas Talhelm. You might find similar speculation in previous decades about how rice-farming promotes a culture of hard work and incremental progress (etc.) compared to wheat farming, which is less rewarding per joule of human effort spent, invoked in a similar manner to how the Protestant ethic was used as a rationale for differences in development in European/Euro-descended countries.
Outside of that, there are definite stereotypes -- both premodern and modern -- about the differences between northern and southern Chinese, but they usually run in the vein that northerners are more honest and hardy and brash (and uncultured, etc.), while southerners are more savvy and shrewd (and more effete and cowardly, etc.)
(I make no comment on the validity of either.)
This is a partial hypothesis for the Great Divergence: the Black Death, plus other 14th-century wars and calamities, wiped out >33% of Europe's population, which led to a significant increase (almost a doubling?) in wages and the decline of feudalism. During this time, higher wages, lower rents, higher costs of trade (e.g. compared to intra-China trade), and other factors produced large-scale supply/demand disequilibria after the Black Death that increased the demand for labour-saving technology as well as the incentives for innovation from each class of society, e.g. from people no longer being serfs.
On the other hand, it would be negative EV for a Chinese merchant or industrialist -- who had lower labour costs to deal with and more efficient internal markets -- to spend a lot on innovation, when he could just spend more money on hiring more people. And this is before we add in things like the shift to neo-Confucianism in the Ming period, awful early-Ming economic policy, Qing paranoia, etc.
For what it's worth, I don't find this to be anywhere near a complete explanation. There is a corresponding divergence within Europe between countries that maintained that level of growth in per capita income and those that didn't. China also has had its share of upheavals and famines without a corresponding shift in this sense (although arguably none were as seismic population-wise as the Black Death was for Europe), and more recent reconstructions of historical Chinese wages do see them near their peak at the start of each dynasty and dropping off gradually as the dynasty goes on, which kinda confirms the supply/demand effect of reduced population on wages after social turbulence, but also doesn't seem to map neatly onto any bursts of innovation. Additionally, the period associated with rapid innovation in imperial China, the Tang-Song period, is associated with a population increase.
But even if it doesn't explain China, I think it at least explains the European story partially, about how potential preconditions for industrialisation and scientific development were met.
FWIW, if this rule is going to be enforced (which I am fine with), I do think it should be written down. And while I am at it, I think we're probably all smart enough here to understand the difference between having the AI write your posts for you and quoting something relevant or humorous that is AI-generated, but I think it would be helpful for the rule to say that rather than just "No AI Content" (unless the community finds even that objectionable, but I've never noticed anyone getting moderated for that or even irked by it). My .02.