
Culture War Roundup for the week of August 4, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Look, I think it's quite clear that my statement about tigers was hyperbole. You seem like a perfectly nice guy; while I wouldn't jump into the ring to save you, I'd throw rocks (at the tiger) and call for paramedics.

That is the nice thing about being able to compartmentalize one's combative online persona from being an actually nice and easy-going person in reality. There are very few people I would actually stand by and watch die, and they're closer to Stalin than they are to people I disagree with on a forum for underwater basket weaving.

But when you imagine talking to your own grandmother - a perfect example of a novice user - what do you do?

If this were a perfectly realistic scenario, my conversation would go:

"What the fuck. Is that a ghost? I thought your ashes were somewhere in the Bay of Bengal by now."

Do you understand why I paraphrased what is usually a more nuanced, context-dependent conversation IRL? If my granny were actually alive, I would probably teach her how to use the voice mode in her native language and let her chill.

And I have to say, if I told you I'm not biased towards Teslas, Elon doesn't send me cheques, and in fact I just paid money for one, how wide would your eyes go as you attempted to parse that?

Uh? I don't know. If you have a reputation for doing that, I genuinely don't recall it. I am very active here, but I wouldn't remember that without actually opening your profile.

Noted. You won't take back either statement - I am still dumber than a parrot (given the retreat you have been on over the last few posts I guess that's score 1 for parrots?) and you still want to see me meet a tiger outside of its cage, but you would throw rocks at it.

I am familiar with hyperbole. I am also familiar with the mechanics of shaming. I think you are too, and you know that isn't a defence. Shame often uses hyperbole to express the level of emotion of the shamer and to trigger a more visceral reaction in the shamed. Can I start ending my arguments on the Motte with 'die in a fire' if I promise it's rhetorical?

On the topic of your grandma, you have my condolences. Retreating to literalism is just more condescension, though; it's not an argument I will engage with, particularly when I already noted the hypothetical nature of the exercise. I will simply point out that you had the opportunity to deploy your intern model in a hypothetical with a novice user and you refused - twice now.

You have not needed to argue any of this. You're clearly capable of nuance when you want to be - over the past day you've written however many words on MIAD in that other thread and also given me a detailed breakdown of how and why you'd throw rocks at a tiger. You chose, after explaining the superiority of the intern model, not to use it. After having the discrepancy pointed out, you chose again not to use it. You can't imagine using it because it does not work as a cognitive shortcut. Case closed. High five Sam Waterston. Created by Dick Wolf.

Lastly, my point about Tesla is that the fact that you are willing to pay for ChatGPT Plus is a mad defence against the claim that you are evangelising on its behalf. You don't need to pay someone to advertise your product if they are already paying you; that's advertising 101 - you let the principles of brand fusion and post-purchase rationalisation do their thing, eventually reinforced by the sunk cost fallacy. As these things go, it's closer to a confession than it is to a defence.

Dude. Do you lack a sense of humor? This isn't intended to be an insult, but I am genuinely confused. I clarified that quite a bit of my apparent hostility towards bird fanciers is a joke. I can get annoyed by certain types of people at times, but I don't wish death upon them. I also make it a general policy not to fistfight tigers on behalf of strangers; you've gotta be close family or a loved one to make me consider that. I do not think we're family, and I do not think we're sleeping together.

You're taking this to a place of literalism that's honestly baffling. "Can I start ending my arguments on the Motte with 'die in a fire' if I promise it's rhetorical?" No, because "die in a fire" is a bottom-tier, uncreative, stock internet insult. My tiger comment, while admittedly pointed, was at least bespoke. It was also clearly not meant to be serious; I enjoy making jokes. If it wasn't clear then, for the love of God I hope it's clear now. There's a difference between sharp, theatrical hyperbole meant to illustrate a point with some flavor, and just being generically hostile.

On the topic of my grandmother: you seem to think you've found some grand "gotcha." You haven't. You've simply discovered the concept of "scaffolding" in teaching.

Of course I wouldn't dump the entire "fallible but brilliant intern" model on a complete novice in one go. That's not how you explain anything complex. You break it down. The "genius with the world's worst memory" is a facet of the intern model.

I would tell the hypothetical grandma (hypothetical due to a paucity of my own):

  • This is an AI. It can talk just like a human, in text or speech. It can even do one-way video. (A grandma isn't using Ani from Grok; anime avatars aren't a concern.)

  • It is not actually a human. But you can mostly treat it as a human, if you keep in mind the following:

  1. You need to introduce yourself to it; it knows nothing about you. Think of it as an intern you just met.

  2. It doesn't remember previous conversations by default. There are exceptions, granny, but they're not too relevant here.

  3. It's really smart! But it's also forgetful and can make errors, so please double-check what it says for anything more important than the ideal dress for Bingo Night.

  4. It can and will flatter you - my, what lovely eyes you have, granny. Please be careful about whether it's agreeing with you because you're right or because it wants to please you.

  5. It is good at X, Y, and Z, and bad at... If you really need A or B, then maybe consider this funny little fella named Claude.

  6. More depending on the context.

That is a perfectly good framework. Now, compare that list of actionable, non-technical advice to the guidance offered by the "stochastic parrot" model. What would that list look like?

  1. This is a parrot.

  2. It just repeats things it's heard without understanding them.

  3. ...That's it. That's the whole model.

And this brings me back to your final, bizarre point about me paying for ChatGPT Plus. You think that's a "confession" of bias. I think it's the very foundation of my argument. You don't develop a nuanced, multi-part user model like the one I just laid out by casually playing with the free version. You develop it through deep, sustained use. I was running into the tool's limitations day after day and systematically figuring out the strategies that work. I was doing this well before it was cool.

Paying for Plus being a "confession" of bias is a wild misapplication of pop psychology. By that logic, no one who pays for a product can ever be a credible critic or analyst of it. The person who buys a Ford F-150 can't tell you about its turning radius? The person who subscribes to The Economist is just engaging in post-purchase rationalization when they recommend an article? It's absurd.

I pay for it because I use it extensively, both for work and for leisure. That heavy use is precisely why I have a well-developed opinion on its capabilities, its flaws, and the best mental models for using it effectively. It's a credential for my argument, not a disqualifier.

You say you're joking, and then you continue by explaining why you wouldn't intervene in another scenario where you imagine me in a cage with a tiger. You couch your "apparent hostility towards bird fanciers" in the dismissive phrase "quite a bit", leaving yourself wiggle room to continue thinking less of some - like me. Then you tell me, a stranger you have never met and never will meet, who lives on the other side of the world, that you don't actually wish me dead, implying that my concern is for my life, not the insults. Yeah, I know all the tricks, chum.

Do you want to know how I know? Because I used to prioritize my jokes over the rules of the motte. I learned the hard way, through multiple bans, that being clever is no excuse for hostility. And that hostility is often in the eye of the beholder no matter how you meant it to come across.

So where is this line? It's north of blatantly obvious, clichéd examples of comedic shaming like "die in a fire" - that much is clear - but apparently south of "I hope you get mauled by a tiger" and "you're dumber than a parrot". How about "I hope swarms of aphids crawl down your throat"? Or "I almost want to stick an iron hook up your nose and scrape out your brains, but I see there's no point", or maybe "scientists discovered a new sub-atomic particle on the edge of the gluon field - your worthless dick". I really need to know so I can go back to 'joking' people into silence. Either way, I'll be damned if I'm going to let a mod get away with it if I can't.

Now, onto your 'scaffolding'. What was it I said you'd have to tell your grandma about your intern?

You'd never actually saddle your grandmother with the mental load of dealing with an intern who is an amnesiac - and is also a compulsive liar who has mood swings, no common sense, and can't do math.

Huh, looks like I discovered the concept a while ago. And what 'scaffolding' did you just invent? A list of rules that describes an amnesiac, unreliable, potentially flattering (read: lying) intern who is bad at certain tasks.

You are still deliberately missing the fundamental concept. Let me try one last time. Cognitive. Shortcut. The goal is to give a novice a powerful, easy-to-remember tool to 'shortcut', if you will, their biggest barrier: anthropomorphism. Your scaffolding is just a more complicated version of my model. In fact, you had to gut your own metaphor (the fallible intern, closer to a human than a parrot) and adopt the primary principle of mine (it's not human) to make it work. It's funny how the grandmas and grandpas I've taught my 'bad' model to have managed to wrap their heads around it immediately - and have gone on to exceed the AI skills of many of my techbro friends.

And as for armchair psychology, you brought up your financial relationship with OpenAI as proof you aren't biased, that you aren't defending the public image of LLMs. I just pointed out how flawed that argument is by explaining basic psychological principles like the sunk cost fallacy. I honestly cannot believe a trained psychiatrist is claiming that paying for something is proof they aren't biased towards it. It's beyond ridiculous.

And of course paying customers can be credible reviewers. I used to be one for a living. The site I worked for refused to play the '7 out of 10 is the floor' game, so despite being part of the biggest telecommunications network in the country, we had to pay for Sega and Xbox Studios games to review them. But we made an effort to check our biases, with each other and with our readers. And more importantly, this isn't a product review; this is a slap fight about which mental model is best for novice AI users. You are heavily invested in your workarounds, I understand. I am heavily invested in mine. And while I haven't been heavily into it since before it was 'cool', I did:

  1. Jump in with both feet. I use Gemini 2.5 Pro, which I pay for, every day. I find its G Suite integration to be an incredible efficiency enhancer.

  2. Expand beyond using a single model - I have API credit for DeepSeek, Gemini, Claude, Kimi, ChatGPT, and Grok. I could say I use them every day too, except I'm currently away from my computer.

  3. Develop a 'nuanced, multi-part user model' of my own before you did, with greater clarity.

My amusement at your condescension aside, that makes me biased too. But it also gives me the perspective to know that 'thinking like a GPT power user' isn't a universal solution. And it's working with others that gives me the perspective to know that a simple, portable mental model like the parrot is far more useful for novices across all platforms than a complex personality profile for just one.

I suspect none of what I just said matters, though. Much like nothing I've said matters. You aren't arguing to enlighten; you are arguing to win the argument. That's not my assessment, in case you think this is more of my pop psychology; it was the assessment Gemini gave me before my last post, when I put our conversation into it and asked how I could possibly get my point across when you hadn't seemed to understand anything I'd said already. I should have listened.

This conversation is, quite clearly, not going anywhere useful at this point. That is, I'm happy to acknowledge, partly my fault. I apologize for that. I genuinely do not consider you the modal case of the Parrot-apologist I dislike.

I will bow out; I think I've said pretty much everything I can usefully say on the topic. I hope you have a nice day, and if you think your explanation works, well, it very well might (for the purposes of clueing noobs in). At the end of the day, it seems that even if we have very significant differences of opinion on the philosophy of LLMs, the actual conversations necessary to explain them to new users are, in fact, longer than calling them interns vs parrots. We both use multiple caveats and explainers, which, as far as I can tell, end up not that far apart in practice.

I genuinely do not consider you the modal case of the Parrot-apologist I dislike.

Thank you for saying so. I would say this conversation stopped going anywhere a while ago, and I think our philosophies on AI are much more aligned than you think. I'm not trying to start anything again, but I won't let philosophy get in the way of practicality when I don't think there is a moral component involved - which is how I see this situation.