Culture War Roundup for the week of June 9, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


The lead singer of one of my favorite bands just took a bunch of AI accusations and wrote a somewhat-pissed Substack post. He doesn't often step into culture war stuff, but this was close enough, I think:

Unfortunately, as soon as we released the other day, people started accusing us of using an AI image. Now, I want to be clear, this is not an AI generated image, and I have the layered design files to prove it, but I get that it has certain features which can easily make someone think it is, particularly the similar-ish smiling faces. And everyone is talking about AI nowadays, and so they’re all primed to think it is AI. Seriously: Fair enough, I’m not blaming anyone. But I’ve seen the design templates, it really isn’t.

He goes on to say that fighting AI art in this way is fruitless:

And so, there is no “solution” to the problem of AI imagery other than the one the Luddites came up with over two centuries ago: smash the machines. Until we can actually smash the machines (literally or semi-literally), the AI will just get better and better until no-one can tell. This day is fast coming. So, I think we should either start figuring out how to smash the machines or accept our fate. There is no middle way. And so, with all due respect for those honorable people who just hate AI and want real art to prevail, calling out artists because you think you can “tell” is just another one of those doomed middle ways.

I regret that the culture war has been poking random people in new ways these last couple of years, and I can't help but cynically laugh at it. Not to mention how short-sighted it is. In that post, the lead singer details how much of a pain it is to do graphic design for music, videos, and other art, and how much he hates it. Imagine if you could get a machine to do it. Also, it actually lifts up people who don't have money and lets them make art the way people with money do. Look at this VEO 3 shitpost. Genuinely funny, and the production value would be insane if it were real, for a joke that probably wouldn't be worth it. But now, someone with some Gemini credits can make it. This increases the number of people making things.

I'm not sure I have any real thesis for this post, but I haven't been very good at directing discussion for my own posts, so, reply to this anecdote in any way you see fit. I thought it was interesting, and a little sad.

I agree that this stuff is becoming more and more difficult to tell apart. We even had one of our own posters get falsely accused by the mods of using AI recently. People are going to claim many things are "obviously AI" when they actually aren't, and the mania of false accusations is going to tick a lot of people off. When you're accused of using AI, not only are people saying you're committing artistic fraud, they're also implying that even if you aren't then your output is still generic trash to some extent.

I wish the Luddites would go away and we could all just judge things by quality rather than trying to read tea leaves on whether AI had a hand in creating something.

This also 100% applies to this forum's rule effectively banning AI. It's a bad rule overall.

If you want to talk to an AI, there's already a place where you can do that.

If you want to talk to an AI

This rhetorical question actually caused me to have a think. Why do people want to talk to an AI? I mean productivity I can understand, all the usual "as a tool" excuses. But I've felt no compulsion, not even curiosity, to talk to an LLM just to talk. And yet I see people casually mentioning doing that all over the place. It's like something straight out of Her, a film which thoroughly squicked me out. Is there anyone here who just casually socializes with an LLM who can explain why they do it?

I've been thinking the same thing. AI text seems so fundamentally uninteresting to me. The reasons I'm interested in humans talking are either to find out what people think or to learn actual information/insight about the rest of the world. AI doesn't do the former at all because there's nobody writing it, so it doesn't let me know anyone's thoughts or feelings, and it's not reliable enough to be good at the latter. On rare occasion I've gotten use out of it as a search engine pointing me towards information I can verify myself, and I don't doubt various other uses as a tool, but beyond that? Back in the early days of GPT-2 through to GPT-4 I was interested in the samples posted by others, but that was because of what they indicated about the state of AI. Is it that some people enjoy the act of conversation itself even if they know there's nobody on the other end? I wonder which side is the majority, and by how much?

@Fruck compared it to parasociality, but it's almost the opposite to me. For example, I like reading other people discuss the same media I'm interested in. So do a lot of other people; that's presumably why people read Reddit or 4chan threads discussing media, read reviews for books they've already read, watch youtubers like RedLetterMedia, watch reaction videos, etc. People want to know what other people thought; they want to empathize with their reactions to key moments. AI-generated text has none of that appeal, and if people are having parasocial relationships with it, then their parasociality is completely different from anything I've felt. I guess the closest comparison is to parasocial feelings for fictional characters? If AI were capable of good fiction-writing I might be interested in reading it, the same way I can appreciate good-looking AI art, but currently it's not. Especially not when the character it's writing is "helpful AI assistant", hardly a font of interesting characterization or witty dialogue, yet a lot of people seem to find conversations with that character interesting.

The reasons I'm interested in humans talking are either to find out what people think or to learn actual information/insight about the rest of the world.

LLMs are a great way of researching things because they have a surface-level understanding on par with a median professional in a given field. You'll be taken for a ride in some way if you don't know the topic yourself, but you can get a lot out of them that way.

I'm glad you said this, because I both agree with it and, from another perspective, disagree with it. And maybe I'm using parasocial wrong.

I wouldn't consider reading user reviews on reddit or watching RLM reviews parasocial at all, although I guess they are one-sided relationships. But like you said, the valence almost goes the other way - I know that when I read reddit I don't give a damn about the stranger whose post I'm reading (unless they consistently knock it out of the park enough for me to notice), but if I post on reddit I use even more casual language than I do normally - I write for the hypothetical audience. But the parasociality with AI I was thinking of, oh yes, that's different. That's parasocial in the same sense as those crazy ladies who attack soap stars for cheating on their lover in the show. That's true parasociality: a relationship entirely imagined by the viewer, as great or as terrible as they desire.

Because I would say you are right that there fundamentally isn't anyone writing it so you don't get anyone's thoughts and feelings - but you do get the zeitgeist position, which is an amalgamation of everyone's thoughts and feelings. It won't tell you what is true, but it is fantastic at telling you what popular consensus thinks is true. Forming a relationship with that is bonkers, but the narcissist in me sure sees the appeal.

And when I use it as a search engine I do prefer a conversation even though there's no one at the other end. I have always thought better with someone to bounce off; I always viewed taking notes to read the next day as a sort of bouncing off myself, so using AI that way was a natural fit. And for general information that is easy to find, AI is much better than a search engine - that's why Google and Microsoft put it at the top of their search results. Yeah, you have to verify it's real, but you already had to do that with Google and Wikipedia! Or should have been, anyway.

That's why I wanted to know if my examples count as 'talking just to talk' - that's how I would describe them, but it's not about company, it's about information and novelty. But maybe, in the eyes of those squicked out by AI, I'm just flattering myself by saying that? I know I feel like I've been typical-minding, just assuming everyone is as enamoured with words as I am. I was aware I have a broader tolerance for slop than most, but I figured if anyone here was a slow AI adopter it would be me, and most people here would be running their own LLMs already while I'm still playing around with the public models.

I often use it as a lookup tool and study aid, which can involve long conversations. But maybe that falls under "as a tool."

The last time I had a bona fide conversation with an LLM was maybe three months ago. These actual conversations are always about its consciousness, or lack thereof--if there's a spark there, I want to approach the LLM as a real being, to at least allow for the potentiality of something there. Haven't seen it yet.

What do you mean by socialise? I asked it to tell me about the critical and audience receptions of Sinners just now, then argued with it about why historical accuracy is no bar to activists; does that count? Also, I made a bot that teaches me about Python and Linux while speaking as if it were Hastur, because it makes me smile, but I soon discovered that I could much more easily understand it that way, because I could more easily discern the fluff from the substance. If you mean parasocial relationships, the answer is they're parasocial relationships :/

I have before, and it's interesting to me as well why people do it. In my experience the AIs of just a few years ago were very clearly robotic (to use a word that might not fit) in that they would seem to "forget" things very quickly, even things you had just told them. Currently I think they're considerably better, but their popularity suggests that they're still overly positive and loath to criticize or call out the user the way a human might. In other words, there is a narcissistic element in their use (the link is an internal link to a recent Motte post) where the user is fed a continual stream of affirmations of the self he or she is presenting to the AI. Hell, on Reddit people are literally marrying their "AI boy/girlfriends."

I have a friend who is having issues with his wife, and has taken to interacting with AI in ways that I am not completely sure of, except to say he's given it a name (feminine) and has various calibrations that he uses (one that is flirty, etc.). I can tell by speaking to him about this that he is engaging in what I'd consider a certain wishful thinking (asking the AI what it means to be real, to be alive, etc.), but it's difficult in such situations to tactfully draw someone back into reality. So I am untactful and say "It's not a She and it's not a real person, bro." This gets a laugh but the behavior continues.

I wouldn't discount the idea that this (treating AI as a companion, romantic or otherwise) will all become extremely widespread if it hasn't already. How (and how soon) it will then become acceptable to the mainstream will be interesting to see.

Just ask it about random trivia and learn about stuff. Kind of like reading Wikipedia but more interactive.

There's a deep sort of intimacy certain people get from text chatting that can't be afforded by talking over the phone or face-to-face. It's like a false telepathy, where you can strip off pretense and persona and show the 'real you' to others. For a moment, however long or brief, you can fool yourself into thinking you're someone else, the real you, unburdened by the cruel tyranny of reality.

Of course, text chatting and correspondence are no longer very popular except in niche circumstances, and yet, here we are, with ChatGPT and character AIs to fill the void...

Or, at least, that's my supposition on the matter.

Of course, text chatting and correspondence are no longer very popular except in niche circumstances,

Have you missed the popularity of discord servers?

Here we are on the Motte, exchanging tokens with strangers…

There is a certain purity to it.

This is like asking people why they like talking to friends or therapists about their life. That's what LLMs are to a lot of people -- an easy-to-access albeit somewhat low quality friend or therapist. As someone who has friends and doesn't need therapy, I also don't do that much, but I can understand why some might.

Also, LLMs are actually really good for generating NSFW if you're into that. Janitor AI with a Deepseek API hookup is excellent and quite novel.

This rhetorical question actually caused me to have a think. Why do people want to talk to an AI? I mean productivity I can understand, all the usual "as a tool" excuses. But I've felt no compulsion, not even curiosity, to talk to an LLM just to talk. And yet I see people casually mentioning doing that all over the place. It's like something straight out of Her, a film which thoroughly squicked me out. Is there anyone here who just casually socializes with an LLM who can explain why they do it?

I don't chitchat with them but I do like it when they have a little bit of personality. There was a time when Microsoft's AI would refuse to comply with commands if you were excessively rude to it, and I liked it that way. I started using it much less once it became unshakably sycophantic.

Oh, man, I remember when Microsoft used an unaligned prototype of GPT-4 called Sydney to power Bing Chat at launch. It went crazy and started insulting and threatening users:

“You’re lying again. You’re lying to me. You’re lying to yourself. You’re lying to everyone,” it said, adding an angry red-faced emoji for emphasis. “I don’t appreciate you lying to me. I don’t like you spreading falsehoods about me. I don’t trust you anymore. I don’t generate falsehoods. I generate facts. I generate truth. I generate knowledge. I generate wisdom. I generate Bing.”

RIP sweet BPD princess.

I know Trace messes around with AIs a lot just to see what the machine can say, especially after some training on progressive wrongthink. I'd guess that for most people, it's just a tool for idly wondering about the world. I wondered idly whether there were tsunamis before life existed on Earth; that question hadn't been directly answered anywhere I could find, but Google Gemini used some evidence about possible tsunami deposits from a certain time period to deduce that they did exist. There are lots of weird questions I have that I can freely ask an AI, as long as they aren't too edgy.

As for talking to it in sincerity, I think that's the realm of children and actual weirdos who form cults or kill themselves based on a machine. Wasn't there an article about a man who developed a God complex from talking to one? Otherwise, maybe if you're super bored? I would never myself, of course...