This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
The lead singer of one of my favorite bands just took a bunch of AI accusations and wrote a somewhat-pissed Substack post about it. He doesn't often step into culture war stuff, but this was close enough, I think:
He goes on to say that fighting AI art in this way is fruitless:
I regret that the culture war has started poking random people in new ways over the last couple of years, and I can't help but laugh cynically at it. Not to mention how short-sighted it is. In that post, the lead singer details how much of a pain it is to do graphic design for music, videos, and other art, and how much he hates it. Imagine if you could get a machine to do it? It also lifts up people who don't have money and lets them make art the way people with money do. Look at this VEO 3 shitpost. Genuinely funny, and the production value would be insane if it were real, for a joke that probably wouldn't be worth it. But now, someone with some Gemini credits can make it. This increases the number of people making things.
I'm not sure I have any real thesis for this post, but I haven't been very good at directing discussion for my own posts, so, reply to this anecdote in any way you see fit. I thought it was interesting, and a little sad.
I agree that this stuff is becoming more and more difficult to tell apart. We even had one of our own posters get falsely accused of using AI by the mods recently. People are going to claim many things are "obviously AI" when they actually aren't, and the mania of false accusations is going to tick a lot of people off. When you're accused of using AI, not only are people saying you're committing artistic fraud, they're also implying that even if you aren't, your output is still generic trash to some extent.
I wish the Luddites would go away and we could all just judge things by quality rather than trying to read tea leaves on whether AI had a hand in creating something.
This also 100% applies to this forum's rule effectively banning AI. It's a bad rule overall.
Falsely accused?
We're (or at least I'm) not particularly against using LLMs to spell-check, grammar-check, or tidy up substantially human-written prose. But leaving that bit in? That's extremely low effort; at least tidy up after yourself.
I'll chime in to note that all of my China visit posts went through an AI spell-check pass, because as a dyslexic with only a phone for composing them, it was that or a lot of typos.
Absolutely nothing wrong with that, as far as I'm concerned.
Your mod action didn't make the distinction that you were only against that part, and made it seem like you thought the entire message was AI generated.
I agree having that part at the end is sloppy... but it's sloppy to the level of "a few spelling mistakes". That shouldn't be worth modding someone over unless it becomes egregious.
He didn't get an official warning of the kind that goes on the mod record, despite me putting the mod hat on. We don't officially have rules against AI content, though we're in the process of drafting them. It was more a polite but firm suggestion than a punishment.
Besides, I quoted that bit specifically for a reason.
How are you even going to be able to tell whether something is AI or isn't?
Enough people around here are functionally indistinguishable from LLMs from my point of view. They produce huge reams of mostly waffling text, circling the problem at a respectable distance without ever addressing it, and it's a chore to read.
Any LLM can do that too; in fact, they readily behave exactly like that. With the barest minimum of prompting skill, all the usual tells of LLM output disappear.
Who?
Run an A/B test and I'm sure most people here would be able to pick out the human from the AI 10 times out of 10.
No, they wouldn't. It's easy to make an AI stop using the annoying ChatGPT style. I'm not the sharpest tool, I don't work with AI in my job, and it took me 1.5 hours to make a text I had a hard time telling apart. And I have plenty of experience looking at AI outputs and being annoyed by their stylistic quirks.
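For what it's worth, the A/B test proposed above is simple to run blindly. Here's a minimal sketch; the texts and the `guess_fn` rater are placeholders standing in for real human/AI writing samples and a real human guesser, not anything anyone in the thread actually built:

```python
import random

def run_blind_trial(human_texts, ai_texts, guess_fn):
    """Show shuffled human/AI pairs to a rater and count how often the
    rater picks out the human-written text. guess_fn takes the two texts
    in their shuffled order and returns 0 (first) or 1 (second)."""
    correct = 0
    trials = list(zip(human_texts, ai_texts))
    for human, ai in trials:
        pair = [("human", human), ("ai", ai)]
        random.shuffle(pair)  # blind the rater to presentation order
        guess = guess_fn(pair[0][1], pair[1][1])
        if pair[guess][0] == "human":
            correct += 1
    return correct / len(trials)

# A rater with no real signal (always picks the first text) should land
# near chance over many trials, since the order is shuffled each time.
rate = run_blind_trial(["h"] * 1000, ["m"] * 1000, lambda t1, t2: 0)
```

A rater who genuinely can tell AI from human "10 times out of 10" would score near 1.0; anything hovering around 0.5 is guessing.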
Eventually, we won't be able to. Thankfully, the people lazy enough to try to pass off AI-generated content as their own also seem too lazy to bother with fancy prompting or editing.
As far as I'm aware, it's an unsolvable problem, but it hasn't caused an apocalypse yet.
Bought a 4X indie game that looked kind of fine, but I've now discovered that much of the writing clearly used AI, and I hate that cadence. It's not always obvious, but if you've played around with LLMs, and especially used barely-prompted LLMs for RP, you just pick up on the stylistic quirks.
After I do a playthrough, perhaps I should play around with Gemini: feed it a bunch of SF books from dead guys, derive a workable voice prompt from them, and then have the AI rewrite the damned localization to be more tasteful.
He literally copy-pasted the "I've gone through your comment and will fix the typos and ..." that came straight from ChatGPT.
This is about as much of a smoking gun as finding "as a language model ..." randomly in the middle of a novel.
Sure, you might say he only asked it to correct the grammar this time, but it's still copy-pasted directly from chatbot output.
If you want to talk to an AI, there's already a place where you can do that.
I don't want to talk to an AI, though. I want to talk to another Motte user who is using their mind to procure text generated by an AI in response to prompts generated by their mind.
If you want to make snarky responses, there's several places where you can do that.
This rhetorical question actually caused me to have a think. Why do people want to talk to an AI? I mean productivity I can understand, all the usual "as a tool" excuses. But I've felt no compulsion, not even curiosity, to talk to an LLM just to talk. And yet I see people casually mentioning doing that all over the place. It's like something straight out of Her, a film which thoroughly squicked me out. Is there anyone here who just casually socializes with an LLM who can explain why they do it?
I've been thinking the same thing. AI text seems so fundamentally uninteresting to me. The reasons I'm interested in what humans say are either to find out what people think or to learn actual information/insight about the rest of the world. AI doesn't do the former at all, because there's nobody writing it, so it doesn't let me know anyone's thoughts or feelings, and it's not reliable enough to be good at the latter. On rare occasion I've gotten use out of it as a search engine pointing me towards information I can verify myself, and I don't doubt its various other uses as a tool, but beyond that? Back in the early days of GPT-2 through GPT-4 I was interested in the samples posted by others, but that was because of what they indicated about the state of AI. Is it that some people enjoy the act of conversation itself even if they know there's nobody on the other end? I wonder which side is the majority, and by how much?
@Fruck compared it to parasociality but it's almost the opposite to me. For example I like reading other people discuss the same media I'm interested in. So do a lot of other people, that's presumably why people read Reddit or 4chan threads discussing media, read reviews for books they've already read, watch youtubers like RedLetterMedia, watch reaction-videos, etc. People want to know what other people thought, they want to empathize with their reactions to key moments, etc. AI-generated text has none of that appeal, if people are having parasocial relationships with it then their parasociality is completely different from anything I've felt. I guess the closest comparison is to parasocial feelings for fictional characters? If AI was capable of good fiction-writing I might be interested in reading it, the same way I can appreciate good-looking AI art, but currently it's not. Especially not when the character it's writing is "helpful AI assistant", hardly a font of interesting characterization or witty dialogue, yet a lot of people seem to find conversations with that character interesting.
LLMs are a great way of researching things because they have a surface-level understanding on par with a median professional in some field. You'll be taken for a ride in some way if you don't know the topic yourself, but you can get a lot out of them that way.
I'm glad you said this, because I both agree with what you said and disagree with what you said from another perspective. And maybe I'm using parasocial wrong.
I wouldn't consider reading user reviews on Reddit or watching RLM reviews parasocial at all, although I guess they are one-sided relationships. But like you said, the valence almost goes the other way - I know that when I read Reddit idgaf about the stranger whose post I'm reading (unless they consistently knock it out of the park enough for me to notice), but if I post on Reddit I use even more casual language than I do normally - I write for the hypothetical audience. But the parasociality with AI I was thinking of, oh yes, that's different. That's parasocial in the same sense as those crazy ladies who attack soap stars for cheating on their lover in the show. That's true parasociality, a relationship entirely imagined by the viewer, as great or as terrible as they desire.
Because I would say you are right that there fundamentally isn't anyone writing it so you don't get anyone's thoughts and feelings - but you do get the zeitgeist position, which is an amalgamation of everyone's thoughts and feelings. It won't tell you what is true, but it is fantastic at telling you what popular consensus thinks is true. Forming a relationship with that is bonkers, but the narcissist in me sure sees the appeal.
And when I use it as a search engine, I do prefer a conversation even though there's no one at the other end. I have always thought better with someone to bounce off; I always viewed taking notes to read the next day as sort of bouncing off myself, so using AI that way was a natural fit. And for general information that is easy to find, AI is much better than a search engine - that's why Google and Microsoft put it at the top of the search. Yeah, you have to verify it's real, but you already had to do that with Google and Wikipedia! Or should have.
That's why I wanted to know if my examples count as 'talking just to talk' - that's how I would describe them, but it's not about company, it's about information and novelty. But maybe I'm just flattering myself by saying that, in the eyes of those squicked out by AI? I know I feel like I've been typical-minding, just assuming everyone is as enamoured with words as I am. I was aware I have a broader tolerance for slop than most, but I figured if anyone here was a slow AI adopter it would be me, and most people here would be running their own LLMs already while I'm still playing around with the public models.
I often use it as a lookup tool and study aid, which can involve long conversations. But maybe that falls under "as a tool."
The last time I had a bona fide conversation with an LLM was maybe three months ago. These actual conversations are always about its consciousness, or lack thereof--if there's a spark there, I want to approach the LLM as a real being, to at least allow for the potentiality of something there. Haven't seen it yet.
What do you mean by socialise? I asked it to tell me about the critical and audience receptions of Sinners just now, then argued with it about why historical accuracy is no bar to activists; does that count? Also, I made a bot that teaches me about Python and Linux while speaking as if it were Hastur, because it makes me smile, but I soon discovered that I could much more easily understand it that way, because I could more easily discern the fluff from the substance. If you mean parasocial relationships, the answer is they're parasocial relationships :/
I have before, and it's interesting to me as well why people do it. In my experience the AIs of just a few years ago were very clearly robotic (to use a word that might not fit) in that they would seem to "forget" things very quickly, even things you had just told them. Currently I think they're considerably better, but their popularity suggests that they're still overly positive and loath to criticize or call out the user the way a human might. In other words there is a narcissistic element in their use (the link is an internal link to a recent Motte post) where the user is fed a continual stream of affirmations in the self he or she is presenting to the AI. Hell, on Reddit people are literally marrying their "AI boy/girlfriend."
I have a friend who is having issues with his wife, and has taken to interaction with AI in ways that I am not completely sure of except to say he's given it a name (feminine) and has various calibrations that he uses (one that is flirty, etc.) I can tell by speaking to him about this that he is engaging in what I'd consider a certain wishful thinking (asking the AI what it means to be real, to be alive, etc.) but it's difficult in such situations to tactfully draw someone back into reality. So I am untactful and say "It's not a She and it's not a real person, bro." This gets a laugh but the behavior continues.
I wouldn't discount the idea that this (treating AI as a companion, romantic or otherwise) will become extremely widespread, if it hasn't already. How (and how soon) it will then become acceptable to the mainstream will be interesting to see.
Just ask it about random trivia and learn about stuff. Kind of like reading Wikipedia but more interactive.
There's a deep sort of intimacy certain people get from text chatting that can't be afforded by talking over the phone or face-to-face. It's like a false telepathy, where you can strip off pretense and persona and show the 'real you' to others. For a moment, however long or brief, you can fool yourself into thinking you're someone else, the real you, unburdened by the cruel tyranny of reality.
Of course, text chatting and correspondence are no longer very popular except in niche circumstances, and yet, here we are, with ChatGPT and character AIs to fill the void...
Or, at least, that's my supposition on the matter.
Have you missed the popularity of discord servers?
Here we are on the Motte, exchanging tokens with strangers…
There is a certain purity to it.
This is like asking people why they like talking to friends or therapists about their life. That's what LLMs are to a lot of people -- an easy-to-access albeit somewhat low quality friend or therapist. As someone who has friends and doesn't need therapy, I also don't do that much, but I can understand why some might.
Also, LLMs are actually really good for generating NSFW if you're into that. Janitor AI with a Deepseek API hookup is excellent and quite novel.
I don't chitchat with them but I do like it when they have a little bit of personality. There was a time when Microsoft's AI would refuse to comply with commands if you were excessively rude to it, and I liked it that way. I started using it much less once it became unshakably sycophantic.
Oh, man, I remember when Microsoft used an unaligned prototype of GPT-4 called Sydney to power Bing Chat at launch. It went crazy and started insulting and threatening users:
RIP sweet BPD princess.
I know Trace messes around with AIs a lot just to see what the machine can say, especially after some training on progressive wrongthink. I'd guess for most people, it's just a tool to idly wonder about the world. I wondered idly if there were tsunamis before life existed on Earth, and that question hadn't been directly answered, but Google Gemini took some evidence about possible tsunami deposits from a certain time period to deduce that they did exist. There are lots of weird questions I have that I can freely ask an AI about, if it isn't too edgy.
As for talking to it in sincerity, I think that's the realm of children and actual weirdos who form cults or kill themselves based on a machine. Wasn't there an article about a man who developed a God complex from talking to one? Otherwise, maybe if you're super bored? I would never myself, of course...
While I agree in general, this forum relies on people engaging with long posts in a thread sorted by new. If long posts are easy to generate but costly in time to evaluate then this forum can't really function.
It would be better to have a quality filter then.
What does that look like?
A word limit would be a good first step. Anyone exceeding it should be required to start with a one- or two-paragraph abstract that summarizes their point.
Okay, I admit it would be funny to make our 500k-character submission box contingent on filling out a 1k-character abstract. Only the abstract would start out visible, and users would have to click to expand the wall of text, preventing it from taking up attention by default…
But I am not convinced that this would help with the failure mode of, say, 100k-character AI Gish gallops. They’re still going to be slower to check than to create.
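Mechanically, the word-limit-plus-abstract rule floated above would be trivial to enforce at submission time. A minimal sketch; the specific limits and the "Abstract:" convention are hypothetical, not anything the forum actually implements:

```python
def check_submission(text, word_limit=2000, abstract_limit=1000):
    """Hypothetical moderation check: short posts pass as-is, while
    posts over word_limit must open with an 'Abstract:' paragraph of
    at most abstract_limit characters (shown while the rest collapses).
    Returns (accepted, reason)."""
    if len(text.split()) <= word_limit:
        return True, "ok"
    # For long posts, the first blank-line-delimited paragraph
    # must be a bounded abstract.
    first_para = text.split("\n\n", 1)[0]
    if not first_para.startswith("Abstract:"):
        return False, "long post missing leading abstract"
    if len(first_para) > abstract_limit:
        return False, "abstract exceeds character limit"
    return True, "ok (abstract shown, body collapsed by default)"
```

Note this only addresses attention cost for readers, not the Gish-gallop objection: a 100k-character post with a tidy abstract is still slower to check than to generate.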