
Culture War Roundup for the week of April 28, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Nate Silver just accidentally posted a link to an AI slop article. A quick delve into the article text makes it obvious that the contents were blatantly copy-pasted directly from ChatGPT output. Various GPT detectors agree, flagging it as slop with 100% confidence. Unfortunately, nobody in the replies seems to have noticed or cared.

I'm of course already used to my Google searches being clogged up by black-hat SEO slop, but I expected it to just live in the shadows, quietly collecting clicks and boosting PageRank. So it was sobering to see an AI-slop article get posted like that by someone I regard as at least somewhat intelligent.

What does this say about the world? Are normies, even somewhat intelligent ones, incapable of recognizing even the most obvious, stinky, smelly ChatGPT output? Or did hundreds of people read the headline and drop a snarky comment without a single one bothering to read the article? It's either a depressing anecdote about human nature and social media, or a depressing anecdote about the lack of intelligence of the average human.

Of course AI-slop grifters should be fedposted just like Indian call center scammers, but sometimes I can't help but feel like the victims deserved it. Then again, when they bother me and waste five seconds of my time, I am right back in fedposting mode.

Edit:

Since you idiots are out here defending the slop, these quotes are hallucinations:

“I get it,” Walz told the audience. “A lot of folks aren’t watching MSNBC. They’re watching ESPN or TikTok or just trying to make ends meet.”

“We need to reclaim who we are as a party of opportunity, of dignity, of everyday Americans,” Walz said. “If we leave that vacuum, someone like Donald Trump will fill it again.”

Here's the full recording of his talk and you can check the Youtube transcription: https://youtube.com/watch?v=MPt8V3MW1c4 And before you ask, the fake article specifically claims these fake quotes were said at his Harvard talk, not at some other time.

So again the AI put totally false words into somebody's mouth and you apologists are defending it.

I probably wouldn't have guessed that this article was almost purely generated by AI if I hadn't been primed on it beforehand. Even looking at it with that priming, I'm still not convinced it was a pure copy-paste GPT job, though it's certainly filled with phrasing that, having been primed, strikes me as coming from an LLM, such as "While some applauded the self-deprecating humor, others criticized the segment for reinforcing cultural stereotypes" or "As speculation mounts over the 2028 Democratic field, Walz offers a glimpse into his political philosophy for the years ahead." Is there any direct evidence of it being LLM-generated?

But more to the point, I don't see why most people would care if this was purely AI generated, other than perhaps this author Quincy Thomas's employers and his competitors in the journalism industry. Particularly for what seems intended to be a pretty dry news article presenting a bunch of facts about what some politicians said. This isn't some personal essay, long-form investigative report, or fiction (even for those I wouldn't care if they were purely LLM-generated as long as they got the job done, though I can see a stronger case for why it would matter there). This kind of article seems exactly like the kind of thing we'd want LLMs to replace, and I'd just hope we could get enough of a handle on hallucinations that we wouldn't even need a human like Quincy Thomas to verify that the quotes and descriptions of events actually matched reality before hitting "publish."

Once some fairly reputable news outlet gets sued for defamation for publishing a purely LLM-generated hallucination and failing to catch it with whatever safeguards are in place, I'd be interested to see how that plays out. At the rate things are going, I wouldn't be surprised if it happened in the next five years.

I don't see why most people would care if this was purely AI generated

Because it's blatantly false garbage.

a human like Quincy Thomas

Quincy Thomas is not a human. It's obviously a fake name for some third-world scam grifter who is shoveling out this garbage.

From a cursory search through the auto-generated transcript, it does seem to me like that quote was made up. That does seem worth caring about. It's too bad that it's not defamatory, since it probably won't trigger a lawsuit or other major controversy, but perhaps a controversy could be created if someone decided to publicize this.
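That kind of cursory search is easy to automate. A minimal sketch in Python: the transcript snippet, quotes, and normalization rules below are illustrative assumptions, not taken from the actual video, and auto-generated transcripts often differ in punctuation and casing, which is why the text is normalized before matching.

```python
import re

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace so that
    transcript formatting differences don't cause false negatives."""
    text = re.sub(r"[^\w\s]", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def quote_in_transcript(quote: str, transcript: str) -> bool:
    """Return True if the normalized quote appears verbatim in the
    normalized transcript."""
    return normalize(quote) in normalize(transcript)

# Hypothetical transcript text and candidate quotes for illustration.
transcript = "a lot of folks are not watching cable news these days"
quotes = [
    "A lot of folks are not watching cable news these days.",  # present
    "We need to reclaim who we are as a party of opportunity.",  # absent
]
for q in quotes:
    print(quote_in_transcript(q, transcript))  # True, then False
```

A verbatim-substring check like this only catches exact fabrications; a paraphrased quote would need fuzzier matching (e.g. token-overlap scoring) to flag.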

Seems like Yahoo's fact checking/editing department isn't built to handle its writers using LLMs. I still don't see why I would care about LLM usage if a journalism outlet had the proper controls for factual information. The problem isn't that it's AI generated, it's that it's false.

Seems like Yahoo's fact checking/editing department

I'm pretty sure that's not a thing -- it says at the bottom that the 'article' is reposted directly from "WhereIsTheBuzz.com" -- which looks about like what you'd expect. Highly unlikely that Yahoo is fact-checking anything; their role as a slop aggregator could use some more scrutiny, I guess, but this article doesn't seem much different from their standard run of human slop to me.