Culture War Roundup for the week of April 28, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Nate Silver just accidentally posted a link to an AI slop article. A quick delve into the article text makes it obvious that the contents were blatantly copy-pasted directly from the output of ChatGPT. Various GPT detectors agree, giving it a solid 100% confidence of being slop. Unfortunately, nobody in the replies seems to have noticed or cared.

I'm of course already used to my Google searches being clogged up by black-hat SEO slop, but I expected it to just live in the shadows, quietly collecting clicks and boosting PageRank. So it was sobering to see an AI slop article get posted like that by someone I regard as at least somewhat intelligent.

What does this say about the world? Are normies, even somewhat intelligent ones, incapable of recognizing even the most obvious stinky smelly ChatGPT output? Or did hundreds of people read the headline and drop a snarky comment, with not a single one bothering to read the article? It's either a depressing anecdote about human nature and social media, or a depressing anecdote about the lack of intelligence of the average human.

Of course AI slop grifters should be fedposted just like Indian call center scammers, but sometimes I can't help feeling like the victims deserved it. Then, when they bother me and waste five seconds of my time again, I am right back in fedposting mode.

Edit:

Since you idiots are out here defending the slop, these quotes are hallucinations:

“I get it,” Walz told the audience. “A lot of folks aren’t watching MSNBC. They’re watching ESPN or TikTok or just trying to make ends meet.”

“We need to reclaim who we are as a party of opportunity, of dignity, of everyday Americans,” Walz said. “If we leave that vacuum, someone like Donald Trump will fill it again.”

Here's the full recording of his talk, and you can check the YouTube transcription: https://youtube.com/watch?v=MPt8V3MW1c4 And before you ask: the fake article specifically claims these fake quotes were said at his Harvard talk, not at some other time.

So again the AI put totally false words into somebody's mouth and you apologists are defending it.

Hmm, is this a signal that we can finally go back to a time when a wider populace puts trust in specific media institutions, and rewards ($$$) them for earning that trust? Can AI be the death knell of the mAiNsTrEaM mEdIa snark? Can we start being elitist about media institutions that don't utilize AI slop, regardless of their political slant? Surely articles written by an actual human, no matter their political bias, are universally better than AI slop of any particular bias? Can't we all agree on that across the political spectrum?

That Adam Silver posted an article from Yahoo Entertainment damages my base level of respect for him, regardless of the content of the article. The 90s version of this is like picking up a magazine from the grocery store checkout[1] and trying to form a cogent political argument based on its cover article. I guess that makes X the backyard grillout: "Dave69420 told me that the Clintons had Vince Foster killed".

  • [1] Curiously, though, I haven't actually seen magazines where I buy groceries these days, but they're still at CVS / Walgreens and other drug stores.

Surely articles written by an actual human, no matter their political bias, are universally better than AI slop of any particular bias? Can't we all agree on that across the political spectrum?

Obviously not, or you wouldn't be making an appeal to elitism as opposed to popular consumption, i.e. the numerically broader basis from which any 'we all' consensus derives.

The NPC (non-player character) meme arose during the first Trump administration precisely by noting the formulaic and non-introspective nature of a good deal of partisan discourse. The belief that AI outputs would be of equivalent or even higher quality than human writers at election propaganda has been the basis of AI election-interference concerns. The market impacts of generative AI have weakened the bargaining position of creative types ranging from Hollywood writers' guilds to Patreon porn makers. It is all slop. Non-slop is the exception, regardless of source.

Obviously not, or you wouldn't be making an appeal to elitism as opposed to popular consumption, i.e. the numerically broader basis from which any 'we all' consensus derives.

I make the appeal to elitism because I don't think popular consumption has shown any evidence of being capable of fighting against manufactured consent. Unless you think otherwise? Personally: I'm making the appeal because I want to live in a world where publishing AI slop is universally seen as being as low-quality as the content of the 90s conspiracy magazines at the grocery store checkout (National Enquirer), and where evidence of a media institution using AI slop creates scandals large enough to cause executives to resign. Personally: AI-hallucinated quotes are worse than fabricated quotes, because the former masquerades as journalism whereas the latter is just easily-falsifiable propaganda.

The belief that AI outputs would be equivalent or even higher quality than human writers at election propaganda has been the basis of AI election interference concerns.

I actually haven't seen much in the way of "AI election interference concerns" specifically. There's been a lot of noise around the potential for deepfakes to sway an election, but so far no smoking gun has been brought to my attention. On the left, I don't think people distinguish much between someone blindly consuming Fox News opinion propaganda and someone blindly consuming X AI bot propaganda (or MSNBC and Reddit, if you prefer the examples for the right). Which kind of plays into your broader point:

There is no Russell conjugation in play. It is not "your humans produce articles, your opponents' partisans demonstrate bias, and AIs make slop." It is nearly all slop regardless.

Can I extend this to your view on the OP being that it doesn't matter at all that the article that Adam Silver reposted is AI slop, versus your definition of "slop" in general? It doesn't move your priors on Adam Silver[1] (the reposter), X (the platform), or Yahoo Entertainment (the media institution) even an iota?

  • [1] Leaving the error. NBA playoffs on the mind.

Personally: AI-hallucinated quotes are worse than fabricated quotes, because the former masquerades as journalism whereas the latter is just easily-falsifiable propaganda.

AI-hallucinated quotes seem likely to be exactly as easy to falsify as human-fabricated quotes, and easily-falsifiable propaganda seems to be an example of something masquerading as journalism. These just seem like descriptions of different aspects of the same thing.

Can I extend this to your view on the OP being that it doesn't matter at all that the article that Adam Silver reposted is AI slop, versus your definition of "slop" in general? It doesn't move your priors on Adam Silver (the reposter), X (the platform), or Yahoo Entertainment (the media institution) even an iota?

I'm not Dean, but I would agree with this. I didn't have a meaningful opinion on Yahoo Entertainment, but, assuming that that article was indeed entirely AI-generated, the fact that it was produced that way wouldn't reflect negatively or positively on them, by my view. Publishing a falsehood does reflect negatively, though. As for Silver (is it not Nate?), I don't expect pundits to fact-check every part of an article before linking it, especially a part unrelated to the point he was making, and so him overlooking the false quote doesn't really surprise me. Though, perhaps, the fact that he chose to link a Yahoo Entertainment article instead of an article from a more reputable source reflects poorly on his judgment; this wouldn't change even if Yahoo Entertainment hadn't used AI and the reputable outlet had.

AI-hallucinated quotes seem likely to be exactly as easy to falsify as human-fabricated quotes, and easily-falsifiable propaganda seems to be an example of something masquerading as journalism. These just seem like descriptions of different aspects of the same thing.

I'm clumsily trying to capture the sentiment that AI-hallucinated quotes and human-fabricated quotes have different motivations that can be attacked in order to discourage them: the former is basically increasing revenue without increasing costs, and the latter is the age-old "lie to someone to manipulate them." I don't think either are particularly moral, and it's a cultural battle to be waged against both. I don't think we'll ever convince fellow humans to stop lying to manipulate people, but I can at least imagine a world where we universally condemn media companies who publish AI slop. We've done it with companies who try to sell cigarettes to children, for at least one example of "universal condemnation."

Mea culpa, I shouldn't have said "worse", but "more easily discouraged".

I don't think either are particularly moral, and it's a cultural battle to be waged against both. I don't think we'll ever convince fellow humans to stop lying to manipulate people, but I can at least imagine a world where we universally condemn media companies who publish AI slop.

So I do think there's a big weakness with LLMs in that we don't quite have a handle on how to robustly or predictably reduce hallucinations the way we can with human hallucinations and fabrications. But that's where I think the incentives of the editors/publishers come into play. Outlets that publish falsehoods by their human journalists lose credibility and can also lose lawsuits, which gives the people in charge an incentive to check the copy their human journalists generate before publishing it, and I see similar controls as being effective for LLMs.

Now, examples like Rolling Stone's A Rape on Campus article show that this control system isn't perfect, particularly when the incentives for the publishers, the journalists, and the target audience are all aligned with respect to pushing a certain narrative rather than conveying truth. I don't think AI text generators exacerbate that, though.

I also don't think it's possible for us to enter a world where we universally condemn media companies who publish AI slop, though, unless "slop" here refers specifically to lies or the like. Given how tolerant audiences are of human-made slop and how much cheaper AI slop is compared to that, I just don't see there being enough political or ideological will to make such condemnation even a majority, much less universal.

Can I extend this to your view on the OP being that it doesn't matter at all that the article that Adam Silver reposted is AI slop, versus your definition of "slop" in general? It doesn't move your priors on Adam Silver (the reposter), X (the platform), or Yahoo Entertainment (the media institution) even an iota?

You can strawman me in whatever way you prefer.

Nope, just trying to move the goalposts so we're at least on the same playing field.

Unfortunately you edited your comment though and now I've completely lost the context of our discussion. Maybe I'll wait next time for the second version of your argument.

Unfortunately you edited your comment though

The edit was for grammatical clarity. You remain free to assign to me any positions I have not taken as part of your goalpost moving.