
Culture War Roundup for the week of December 29, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


How about a different kind of AI culture war? I speak, of course, of non-consensual pornography generation. The most outrageous article I read about this recently was probably this AP article: Boys at her school shared AI-generated, nude images of her. After a fight, she was the one expelled. The girl in question is 13, and she started a fight on a school bus with one of the boys who was later charged with a crime for sharing the images.

The girls begged for help, first from a school guidance counselor and then from a sheriff’s deputy assigned to their school. But the images were shared on Snapchat, an app that deletes messages seconds after they’re viewed, and the adults couldn’t find them. The principal had doubts they even existed.

Among the kids, the pictures were still spreading. When the 13-year-old girl stepped onto the Lafourche Parish school bus at the end of the day, a classmate was showing one of them to a friend.

“That’s when I got angry,” the eighth grader recalled at her discipline hearing.

Fed up, she attacked a boy on the bus, inviting others to join her. She was kicked out of Sixth Ward Middle School for more than 10 weeks and sent to an alternative school. She said the boy whom she and her friends suspected of creating the images wasn’t sent to that alternative school with her. The 13-year-old girl’s attorneys allege he avoided school discipline altogether.

When the sheriff’s department looked into the case, they took the opposite actions. They charged two of the boys who’d been accused of sharing explicit images — and not the girl.

It turns out that finding apps that advertise this kind of functionality is not hard. In fact, part of the reason I bring this up is that this capability seems to be integrated into one of the largest AIs: Grok. There's been some controversy on X over the last couple of days after Grok allegedly generated pornographic images of a couple of minor girls. Additionally, the bot's "media" tab was disabled, allegedly after the discovery that lots of people were using the bot to make pornographic edits of other people's pictures. Though the media tab is gone, I did not find it very hard to get Grok to link me to its own posts with these kinds of edits.

There is, I think understandably, a lot of controversy going around about this. It's not that it was previously impossible to make this kind of content, but the fidelity and availability were much more limited, and producing it certainly required more technical skill. Being something you can do without even leaving your favorite social media app seems like something of a game changer.

Frankly, I am unsure where to go with this as a policy matter. Should someone be liable for this? Criminally or civilly? Who? Just the generating user? The provider of the tool that does the generating? As a general matter, I have some intuitions about AI conduct being tortious, but difficulty locating who should be liable.

There's just no way around this. I have an AI image-gen model on my computer right now, and anyone with a current-gen MacBook could inpaint any image into pornography. It's not the kind of thing you can realistically ban. As a society, we're just going to have to find a way to deal with this, the same way we deal with the fact that anyone, at any time, could have drawn these same images if they wanted to badly enough. The genie is thoroughly out of the bottle, and no amount of outrage will ever put it back.

You could make it pretty broadly inaccessible: ban all open-weight models; require any image-generation service to have strict safeguards and to report attempts to the authorities; enforce severe criminal penalties. Your existing model would be pretty much untouchable, but it couldn't easily be shared, and a decade from now most copies of it would have been lost to end users. You could even require manufacturers to include firmware on new hardware that bails on unapproved workloads, though that seems like it'd be overkill.

Not saying that this is what I'd like, but it seems doable.

ban all open-weight models

This seems harder than it sounds. Some of the best models aren't published by the West (DeepSeek is probably the best open text model at the moment, I hear [1]), so you'd need global agreement to start cracking down. And these models aren't that big, file-wise: Hollywood wasn't able to keep movie rips off of torrent sites a decade back, and from what I hear they're still around; international VPNs are pretty ubiquitous too. Short of constructing your own Great Firewall, this isn't really feasible (and even the Great Firewall only makes circumvention impractical, not impossible, from what I hear).

  1. Funny note: a while back I was talking to a friend at Unnamed Defense Co (TM) who was excited about their new, entirely in-house AI service for engineers. When I asked which models they were running, "DeepSeek" was one of the sheepish responses, admittedly alongside GPT-OSS.

The goal wouldn't be to make it so that literally no one in the USA could run an open-weight model; it would be to add friction points that make it more trouble than it's worth for all but the most dedicated people. You wouldn't need any kind of global agreement, just national focus and cooperation from large tech companies to limit access: DNS blocks, removal from Google search results, and so on. A relatively small amount of effort can keep the bulk of casual users from having access.
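
For a sense of scale, a resolver-level block is roughly this much machinery. Here's a toy Python sketch, not anything anyone has proposed in this thread; the blocked domain and the function name are made up for illustration:

    # Toy sketch of resolver-level domain blocking, the kind of
    # "friction" measure described above. The blocked domain is invented.
    import socket

    BLOCKLIST = {"example-model-host.example"}  # hypothetical blocked domain

    def resolve(hostname: str) -> str:
        """Resolve a hostname, refusing anything on the blocklist."""
        if hostname in BLOCKLIST:
            raise PermissionError(f"{hostname} is blocked by resolver policy")
        return socket.gethostbyname(hostname)

A casual user hits the wall; anyone willing to switch to an unfiltered resolver or a VPN walks right around it, which is exactly the friction-versus-airtight trade-off at issue.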

That's just if you get a domestic consensus to treat open-weight models as something comparable to copyright violation. If instead the public started seeing them the same way as CSAM, you could go a whole lot further: still theoretically accessible, but very rare.

It's not even slightly doable, theoretically or otherwise. The knowledge of how these models work is broadly available. Further, not only are adversarial countries going to completely ignore your desire for model control, they are also currently the ones who produce most of our hardware, including FPGAs and GPUs. And you can't include firmware in new hardware that will survive contact with the consumer: firmware flash chips are easily desoldered, dumped, and reprogrammed, and firmware mods and flashing tools are readily accessible.

How to make CSAM is widely known, and plenty of places don't cooperate usefully with the USA in stopping it. Despite that, the USA does manage to broadly limit how much it proliferates.

I'm not saying that it's a good idea, and I'm not saying that open-weight models could be completely eliminated. I am saying that they could be quite effectively suppressed, as there are plenty of tools the government can use to enforce a ban, imperfectly but substantially.

The government can't even stop people from typing yandex.ru into their browsers and gaining access to any movie they wish to consume in seconds. The same goes for LLMs: Z.AI's models and those of various other Chinese companies will discuss at length any topic that Western LLM makers consider taboo and train their own models to gaslight the consumer about.

Frankly, I don't think the West is going to be able to do anything meaningful or effective about this. The only thing they'll achieve is some sort of government-mandated backdooring or spying built into systems like motherboards and GPUs, and even then it will catch only the least sophisticated consumers.

Torrenting continues to exist; you just can't realistically prevent the distribution of a few gigabytes of data. And even if you eradicated all currently existing models, it's not particularly hard to train the safeguards out of new ones, unless we're just never going to let professionals render images locally.