
Culture War Roundup for the week of December 29, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

How about a different kind of AI culture war? I speak, of course, of non-consensual pornography generation. The most outrageous article I read about this recently was probably this AP article: Boys at her school shared AI-generated, nude images of her. After a fight, she was the one expelled. The girl in question is 13; she started a fight on a school bus with one of the boys who was later charged with a crime for sharing the images.

The girls begged for help, first from a school guidance counselor and then from a sheriff’s deputy assigned to their school. But the images were shared on Snapchat, an app that deletes messages seconds after they’re viewed, and the adults couldn’t find them. The principal had doubts they even existed.

Among the kids, the pictures were still spreading. When the 13-year-old girl stepped onto the Lafourche Parish school bus at the end of the day, a classmate was showing one of them to a friend.

“That’s when I got angry,” the eighth grader recalled at her discipline hearing.

Fed up, she attacked a boy on the bus, inviting others to join her. She was kicked out of Sixth Ward Middle School for more than 10 weeks and sent to an alternative school. She said the boy whom she and her friends suspected of creating the images wasn’t sent to that alternative school with her. The 13-year-old girl’s attorneys allege he avoided school discipline altogether.

When the sheriff’s department looked into the case, they took the opposite actions. They charged two of the boys who’d been accused of sharing explicit images — and not the girl.

It turns out that finding apps that advertise this kind of functionality is not hard. In fact, part of the reason I bring this up is that this capability seems to be integrated into one of the largest AIs: Grok. There's been some controversy on X over the last couple of days after Grok allegedly generated pornographic images of a couple of minor girls. Additionally, the bot's "media" tab was disabled, allegedly after the discovery that lots of people were using the bot to make pornographic edits of other people's pictures. Though the media tab is gone, I did not find it very hard to get Grok to link me its own posts containing these kinds of edits.

There is, I think understandably, a lot of controversy going around about this. It's not that it was previously impossible to make this kind of content, but the fidelity and availability were much more limited, and producing it certainly required more technical skill. Being able to do it without even leaving your favorite social media app seems like something of a game changer.

Frankly, I am unsure where to go with this as a policy matter. Should someone be liable for this? Criminally or civilly? Who? Just the user who generates the image? The tool that does the generating? As a general matter I have some intuitions that this sort of AI conduct is tortious, but I have difficulty locating who should be liable.

There's just not any way around this. I have an AI image-generation model on my computer right now; anyone with a current-gen MacBook could inpaint any image into pornography. It's not the kind of thing you can realistically ban. As a society we're just going to have to find a way to deal with this, the same way we deal with the fact that anyone, at any time, could have drawn these same images if they wanted to badly enough. The genie is thoroughly out of the bottle, and no amount of outrage will ever put it back.

It's not the kind of thing you can realistically ban.

I think this mistakes different types of bans/controls and their different purposes.

One way a ban/control may operate is to try to pre-emptively prevent certain events from occurring. When folks try to control, say, ammonium nitrate following the Oklahoma City bombing, they're often trying to prevent someone from acquiring some of the tools used to create a large bomb, ultimately in the hopes of preventing said hypothetical bomb from being used to kill people and destroy stuff. Whether or not this is practical is beside the point; what matters is that this is the purpose of the effort. Similarly for controls on nuclear material.

Importation controls are somewhat similar in that they may be trying to prevent an event from occurring at all. The funny example I go to sometimes is the ban on Chinese drywall. The intent was to prevent it from even getting into the country, pre-emptively preventing whatever harms it may (or may not) later produce. Or see, for example, the discussion below about possible controls on UAS; I read that conversation to be primarily pondering whether controls can be put in place which pre-emptively prevent a significant number of events, to what extent such controls will be effective (how hard is it for folks to still "roll their own"?), etc.

Many other bans/controls are post-hoc controls, assigning liability/culpability after a sufficient number of steps have been taken toward an event or after the event has occurred. These are different in type. Probably the majority of controls are like this. I might even say that part of the reason so many controls are like this is that it is not reasonable to control the inputs that lead up to an event, whether due to "dual use" considerations or other factors.

For a silly example, rope can be used to tie someone up when kidnapping them. Well, basically no one thinks it's reasonable to put heavy controls on possessing rope. But basically no one thinks that kidnapping is "not the kind of thing you can realistically ban", either. That people have widespread access to the tool used is sort of neither here nor there when considering post-hoc controls on the use of those tools for specific events.

What I find strange is that I've really only seen this come up for digital tools. There's this weird perspective that if a digital tool is "out there" and accessible, if "the genie is out of the bottle", then it's simply unrealistic to use any sort of law to restrict any use one might make of it. That still seems wild to me. Rope is a technology that is "out there". "The genie is out of the bottle." Even the Primitive Technology guy makes his own! I sorrrrta think that we can still ban kidnapping.

[EDIT: I forgot to add what I had wanted to say about the UAS conversation. Suppose, after consideration, it seems infeasible to use a Type I control to prevent things like killing people with UAS. Can't even manage to stop someone from flying into, say, a crowd at an open sports stadium. I don't see any reason why someone couldn't want a Type II control, still making it illegal to fly a UAS into a stadium or to kill people with a UAS. Sure, maybe you can't prevent it, but to the extent that you have the investigative tools to prove in a court of law who is culpable for doing it, you can still prosecute them.]

Of course, once we're in a Type II ban world instead of a Type I ban world, then there is some amount of "we have to get used to the fact that this type of event will actually happen significantly more often than events that we can control with Type I bans". Frequencies and percentages will depend heavily on specifics. And maybe that's the sentiment you're going for. Sure, we're not going to be able to meaningfully pre-emptively prevent fake AI nudes from being generated, just like we can't really pre-emptively prevent rope-enabled kidnappings. But folks may still want to try a Type II control. The extent to which even a Type II control can be considered effective certainly depends extremely heavily on specifics, including an analysis of post-hoc investigation techniques, surrounding legal frameworks, resource considerations, and even the oft-debated deterrence theory of government sanctions.

The funny example I go to sometimes is the ban on Chinese drywall.

This ban came about because we imported a lot of shitty Chinese drywall that later outgassed sulfur compounds. It wasn't pre-emptive; it was punitive.

This is different from the UAS ban for several reasons including

  1. UAS that do bad stuff on their own or at the surreptitious direction of their foreign manufacturer are largely only theoretical. DJI has been accused of uploading flight logs during an update, but that's it.

  2. It applies to components, too, including components such as motors and batteries that could not be compromised to do the bad stuff theorized.

The reason for the UAS import ban is to prevent Americans from doing bad things with a UAS on purpose, not for any damage done by the manufacturer or manufacturer's country.

shitty Chinese drywall that later outgassed sulfur compounds

For the purposes of my comment, it is this temporal relationship that matters: the harm comes after the drywall is imported, so a ban on importation pre-emptively prevents it. Sure, the other temporal relationship, between folks realizing the problem and then choosing to enact the ban, runs the other way, but the first one is the one that holds the conceptual link.

I'm certainly not going to defend the UAS/component ban, either, but that's not the point here. The point is that even if we assume all of that is dumb and doesn't make sense as a Type I ban, we can still make it illegal to use a UAS to kill someone, or even just make it illegal to fly a UAS into a stadium, and this type of ban will have particular qualities tied to the specifics.