This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

There's just not any way around this. I have an AI image-gen model on my computer right now; anyone with a current-gen MacBook could inpaint any image into pornography. It's not the kind of thing you can realistically ban. As a society we're just going to have to find a way to deal with this, the same way we deal with the fact that anyone, at any time, could have drawn these same images if they wanted to badly enough. The genie is thoroughly out of the bottle, and no amount of outrage will ever put it back.
I think this mistakes different types of bans/controls and their different purposes.
One way a ban/control may operate is to try to pre-emptively prevent certain events from occurring. When folks try to control, say, ammonium nitrate following the Oklahoma City bombing, they're often trying to prevent someone from acquiring some of the tools used to create a large bomb, ultimately in the hopes of preventing said hypothetical bomb from being used to kill people and destroy stuff. Whether or not this is practical is not the point here; the point is that this is the point of the effort. Similarly for controls on nuclear material.
Importation controls are somewhat similar in that they may be trying to prevent an event from occurring at all. The funny example I go to sometimes is the ban on Chinese drywall. The intent was to prevent it from even getting into the country, pre-emptively preventing whatever harms it may (or may not) later produce. Or see, for example, the discussion below about possible controls on UAS; I read that conversation to be primarily pondering whether controls can be put in place which pre-emptively prevent a significant number of events, to what extent such controls will be effective or not effective (how hard is it for folks to still "roll their own"?), etc.
Many other bans/controls are post-hoc controls, assigning liability/culpability after a sufficient number of steps have been taken toward an event or after the event has occurred. These are different in type. Probably the majority of controls are like this. I might even say that part of the reason why so many controls are like this is because it is not reasonable to control the inputs that are used to lead up to an event. This may be in part due to "dual use" considerations or other factors.
For a silly example, rope can be used to tie someone up when kidnapping them. Well, basically no one thinks it's reasonable to put heavy controls on possessing rope. But basically no one thinks that kidnapping is "not the kind of thing you can realistically ban", either. That people have widespread access to the tool used is sort of neither here nor there when considering post-hoc controls on the use of those tools for specific events.
What I find strange is that I've really only seen this come up for digital tools. There's this weird perspective that if someone uses specifically a digital tool that is "out there" and accessible, that the "genie is out of the bottle", then it's simply unrealistic to use any sort of law to restrict any use one might make of that tool. That still seems wild to me. Rope is a technology that is "out there". "The genie is out of the bottle." Even the Primitive Technology guy makes his own! ...I sorta think that we can still ban kidnapping.
[EDIT: I forgot to add what I had wanted to say about the UAS conversation. Suppose, after consideration, it seems infeasible to use a Type I control to prevent things like killing people with UAS. Suppose we can't even manage to stop someone from flying one into, say, a crowd at an open sports stadium. I don't see any reason why someone couldn't want a Type II control, still making it illegal to fly a UAS into a stadium or to kill people with a UAS. Sure, maybe you can't prevent it, but to the extent that you have the investigative tools to prove in a court of law who is culpable for doing it, you can still prosecute them.]
Of course, once we're in a Type II ban world instead of a Type I ban world, then there is some amount of "we have to get used to the fact that this type of event will actually happen significantly more often than events that we can control with Type I bans". Frequencies and percentages will depend heavily on specifics. And maybe that's the sentiment you're going for. Sure, we're not going to be able to meaningfully pre-emptively prevent fake AI nudes from being generated, just like we can't really pre-emptively prevent rope-enabled kidnappings. But folks may still want to try a Type II control. The extent to which even a Type II control can be considered effective certainly depends extremely heavily on specifics, including an analysis of post-hoc investigation techniques, surrounding legal frameworks, resource considerations, and even the oft-debated deterrence theory of government sanctions.
Sure, and we can discuss Type II controls. But we're going to get into "what are we even doing here?" territory very quickly when anyone who wants to put together a piddly little indie game with player-controlled image gen has to spend time implementing some easily circumvented controls to prevent certain classes of images from being generated. And it's not just the deepfakes; we're going to have to get used to every image or video on the internet without verifiable provenance being suspect. A lot of people seem to think this future is avoidable, and it just isn't. People are going to be able to make deepfakes of others as easily as they can imagine them nude. We should try to teach young men not to do this, or at least not to do it publicly, just as we mostly manage to teach them not to describe to other people what they imagine one of their classmates would look like nude. But this is fundamentally a social problem that people are trying to solve with ill-fitting legal action. Do you think a kid should get expelled because he imagined what a classmate looked like nude? If a kid drew a picture of his classmate in the buff, should he be punished?
Sorry, h-what? This is truly out of left field.
Yeah, sure, agreed. Not sure the relevance.
I cannot possibly think of how this is remotely responsive to my comment. The answer is obviously no, but the mind is boggled.
This ban came about because we imported a lot of shitty Chinese drywall that later outgassed sulfur compounds. It wasn't pre-emptive; it was punitive.
This is different from the UAS ban for several reasons, including:
- UAS that do bad stuff on their own, or at the surreptitious direction of their foreign manufacturer, are largely only theoretical. DJI has been accused of uploading flight logs during an update, but that's it.
- It applies to components, too, including components such as motors and batteries that could not be compromised to do the bad stuff theorized.
- The reason for the UAS import ban is to prevent Americans from doing bad things with a UAS on purpose, not for any damage done by the manufacturer or manufacturer's country.
For the purposes of my comment, it is this temporal relationship that matters. Sure, there is another temporal relationship, between folks realizing the harm and choosing to enact the ban, and that's fine. But the first one is the one that holds the conceptual link.
I'm certainly not going to defend the UAS/component ban, either, but that's not the point here. The point is that even if we assume that all of that is dumb and doesn't make sense as a Type I ban, we can still make it illegal to use a UAS to kill someone or even just make it illegal to fly a UAS into a stadium or something, and this type of ban will have particular qualities tied to the specifics.
Well, if GPU and RAM prices are any indication, we might get some de facto restrictions, in that very few can afford a rig powerful enough to actually produce the images.
I was generating porn locally with Stable Diffusion XL two years ago, running on an $800 gaming laptop with an RTX 4050 and 6 GB of VRAM. Most of what I made was hentai, but it would have been trivial to train a LoRA on a couple dozen SFW photos of a particular girl and then make porn of her on demand.
I get the feeling people here vastly overestimate the hardware required for generating random NSFW images, because so much of the discussion is about LLMs, which do require an order of magnitude or two more hardware. If you don't care much about prompt understanding, concept flexibility, or accuracy of poses and such, even "ancient" (i.e. SDXL-era) models are more than capable of doing the job on piddly half-decade-old computers that can be bought for $300 second-hand.
Even if you really care about prompt-adherence, there are realistic Pony finetunes you can use to get a model that can understand *booru tags.
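The gap between image models and LLMs can be made concrete with a back-of-the-envelope weight-memory estimate. This is a rough sketch, not a benchmark: the parameter counts below are approximate public figures (SDXL's UNet is around 2.6B parameters plus roughly 0.8B of text encoders; "70B" stands in for a large open-weight LLM), and real VRAM use also depends on activations, offloading, and quantization.

```python
# Rough weight-memory comparison: image-gen models vs. large LLMs.
# Parameter counts are approximate public figures, used for illustration only.

def fp16_gigabytes(params: float) -> float:
    """Weight memory at fp16 (2 bytes per parameter), in GB."""
    return params * 2 / 1e9

# SDXL: ~2.6B UNet + ~0.8B text encoders (VAE is comparatively tiny)
sdxl_params = 2.6e9 + 0.8e9
# A large open-weight LLM with 70B parameters
llm_params = 70e9

sdxl_gb = fp16_gigabytes(sdxl_params)
llm_gb = fp16_gigabytes(llm_params)

print(f"SDXL weights at fp16:   ~{sdxl_gb:.1f} GB")
print(f"70B LLM weights at fp16: ~{llm_gb:.0f} GB")
print(f"ratio: ~{llm_gb / sdxl_gb:.0f}x")
```

The ~7 GB figure is the whole model; since the text encoders aren't needed during denoising, offloading lets the pipeline squeeze onto a 6 GB card, while the 140 GB LLM needs multiple datacenter GPUs. That is the "order of magnitude or two" gap in a nutshell.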
Why would that be the case when a seven-year-old laptop is already powerful enough to do it? You don't need fancy new hardware when existing, far-from-top-of-the-line hardware will do fine.
This is true, but unless the intention is to keep salvaging old hardware as the various components die, we're still ending up in the same place.
(My actual guess is that capacity WILL expand to meet demand, so this is probably a shortish term crunch)
I'm not talking about old high-end GPUs but the middle / low-middle end that's now eclipsed even by integrated GPUs. When you equalize for processing power, GPUs are still way cheaper than when hardware capable of image generation first became common (which was several years before the software was invented). You really don't need a 32 GB 5090 just to do some basic NSFW generation / inpainting.
You could make it pretty broadly inaccessible: ban all open-weight models; require any image generation to have strict safeguards and reporting of attempts to authorities; enforce severe criminal penalties. Your existing model would be pretty much untouchable, but it couldn't easily be shared, and a decade from now most copies of it would have been lost to end users. You could even require manufacturers to include firmware on new hardware that bails on unapproved workloads, but that seems like it'd be overkill.
Not saying that this is what I'd like, but it seems doable.
This seems harder than it sounds. Some of the best models aren't published in the West (DeepSeek is probably the best open text model at the moment, I hear [1]), so you'd need global agreement to start cracking down. And the small models aren't that big: Hollywood wasn't able to keep rips off of torrent sites a decade back, and from what I hear they're still around; international VPNs are pretty ubiquitous too. Short of constructing your own Great Firewall, this isn't really feasible (and even then, it's just a matter of practicality, from what I hear).
The goal wouldn't be to make it so literally no one in the USA could run an open weights model; it would be to add friction points to make it more trouble than it's worth, except for the most dedicated people. You wouldn't need any kind of global agreement, just a national focus and working with large tech companies to limit it. DNS blocks, removing them from Google search results, etc. A relatively small amount of effort can prevent the bulk of casual users from having access to them.
That's just if you get the domestic consensus to look at open-weight models as something comparable to copyright violation. If instead the public started seeing them the same way as CSAM, you could go a whole lot further: still theoretically accessible, but very rare.
Should work as well as anti-piracy controls.
It's not even slightly doable, even in theory. The theoretical knowledge of how these models work is broadly available. Further, not only are adversarial countries going to completely ignore your desire for model control, they are also currently the ones producing most of our hardware, including FPGAs and GPUs. And you can't include firmware in new hardware that can survive contact with the consumer: the flash chips are easily desoldered, dumped, and re-programmed, and firmware mods and flashing tools (like NVFlash) are easily accessible.
How to make CSAM is widely known, and plenty of places don't cooperate usefully with the USA in stopping it. Despite that, the USA does manage to broadly limit how much it proliferates.
I'm not saying that it's a good idea, and I'm not saying that open weight models could be completely eliminated. I am saying that they could be quite effectively suppressed, as there are plenty of tools that the government can use to enforce a ban, imperfectly but substantially.
The government can't even stop people from typing yandex.ru into their browsers and gaining instant access to any movie they wish to consume in seconds. Same for LLMs: Z.AI's and various other Chinese companies' models will discuss at length any particular topic that Western LLM makers consider taboo and try to make their models gaslight the consumer about.
Frankly, I don't think the West is going to be able to do anything about this in a meaningfully effective way. The only thing they'll achieve is some sort of government-mandated backdooring/spying on systems like mobos/GPUs, and even then it will catch only the least sophisticated consumers.
Hell, The Pirate Bay itself is still operating just fine two entire decades later, and the only hitch is that you have to google "piratebay mirror" and use one of those links.
Torrenting continues to exist; you just can't realistically prevent the distribution of a few gigs of data. Even if you eradicated all the currently existing models, it's not particularly hard to train the safeguards out of new ones, unless we're just never going to let professionals render images locally.