This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Sora is dead
It turns out that spending hundreds of millions for users to make useless slop videos was having a meaningfully negative financial impact. The bizarre thing is that Disney signed a $1b deal with OpenAI just a few months ago - who fucked up here? Of course, there are many more video AI tools out there, with fewer considerations for copyright law. But for now, Hollywood doesn't have much to worry about, at least on this front.
I tried using Sora for about a month at the end of last year, but I had to stop due to getting banned. Grok Imagine wouldn't ban me, so I've been using that instead. My wild guess is that a social media platform based entirely around AI generated videos like Sora can only exist in a sustainable way if it's explicitly for erotic/pornographic material - there's simply not enough demand for creating or viewing AI generated videos that aren't in that category to get enough users and views to pay for the generations.
I've got mixed feelings. Sora was clearly much better and more flexible than Grok Imagine, so I would've loved to see it develop further; at the same time, the lower censorship in Grok, and xAI's general attitude toward censorship versus OpenAI's, makes me think improvements in Grok are more likely to bear fruit. Of course, without Sora around, xAI has less reason to improve Grok... And Grok Imagine is still censored too, which isn't great, but it's the least bad option. In the long run, I'd hope local video generation becomes "good enough," but that'll probably require a world where dual 5090s with 64GB of VRAM are considered a quaint little living-room computer for sending emails and running old games at a tolerable 25fps, which I'm guessing is within two decades.
For a while I assumed that the big AI companies would permit porn generation eventually. They might act high and mighty now, but a time would come when they needed to show revenue, public perception be damned. Unfortunately, I was naive enough that my conception of pornography did not extend to the type that could be legally problematic. Back in January, a bipartisan group of 35 attorneys general published a letter to Elon Musk asking for assurances that the company was taking steps to protect against NCII and CSAM, though it's unclear if he ever responded. Last week, a class action suit was filed in the Northern District of California, alleging that xAI is responsible for producing nonconsensual nude images of three underage named plaintiffs. Yesterday, the Baltimore city attorney filed a lawsuit alleging violations of various city consumer-protection ordinances.
It appears to me that these issues are probably solvable. Disallowing generative editing of user-uploaded images seems like a no-brainer. The CSAM issue is a tougher nut to crack, but the NCII issue is what was getting everyone's attention, so if that goes away I doubt the existing safeguards against CSAM would be found deficient. But the cat's out of the bag at this point; Elon fucked up and now he's under the scrutiny of people who have the power to make life miserable for him. I imagine the class action suit will settle, but it will take years, and Elon is hard-headed enough that he might decide to make a statement out of it. The plaintiffs' attorney seems to have selected the worst possible place to file, as I don't imagine you're going to find a more tech-industry-friendly jury pool anywhere than in Northern California. The Baltimore case is on less solid ground, and the potential exposure is likely lower (I can't imagine it being more than a few thousand dollars per proven victim), so it may make more sense to fight that one, although all that will accomplish is proving that he didn't violate a specific Baltimore consumer protection ordinance.
Grok Imagine has indeed implemented something like this: uploaded images of real humans become extremely difficult to edit without triggering a censor, and I think videos might be right out. Unfortunately, even this gimped censor is severely limiting, and a full-on ban on generative editing or animation of user-uploaded images would take away something like 90% of the use cases for image/video generative AI. So much of using gen AI to produce images and videos is trial and error, iterating between manual edits and AI generations that build on them, often across multiple non-interoperable tools (e.g. generate the original image in Midjourney, edit it locally using Krita and Stable Diffusion, then upload it to Grok Imagine to animate). Without the ability to take arbitrary image input, a tool can only serve as the origin point of that workflow, which doesn't amount to much, or as a simple time-waster for slot-machine generations.
Personally, I do not like the notion that it's possible to arrange pixels in an illegal way without some other independently illegal action as a causal factor, and I hope that attempts to make it so fail horribly. Unfortunately, I'm not optimistic, as it seems to me that support for free speech and free thought isn't very high in the USA right now. This is also why I'm still holding out hope that video generation on local hardware will become "good enough" that private servers owned by companies like OpenAI, xAI, Google, etc. don't become effective gatekeepers of this sort of creative endeavor for the layman (or at least the lay enthusiast).
The reason it gets fuzzy around the edges is that you're dealing with knock-on effects and potential second- and third-order consequences.
In several countries, for example, you can be executed for manufacturing and selling narcotics but not for consuming them. In this case, if you're someone who's producing that kind of content, I can see a rationale for why you'd be in the crosshairs. For people who merely consume the content... I can see why it's 'somewhat' different.
Is it? Even if you only disallowed it for uploaded images of people, that would cripple one of the most popular use categories for generative editing. My kids mostly aren't very interested in AI, but they were thrilled when Gemini started allowing us to turn pictures of them (and their cats) into anime-cartoon style, bobblehead-doll style, kaiju-battle style, and so on. If you disallowed it for all uploaded images, you'd be ruining one of the easiest effective ways to control the output of image generation in general.
I haven't seen that abbreviation before, so I'll explain it for lurkers: nonconsensual intimate images—a category that originally was just revenge pornography (public posting of privately-shared explicit content), but now has been expanded to include explicit and suggestive edits of publicly-posted nonsuggestive content, and sometimes even mere spotlighting of unedited publicly-posted suggestive content.
Fascinating how "consent" came to be a universal moral solvent and, by extension, how a lack of consent can be stretched much further than any sane person might think.
Wow, that phenomenon seems really reminiscent of how "sexual assault" became a catchall term used to create the connotation of forcible rape in contexts where the reality is some sort of harassment of a sexual nature, and how "sex trafficking" became a catchall used to create the connotation of kidnapping women into sexual slavery in contexts where any level of prostitution took place. I've come to really dislike these overt attempts to engineer language in order to covertly manipulate people into believing things one finds useful for them to believe, so I personally just call them "child porn" and "revenge porn."