This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Sora is dead
It turns out that spending hundreds of millions for users to make useless slop videos was having a meaningfully negative financial impact. The bizarre thing is that Disney signed a $1b deal with OpenAI just a few months ago - who fucked up here? Of course, there are many more video AI tools out there, with fewer considerations for copyright law. But for now, Hollywood doesn't have much to worry about, at least on this front.
Startups like Runway and Chinese companies like Kling are still around, and AI video generation is only getting more popular. Big players like Google and TikTok have better in-house models than OpenAI. It's a crowded space: Sora was first to market for this caliber of video model, but the field has since left it behind.
There are two reasons OpenAI abandoned Sora, and neither has much to do with the viability of AI video generation.
The primary reason is that OpenAI was spread too thin. Enterprise agents are the trillion-dollar market, and OpenAI is currently losing it to Anthropic. That spooked OpenAI, and late last year they shifted focus to Codex. Since then, OpenAI has deprioritized ChatGPT, voice models, music models and, of course, Sora. It's not that video generation isn't a lucrative market; it's just two orders of magnitude smaller than the enterprise agents market.
The secondary reason is that OpenAI is not well positioned to win here. Video generation and editing are primarily about control and iterative improvement. You start with a storyboard -> create a first draft -> use a vast toolkit to iteratively get it to the final version. Sora is great at creating the first draft, but the likes of Adobe and Apple have the whole toolkit built out. Unless the model is capable of fine edits, it will not be a useful substitute for filming manually. ChatGPT the product is a thin wrapper on top of ChatGPT the model; the effort needed to turn it into a commercially viable product is minor compared to the research effort of creating a GPT v-next model. With a commercially viable video generation product, by contrast, most of the effort is in the product engineering, not the model itself. That's asking a lot of effort from OpenAI in an area where they are not best equipped to beat seasoned product engineering teams.
tl;dr: Video generation will survive. The bubble isn't popping. A better analogy: 'The gold rush ended because someone just discovered a diamond mine.'
For context, I worked at an LLM/diffusion-based content-gen AI startup for a few years. I was very early to this. Frankly, it is an indictment of my judgement that I am not yet a millionaire. Should've joined OpenAI or Anthropic in early 2023 while I still had the chance. SMH
It seems evident from their actions that engineers at OpenAI lacked the ability or capacity to use GPT-5 to cost-effectively write an Adobe Premiere competitor with Sora integration - with a UI that's just as good, just as intuitive and user-friendly for longtime video editing professionals, just as stable and responsive, and so on.
I wonder if/when AI companies will reach the point where they could just do that for any arbitrary existing software. At what point could one of these companies just instruct the AI to generate an Excel clone that has perfect backwards compatibility to MS Excel, but also has their AI integrated in, and consistently get out a viable software product as the result? What about a Windows clone that has perfect compatibility to all Windows-compatible software, but also has their AI integrated in? What about an Oblivion clone that has perfect compatibility to all existing save files, but also doesn't require major QOL overhaul and performance mods to make enjoyable and also has their AI integrated in?
It relates to the traditional wisdom - "Regulations are written in blood". Much like it, "enterprise software is written in the tears of disillusioned engineers".
Yes, Adobe Premiere is a few million lines of code, and LLMs can create millions of lines of code within weeks. But Adobe Premiere wasn't one-shotted by a person, and an agent can't one-shot it either. The only way to build an excellent enterprise tool is to build a shitty enterprise tool, get feedback, and improve it over time. In startup speak, this private feedback loop is referred to as a 'moat'. LLMs make the loop faster, but you can't skip it. E.g.: state-of-the-art forecasting models have great benchmarks, but routinely generalize worse than ARIMA, and the only way to know this is to have spent years in the trenches trying to get some new paper into prod. The techniques AWS needed to provide 11 nines (99.999999999%) of durability will never be discovered by OpenAI unless they pay off the hundreds of AWS people who meticulously got it there. That information is guarded in a vault somewhere.
Coding agents are an exception because AI companies are their own customers (so feedback loops can precede adoption) and the code/discussions/learnings are publicly available.
The value of the text/images/media that form the feedback comes from how modifying the software as guided by that feedback improves it, as judged by the people who gave the feedback (and people like them) - not from the fact that the content was generated by humans using the software and expressing their opinions. Gathering feedback from actual human users is a great way to ensure it is valuable in this sense, but I don't see why a sufficiently advanced LLM (or LLM-based tool) couldn't generate feedback with just as much value - i.e., feedback such that modifying the software as it suggests improves the software as judged by the target audience - just by predicting the next word. You could then iterate on the software until the feedback crosses some threshold of asking only for small enough changes. I don't think this would count as a "one-shot," but it would require almost as little investment in human effort. It's just that current LLM-based tools don't seem sufficiently advanced (or perhaps they're not sufficiently fast).
I don't think LLMs can generate meaningful human-like feedback of what it feels like to use the software. They just don't see the UI in the way that humans do. And it's not clear that increasing their capabilities can ever fix this.
Still, I do expect that they'll get better and better at iterating quickly and nondestructively based on your feedback, so while it won't be a fully automated dev cycle, I wouldn't be surprised if bespoke AI software replaces giant professional products eventually.
I don't see why LLMs would need to "see" the UI in a way similar to humans in order to generate meaningfully useful feedback for improving the UI (as well as any other element of the software) as judged by humans. It's not like the LLM would need to reason out "this UI element here gets in the way of this process due to that issue, etc." or "in my experience of trying to use this software in my workflow, this UI element could be improved by moving it here," or whatever. It'd be doing naked dumb pattern matching, of predicting words based on the prompt (which would include the sequence of 1s and 0s that make up the software, as well as instructions to produce text that a helpful human tester would provide, or the like) and its weights. There's no proof that this would work, but I also see no reason why simply scaling up current techniques and/or making them faster wouldn't allow LLMs to generate feedback like this which is just as useful as human user feedback.
Because it's really hard to predict how the software is going to be used, and it's not something that can be reasoned out. If that were the case, software companies with full UI teams wouldn't still be responding to user suggestions 50 years into the industry's history. Watch some of Tantacrul's videos on music notation software. He's a software developer by trade and a composer by hobby, so he has tried pretty much every major program on the market, and his video on MuseScore a few years ago resulted in him becoming the head of the development team. Music notation software is particularly ripe for this kind of criticism because it's all notoriously difficult to use and people such as myself who occasionally dabble in music have tried pretty much all of the available programs in a desperate attempt to find something that isn't going to piss us off. Highlights from the comments:
- Sibelius
- Finale
- Dorico
- MuseScore
Watch the videos. They're long, but highly entertaining. And keep in mind that he's only scratching the surface with respect to the problems he describes, and they're all either deliberate design choices or the result of being bound by the limitations of the existing codebase. I don't think you can just get an LLM to figure this stuff out.
I have my doubts, but you make a good point. A lot of the other emergent capabilities have been quite surprising, so there's no guarantee that this is out of the question, either.
I fully endorse this take.
My minor addition is that there's a nice website, https://fal.ai, that provides easy-to-use API access to all of these video models. It's profitable, so it isn't going away. Openrouter (https://openrouter.ai) previously specialized only in serving text models, but it has recently made experimental API improvements to add multimedia output as well.
The media APIs aren't quite as interchangeable as the text APIs, and providers are making an effort to provide some sort of moat around weird features in their APIs to make no frontier model a drop-in replacement for another frontier model. But media generation obviously isn't going away.
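To make that non-interchangeability concrete, here's a minimal sketch of the thin adapter layer you end up writing when no model's API is a drop-in replacement for another's. The provider names and payload shapes below are invented for illustration; real APIs (fal.ai, OpenRouter, etc.) have their own schemas, so consult their docs.

```python
def to_provider_payload(provider: str, prompt: str, seconds: int) -> dict:
    """Map one internal video-generation request onto per-provider shapes.

    Both providers here are hypothetical, illustrating how the same
    request diverges: one wants a flat payload keyed in seconds, the
    other nests options and counts frames instead.
    """
    if provider == "alpha":
        # Imaginary provider: flat payload, duration in seconds.
        return {"prompt": prompt, "duration_s": seconds}
    if provider == "beta":
        # Imaginary provider: nested payload, duration in frames (24 fps).
        return {"input": {"text": prompt}, "options": {"frames": seconds * 24}}
    raise ValueError(f"unknown provider: {provider}")

print(to_provider_payload("beta", "a cat doing yoga", 2))
```

The differing shapes are the point: swapping providers means rewriting this mapping, not just changing a URL.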
Eh, we're in the very early days when it comes to AI video. Sora was a pretty big leap forward compared to prior models, so I think it's reasonable to assume that came at significant cost to train and deploy, which also accounts for the short length of video output and other restrictions. I had the ability to use it, but barely bothered after the odd initial experiment or two.
AI video isn't going anywhere, don't get your hopes up. I can't remember which Chinese company came out with a model matching or exceeding Sora a few months back, but the output was solid. Some of the videos had me in stitches. The demand is there, and the cost curves will continue trending downwards.
Rick Beato was on Lex Fridman's podcast recently, and they were talking about how, when YouTube first came out and nobody really knew what to use it for, one of the thoughts was that it would allow people to distribute their own short films to the public. YouTube is now a mature platform and YouTuber is a job description, yet there is very little resembling traditional filmmaking. I just don't think there's much of a market for it. From Super 8, to camcorders, to everyone having a decent video camera on their phone and access to free editing software, the process has become increasingly democratized over the decades, yet making amateur movies is still a fringe pursuit, and at that, it's mostly people pursuing careers in the industry.
It says something, and I'm not quite sure what, that there is very little "fiction" film on YouTube compared to informational content. Unless there are a bunch of telenovela channels the algorithm has never shown me.
Depends on how you define 'fiction film'. If you're asking for stuff involving multiple actors doing TV-series like stuff, well... yeah.
If you're talking about creative fiction in unique settings, hell, Analogue Horror could send you down a multi-day rabbit hole.
First thought: fiction really requires more than three people with an interest in acting and some vague talent for it. That's not easy to arrange unless you're in an IRL theatre troupe or at a university (the now-defunct CollegeHumor, for example).
Informational video is easier to do on your own. Even then, a lot of people are ex-radio stars.
Short films as in fictional stories? No. But short films like reality TV? That's basically most of YouTube.
Some absurd fraction of the youtubers I watch are pretty clearly "I watched Mythbusters as a child and want to make Mythbusters 2.0". Are youtubers like The Action Lab or Styropyro making "short films"? No. YouTube is basically reality TV, just even lower quality for the most part. Heck, watch MrBeast for a bit and you get "wow, this guy is making reality TV 2.0". (Wow, I literally had to google who this guy was after googling "most popular youtubers" - algorithmic segmentation is strong.)
Genuinely impressive you made it this far not knowing who MrBeast was. Kind of sad that streak ended.
Oh, I had heard of MrBeast before, but I didn't realize he was that big. I thought he was some random youtuber, because the first time I heard of him was through Stand-up Maths, and I didn't know anything beyond "oh, he's some guy who makes YouTube videos." I had no idea his videos were that... uhh, wasteful, mindless entertainment. I just figured "oh, this guy is doing something stupid and Matt Parker is making fun of him, well, okay, whatever."
The fact that my algorithmic segmentation is so strong that I learned about MrBeast through Stand-up Maths is kind of insane. I guess YouTube really knows my taste and goes "yeah, MrBeast, you won't like him; how about some Dwarkesh Podcast and some MichaelPennMath instead" - and tbh, YouTube is right. I do find International Math Olympiad problems more interesting than whatever the hell that MrBeast video was.
I know who Mr. Beast is, in that I recognize the name, but I've never seen any of his content. And content should probably be in scare quotes, since I'm pretty sure that it's all unwatchable filler that goes nowhere.
I've never seen a second of his content, but I somehow know at least some of it is real life Squid Game.
I don't know what Squid Game is either so that doesn't help.
Apparently, it's a shooter game resembling paintball, where squids (or kids, it's ambiguous) battle.
They spray brightly-colored ink at each other until the loser violently explodes. The object of the game is usually to paint the floor, but can be other things.
A Korean fictional show in which people compete for money. Losers are killed. Like playing "red light, green light" on a field surrounded by motion sensing machine guns. Or hard-core tug of war over a very long drop onto concrete. Really quite violent.
I assume Mr Beast isn't murdering the losers in his version.
I tried using Sora for about a month at the end of last year, but I had to stop due to getting banned. Grok Imagine wouldn't ban me, so I've been using that instead. My wild guess is that a social media platform based entirely around AI generated videos like Sora can only exist in a sustainable way if it's explicitly for erotic/pornographic material - there's simply not enough demand for creating or viewing AI generated videos that aren't in that category to get enough users and views to pay for the generations.
I have mixed feelings. Sora was clearly much better and more flexible than Grok Imagine, so I would've loved to see it develop further; at the same time, the lower censorship in Grok, and xAI's general attitude toward censorship versus OpenAI's, makes me think improvements in Grok are more likely to bear fruit. Of course, without Sora around, xAI has less reason to improve Grok... And Grok Imagine is still censored too, which isn't great, but it's the least worst, at least. In the long run, I'd hope local video generation becomes "good enough," but that'll probably require a world where dual 5090s with 64GB of VRAM are considered a quaint little living-room computer for sending emails and running old games at a tolerable 25fps - which I'm guessing is within two decades.
The Grok app no longer allows me to generate videos. It went from 50 a day (!) to zero unless I pay 30 a month for SuperGrok.
Thing is, on desktop I can still generate videos (though unclear if they're still so generous as to give me 50) with my 13/month Premium X membership. Go figure. It's interrupted my mostly mobile "flow" but at least I'm not cut off completely.
Lol. What were you doing on the platform?
That's why consumer AI will almost certainly never have broad-based appeal. I mean, let's just be honest about what the real use cases are... 4chan is similar to what most people would be using it for.
If you ask it questions related to mental health, it'll refuse to answer and tell you to seek professional help. If you ask it for advice about interpersonal relationships, it'll say it won't aid you in manipulating other people (???). If you ask it questions about fixing a car, it'll give you straight-up misinformation.
I've noticed it so many times. If you test the models with anything that involves an edge case, either the consequences of returning a firm answer are too large, so it refuses, or it refuses to supply any information that has the potential to be abused. So what does that really leave you with? A cool plugin for some things that also doubles as a toy that can generate images. Trillion-dollar industry? Call me skeptical.
I think a good litmus test for these systems is to administer online IQ tests to them. If one can't get a perfect score with low reaction times, consider it baloney. A cheap metric, but a viable place to start - and that's still an 'extremely' far cry from getting an AI system to actually innovate or do something creative.
None of this is true? I've literally used it for all 3 of those things recently.
If you have any prompts for me, I will run them through any model of ChatGPT you like (I have pro).
Depends on how you ask it, I suppose. I've run into all three on multiple occasions. I'm also not using ChatGPT.
Evidently, trying to produce videos that OpenAI disapproved of. More precisely, IIRC, I was inspired by some videos I saw on Sora where, apparently through some clever prompting and/or iterations, the user had managed to generate and share a video of a woman doing yoga, shown from suggestive angles. I was experimenting with doing the same when I got the ban email. Grok Imagine has lines as well, as I alluded to before, but it places its lines very very far from where Sora did.
For a while I assumed that the big AI companies would permit porn generation eventually. They might want to act high and mighty now, but a time would come when they needed to show revenue, public perception be damned. Unfortunately, I'm naive enough that my conception of pornography did not extend to the type that could be legally problematic. Back in January, a bipartisan group of 35 attorneys general published a letter to Elon Musk asking for assurances that the company was taking steps to protect against NCII and CSAM, though it's unclear if he ever responded. Last week, a class action suit was filed in the Northern District of California, alleging that xAI is responsible for producing nonconsensual nude images of three underage named plaintiffs. Yesterday, the Baltimore city attorney filed a lawsuit alleging violation of various city ordinances involving consumer protection.
It appears to me that these issues are probably solvable. Disallowing generative editing of user-uploaded images seems like a no-brainer. The CSAM issue is a tougher nut to crack, but it seems like the NCII issue is what was getting everyone's attention, so if that goes away then I doubt that the existing safeguards against the latter would be found deficient. But the cat's out of the bag at this point; Elon fucked up and now he's under the scrutiny of people who have the power to make life miserable for him. I imagine the class action suit will settle, but it will take years, and Elon is hard-headed enough that he might decide to make a statement out of it. The plaintiff's attorney seems to have selected the worst possible place to file, as I don't imagine you're going to find a more tech industry-friendly jury pool anywhere outside of Northern California. The Baltimore case is on less solid ground, and the potential exposure is likely lower (I can't imagine it being more than a few thousand dollars per proven victim), so it may make more sense to fight that one, although all that will accomplish is proving that he didn't violate a specific Baltimore consumer protection ordinance.
Grok Imagine has indeed implemented something like this: uploaded images of real humans become extremely difficult to edit without triggering a censor, and I think videos might be right out. Unfortunately, even this gimped censor is severely limiting, and a full prevention of generative editing or animation of user-uploaded images would take away something like 90% of the use cases for image/video generative AI. So much of using gen AI for images and videos is trial and error across iterations - manual edits -> AI generation building on them -> manual edits of the AI generations -> more AI generation, and so on - often spanning multiple non-interoperable tools (e.g. generate the original image in Midjourney, edit it locally with Krita and Stable Diffusion, then upload it to Grok to animate). Without the ability to take arbitrary image input, the tool can only be the origin point of the workflow, which doesn't amount to much, or a simple time-waster slot machine.
Personally, I do not like the notion that it's possible to arrange pixels in an illegal way that doesn't involve some other independently illegal action as a causal factor and hope that attempts to make it so fail horribly. Unfortunately, I'm not hopeful, as it seems to me that support for free speech and free thought isn't very high right now in the USA. This is also why I'm still holding out hope that video gen on local hardware will become "good enough" that private servers owned by companies like OpenAI, xAI, Google, etc. don't become effective gates for this sort of creative endeavor for the layman (or at least lay enthusiast).
The reason it gets fuzzy around the edges is because you’re dealing with the knock on effects and potential second and third party consequences.
In several countries for example you can get executed for manufacturing and selling narcotics but not for consuming them. In this case if you’re someone who’s producing that kind of content, I can see a rationale for why you’d be in the crosshairs. People who consume the content… I can see why it’s ‘somewhat’ different.
Is it? Even if you only disallowed it for uploaded images of people, that would cripple one of the most popular use categories for generative editing. My kids mostly aren't very interested in AI, but they were thrilled when Gemini finally allowed us to turn pictures of them (and their cats) into anime-cartoon style, bobblehead-doll style, kaiju-battle style and so on. And if you disallowed all uploaded images, you'd be ruining one of the easiest good ways to control the output of image generation in general.
I haven't seen that abbreviation before, so I'll explain it for lurkers: nonconsensual intimate images—a category that originally was just revenge pornography (public posting of privately-shared explicit content), but now has been expanded to include explicit and suggestive edits of publicly-posted nonsuggestive content, and sometimes even mere spotlighting of unedited publicly-posted suggestive content.
Fascinating how "consent" came to be a universal moral solvent, and by extension, a lack of consent can extend much further than any sane person might think.
That's because "consent" actually means "waives Female Privilege to profit from sex after the fact", not "accedes to".
Women cannot legally consent to sex (or any sex-adjacent activity, actually- 'revenge porn' is yet more salami-slicing away of that ability) today in any Western nation (the US is, perhaps ironically, the least far down that path- but it is still criminal). South Park made fun of this with the consent forms, but the fact that wouldn't hold up in court is actually the main issue here.
Sex with them is thus as potentially legally dangerous as it would be with a 7 year old- the group "consent" was made up to initially protect. We can see this by how laws tend to get changed so the man can't protect himself by demonstrating in court the women intended to discharge this and lied after the fact (i.e. the Jian Ghomeshi case). It's also why Western/feminist anti-prostitution laws only criminalize buying sex, not selling it.
In other words, invoking "consent" is the one-word fig leaf to cover up the fact women are blatantly abusing privileges meant for the people they claim are the most vulnerable, and to claim that if you're opposed to this abuse it's because you want 7 year olds to be raped. It's quite effective, as you can see.
Okay, let me put this question to everybody here.
Suppose women lose all sense of shame. They've sent intimate photos and videos to their boyfriend because that's how modern relationships work. Then they break up. Maybe it was a bad breakup. Former boyfriend is now pissed-off and is threatening them that unless they get back together, all their intimate photos and videos will be shared with everyone. Or maybe former boyfriend skips the threats and goes straight to uploading this on porn websites etc.
And the woman goes "Go right ahead, I don't care. Sure, send that full-frontal all-angles nothing concealed nude photo of me to my employer and my work colleagues. That video you wanted of me fucking myself with a vibrator? Yeah, send it to my granny. Hey, if you make any money off all that, remember to split it with me!"
That takes revenge porn off the table, because how can there be revenge if the blackmail element is removed? If women behave like men and are "I don't care if he's using my nudes to AI deepfake videos of me fucking dogs"? EDIT: I'm asking that in the context of the comments on here about "but what harm is really done if photos of women and children are used to create fake porn? why is this a concern? why are people worried about their images being used as masturbation material, if a guy wants to jerk off imagining a particular woman he knows, he can do that in his imagination so you can't stop it, and if you don't know why would you feel hurt about it?"
But would you like women to be like that? Or would it just be more "women are sluts who need their sexual autonomy removed and to be controlled by fathers and husbands" fuel for the fire?
EDIT: Oh, and gentlemen, if you find success with the ladies eludes you, could it be because you are neglecting your intimate hygiene? Luckily Lysol will solve that for you! Regular douching with something that makes you smell like coal tar down there will surely be irresistible!
And this is different than the general blackmail case... how, exactly (especially in the AI context)?
We already have laws to deal with this case (and in the cases where we've chosen not to have them/are prohibited from doing so, we've already made the tradeoff). You don't need another law like that, or at least, you wouldn't if this was actually about protecting people from harm and not just a case of
which is perhaps why you did exactly that in the first edit.
Given my assertion is "that's exactly what women themselves are agitating for here"? Of course, it's not really "controlled"- it's always legal for women to have sex for reasons that have a bunch to do with an echo of '70s sexual liberalism- it's just permanently illegal for men to participate in any way
First it was just sex itself, then it was sex-adjacent activities, now it's pictures (real or otherwise) of it. Salami-slicing.
Do you honestly not see why using pictures of real people to create sexual images might be offensive, even if the woman in question was sexually active? Would you not care if someone used a photo of you to create something like that and distributed it? You can call it a figleaf, but unhappily there are real guys out there who would indeed use a photo of the neighbour's four year old to create images of that child naked and sucking cock, and pass such images around.
Perhaps you don't object because if you got an AI-generated image of some hot chick you know, or a famous woman, or that bitch who refused to go out with you when you were seventeen, and her image were used for porn material you could jerk off to, you'd be quite happy to use it that way. Perhaps you wouldn't care if such images were created and disseminated of you, because what harm is done? You never really got fucked by a stallion in real life, so who cares if the kinksters are using your beach photo to show you taking horse cock? Maybe you think the guy who persuaded a 12 year old into sending him nudes, then tried to blackmail her with those until she eventually committed suicide, did nothing wrong (some people did comment along those lines before). After all, she freely gave him those images, so it served her right if he showed the world what a horny little bitch she really was, yes?
Some people do care, though. Personally, I think any woman who provides nudes or the like for a boyfriend is extremely stupid, but the betrayal there is that these were supposed to be intimate images for one specific person in the context of a relationship, not to be shared around or used to do reputational harm. That is what feels the most hurtful.
I'm cynical. Of course I think "don't trust men, they only think with their dicks and are vicious when not getting what they consider their right to get laid" but some women don't feel that way - until they get slapped in the face with it.
About that... Dave Chappelle would like a word.
Well, for some people anyway. Those of us born with the wrong parts are typically exempt from needing to provide consent, though for some reason we're still expected to obtain it.
Wow, that phenomenon seems really reminiscent of how "sexual assault" became a catchall term used when the speaker wants to create the connotation of forcible rape in contexts where the reality is some sort of harassment of a sexual nature. Likewise, "sex trafficking" became a catchall term used when the speaker wants to create the connotation of kidnapping women into sexual slavery in contexts where any level of prostitution took place. I've come to really dislike these overt attempts to engineer language, which hide covert attempts to manipulate others into believing things that one finds useful for them to believe, and I personally always just call them "child porn" and "revenge porn."
Those kinds of things were always actionable under common law tort theories, and most of the discourse around them doesn't even go that far. The problem is that regardless of legality, they're scummy things to do, and they've only seen a lot of media attention in an era when it's easier to do them. It's not that you could do this stuff in the '50s without consequence; it's that actually being able to do it wasn't really an option.
I think it's a monetization issue. OpenAI needs to optimize its revenue per unit of compute. The slop videos take a ton of compute, and the users won't spend much on them.
In contrast if they can give Disney a tool with more precise control it's worth a lot to Disney.
Look at a movie like American Sniper. It has a scene where Bradley Cooper is holding an obvious doll because the baby was acting up on set. Afterwards they decided that the expense, delays, and quality issues of fixing the scene with CGI didn't make sense.
If they could have spent $500k in tokens to fix the scene they would have.
Giving directors and editors a tool to tweak things has a lot of value.
It’s more likely that they couldn’t compete with open source models, Chinese offerings, and other companies specialised in video generation. They also bet on this strange social media app while most AI video generation users want to post their slop on YouTube, Instagram and Tiktok directly. Take a look at the average person’s feed and you’ll see how prevalent AI videos are.
Unscrupulous content creators probably prefer services that let you make videos with little to no censorship, something that OpenAI couldn’t offer.
Which open model is closest in quality?
Wan 2.2 had the lead for a while, but LTX 2.3 came out recently and might have changed that. They are pretty impressive considering the VRAM limitations they have to work within, but definitely a ways off proprietary SOTA.
I used Sora 2 for about a week when it came out, then never used it again. I might have used the app for funny video generation, but they insisted on watermarking everything and creating a social media bubble for Sora content, which was always a dumb idea. Meanwhile Sora 1, which actually had a cool discovery feed for generated images, was just sunsetted as well.
The OpenAI ecosystem sucks now in a way it didn’t 3 months ago. Claude is ascendant.
It's something I noted a few months back: OpenAI is screwed for lacking any supporting infrastructure for their core product.
They're up against companies with unlimited cash flow (Google), integrated social networks (META, xAI), or a noticeable edge in performance (Anthropic), or maybe all 3. And China.
They literally only had first-mover advantage. They gained a tiny edge by throwing out somewhat undercooked models that were nonetheless marginally better than others and got name recognition. But the cost of switching is, basically, zero.
Sora was their attempt to build some infra from scratch, a very dubious proposition. And then Sora got eclipsed by other SOTA video models. This keeps happening to them.
It looks like the Pentagon deal might have saved their bacon for the time being.
In the most important thing so far, code, Codex wipes the floor with Claude.
I fucking hate the internet. People refuse to do their due diligence, spouting off uninformed one-offs to generate click revenue without doing their goddamn due diligence or including some fucking references.
Yes, I'm ranting. Why? Because of one-off remarks on twitter claiming '"Massive investment in AI contributed basically zero to US economic growth last year," per Goldman Sachs'
Clearly, this is a big deal. Quite the claim. Wonderful. Delightful. You'd think they'd have the decency to actually include a fucking link to the article claiming this, but no. More slop for the midwits.
This did not satisfy me. So I went sleuthing and found the article that sources the claim, which is... a goddamn video interview.
For fuck's sake.
But no. The wizard is undeterred. Thankfully, my stubbornness in the face of video essays means I have semi-reliable methods for grabbing YouTube transcripts, which I promptly used to find the money quote, which I will share with you now:
Interviewer: What surprised you and what didn't in 2025?
Jan Hatzius: Yeah, what surprised us was that the increase in tariffs was bigger than what we had in our baseline assumption, closer to our risk case, but definitely a more pervasive increase in tariffs.
And that, I think, explains why US growth was weaker. The shock was bigger. But if I step back from the specific numbers, I think the title of our outlook going into 2025, which was "Tailwinds (Probably) Trump Tariffs", held up reasonably well. [...] The second driver of growth in 2026 is the fiscal boost: the tax cuts in the One Big Beautiful Bill Act, which are taking effect mainly in the first half of 2026.
And there's an aspect here of full expensing of plant and equipment, which is going to have a positive impact on the corporate sector and probably on business investment. Tax refunds for households are going to be quite a bit higher than normal. That should support consumer spending. And, you know, we're estimating that fiscal policy adds a half to three-quarters of a percentage point to growth in the first half of this year.
And then lastly, financial conditions have been easing as we've gone through most of 2025, at least post Liberation Day. And that's been for a variety of reasons, but in part because the Fed's been cutting interest rates. And that impulse to growth is also probably going to be most visible in early 2026. So all of that gives us an above-consensus growth estimate.
You'll note that one item that may be missing from this list is AI investment. We don't actually view AI investment as strongly growth-positive.
I think there's a lot of misreporting, actually, of the impact that AI investment had on US GDP growth in 2025, and it's much smaller than is often perceived, because most AI equipment is imported. That means there's a positive entry in the investment line, but that's offset by a negative entry in the net exports line. And a lot of the AI investment that we're seeing in the US adds to Taiwanese GDP, and it adds to Korean GDP, but not really that much to US GDP.
Interviewer: Can I just pause there for a moment, because your point, and I think this is in the forecast as well, is that basically AI investment has been negligible to US GDP growth in 2025. Is that a fair assessment?
JH: Yes. Basically zero. Basically zero.
Interviewer: That seems completely counter to the narrative we read almost every day in the financial media, where we hear talk about AI capex and how it's basically supporting the US economy through tariff headwinds and otherwise. Where is the disconnect between the reporting and what you're saying here?
JH: Well, again, I think some people forget that you need to look not just at investment but also at net exports, just from a GDP accounting perspective. There's another, more technical point, which is that some of the AI investment, the AI investment directly in semiconductors, isn't actually classified as investment in the national income and product accounts; it's classified as intermediate inputs. So the national income and product accounts miss that part of investment spending. So we've talked about the true GDP impact, which is still very small but a positive contributor to growth in 2025, and the measured GDP impact, which we estimate is literally zero.
Interviewer: Okay. I think that's just an incredibly important point to underscore and take forward. And then in '26 you estimate a slight increase there as investment comes more online.
JH: We do. We do, but it's still pretty small. I mean, none of what I'm saying about the significance of AI and the AI trade means that I don't think AI is important. I think it's very important, and it's obviously very important for financial markets, but the specific impulse from AI investment on GDP is still going to be pretty small, although I do think it's going to be positive to a limited degree this year.
Please excuse any odd errors in the transcript, which was taken straight from YouTube; people's direct quotes can sometimes translate oddly to text. Some of the interview has been excised for brevity, as marked by '[...]'.
I have suffered through finding all of this information, and now you must suffer through reading it. The discussion above is taken almost straight from the start of the video, should you wish to double-check the quotes.
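Hatzius's accounting argument is just the GDP expenditure identity: an imported piece of AI hardware adds to the investment line and subtracts the same amount from net exports, so measured GDP doesn't move. A toy sketch, with all figures made up for illustration:

```python
# GDP expenditure identity: GDP = C + I + G + NX.
# All figures are hypothetical, in billions of dollars.

def gdp(c, i, g, nx):
    """Compute GDP from the expenditure side."""
    return c + i + g + nx

# Baseline economy, no AI capex.
baseline = gdp(c=18_000, i=4_000, g=5_000, nx=-1_000)

# Firms import $300bn of AI hardware: investment (I) rises by 300,
# but imports rise by 300 as well, so net exports (NX) fall by 300.
with_imported_ai = gdp(c=18_000, i=4_300, g=5_000, nx=-1_300)

print(with_imported_ai - baseline)  # -> 0: measured GDP is unchanged
```

The import offset only disappears to the extent the hardware is actually built domestically, which is Hatzius's point about the spending showing up in Taiwanese and Korean GDP instead.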
And I note all of the above to wonder whether we'll see, if not an AI bubble popping this year, at least some severe evaporative cooling. It's something to question by this point, but it's always good to recall that markets can remain irrational longer than you can remain solvent.
If AI contributed nothing to GDP growth, then either there was no AI bubble or the AI bubble was largely inconsequential economically... and therefore any popping of the bubble or evaporative cooling of AI will also be inconsequential economically.
Hatzius is one of the few economists marked out for his successful predictions and is actually worth listening to; and I agree with his statements about AI.
AI is mostly a solution in search of a problem. At best it may be a somewhat useful assistant for automating various tasks, in the same way critics think of LLMs as little more than a fancy autocomplete, but fairly worthless when measured against the claims people make about it. At worst, the consumer models feel like having a fucking vagrant living in your basement: they get shoved deep into the OS and pushed into integrations that people are forced to adopt, until it feels like your system is harassing and nagging the fuck out of you to use it.
I kind of agree, though I'll add that things like electricity, printing presses, and the steam engine (which was a Greek toy in the classical era) all started out as solutions in search of problems too. AI's future will depend on whether or not someone figures out what to do with it, and there are millions trying.
How can this be?
NVIDIA's revenue for fiscal year 2026 (ending January 25, 2026) was $215.94 billion
Nvidia's revenue would be roughly 0.7% of US GDP if it were all in America. Nvidia's margin is about 75%, so 25% of revenue goes to manufacturers in Korea or Taiwan. Maybe another 25% is foreign employees and operating expenses abroad. At least 50% of Nvidia's revenue should count as US economic activity; the chips are designed in the US, after all. That's a cool 100 billion dollars, or roughly 0.3% of GDP, nothing to sneeze at.
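The back-of-the-envelope above, spelled out (the ~$30tn GDP figure and the 50% US share are the comment's rough assumptions, not audited numbers):

```python
# Rough back-of-the-envelope from the comment above.
us_gdp_bn = 30_000          # US GDP, ~$30tn, rough assumption
nvda_revenue_bn = 215.94    # Nvidia FY2026 revenue (from the post)

revenue_share = nvda_revenue_bn / us_gdp_bn      # ~0.7% of GDP

# Assume roughly half of revenue ends up as US economic activity
# (the rest flows to foreign fabs, employees, and opex abroad).
us_share = 0.5
us_activity_bn = nvda_revenue_bn * us_share      # ~$108bn
gdp_contribution = us_activity_bn / us_gdp_bn    # ~0.36% of GDP

print(f"{revenue_share:.2%} of GDP as revenue, "
      f"{gdp_contribution:.2%} as US activity")
```

Note this measures US activity attributable to Nvidia, not Nvidia's net contribution to GDP growth, which is the distinction Hatzius is drawing.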
Then there are all the other AI hardware companies like AVGO, the cloud providers like Azure or AWS, the AI companies themselves.
How could people possibly be building these gigantic datacentres and not have that picked up in GDP? https://youtube.com/watch?v=VLgDvjcvURc
If GDP is somehow not measuring the impact of AI investment then so much the worse for the GDP calculators I think. And it's not even clear that this is the case, EY seems to disagree:
https://www.ey.com/en_us/insights/ai/ai-powered-growth
I don't know if EY knows better than Goldman Sachs, I don't have a high opinion of either. However, I think that there are lots of people who want to hear that the bubble is popping and grasp for any sign that it is.
Imagine that it was the 1910s and tractors were the big new thing. Obviously tractors raise agricultural productivity. But maybe they're kind of unreliable, maintenance for this new technology is a bitch, maybe the methods for using them aren't well-established, maybe there's some difficult soil where the tractors get bogged down, maybe fuel distribution in the countryside isn't well-developed. There are lots of conservative farmers around. One could easily produce convincing anti-tractor arguments and examples. But in general, tractors would still remain the future of agriculture, profitable to produce and use. You could derive this from first principles, considering the power of engines and their utility vs horses. There was no tractor bubble, there is no AI bubble.
There actually was a tractor bubble in the US in the 1920s. From 1920 to 1921, the industry imploded and production dropped by more than half. The reason for this was twofold. The first was that speculators looked at sales numbers and treated tractors like they were consumables instead of durable goods. The second is that increased efficiencies through improved tractor designs reduced the demand for tractors.
Economic bubbles deflating don't necessarily kill technologies. Despite the tractor bubble popping, the technology stayed around and continued to develop at a reasonable pace. I think that's what's going to happen with AI. It'll be a useful technology, but today's players won't necessarily be tomorrow's winners.
I'm not so knowledgeable for tractors specifically, but there definitely were car bubbles. When people realized that cars have reached a state close to horse-drawn carriages, will predictably become even better in the near future, and probably replace horses altogether, a host of new companies with various variants of car technology cropped up. Of course, as you should know, almost all modern cars have used the same basic style of engine, the internal combustion engine (and in fact, mostly a specific kind of it).
But it wasn't always so. There was a variety of companies with their own engine designs that overwhelmingly failed. And of course various companies experimented with non-engine related designs that also didn't work out. Some companies successfully made the switch early enough, but many just failed. You could do everything right, correctly predict the dominance of cars, invest in a reasonable company, and lose absolutely everything anyway.
The main difference now is that many of the current competitors are already giants so they can write off a lot of losses without going broke, and it's unclear whether governments will even allow them to outright fail. But a bubble popping on several of them (or even all of them - maybe the real breakthrough will come from a smaller competitor, though I consider it very unlikely) and them losing substantial valuation seems like a foregone conclusion, even if I think that eventually AI will be a technology of the future.
My understanding is that something similar happened for railways, even without the significant differences in engines (?). Revolutionary technology that generated multiple bubbles along the way that bankrupted many people.
A technology can end up being dominant and important but still throw off bubbles in the meantime. Stuff like the early Dotcom bubble where everybody could see that the internet was going to be important, and yet funds were allocated to the wrong manifestations of that. Alternatively, some tech is hugely meaningful and yet a motherfucker to actually capitalize on as an individual private company.
But why is this even the right frame of looking at things? If it looks really powerful and useful, probably it is just really valuable. There might be reasons why it isn't but they should be specific.
What has tech been doing for the last 10 years besides AI? I can't think of any great improvement between 2012 and 2022. More irritating ads, having to subscribe to Microsoft Office, VR headsets with a handful of good games... Incremental advancements at best.
But now if I want to do something with a computer, I can get an AI to give me precise instructions or just do it outright. That's a genuine improvement. I don't have to wade through forums or oceans of SEO and ads with a search engine. I don't have to learn to code to make and sell code commercially, and even the people who code for a living seem to enjoy it and find it very helpful. I can just have it output niche, highly specific pieces of writing just for me, on a whim.
The technology sector has finally contributed something positive after about 10-20 years of resting on their laurels and now people are complaining about a bubble, it seems bizarre to me. This is putting aside all the scientific innovation and prospects of superintelligence.
Cisco had a P/E of 200 back in the day, Nvidia is in the 30s and is the largest company in the world. Where is the bubble? Is it just isolated to the AI pureplay companies like Anthropic or OpenAI who aren't profitable because of all the R&D they're doing?
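One way to make that P/E comparison concrete is via earnings yield, the inverse of P/E; the figures below are the comment's ballpark numbers, not current market data:

```python
# Earnings yield = earnings / price = 1 / (P/E).
cisco_dotcom_pe = 200   # Cisco near the dot-com peak (per the comment)
nvda_pe = 35            # Nvidia today, "in the 30s" (per the comment)

cisco_yield = 1 / cisco_dotcom_pe   # 0.5%: $200 paid per $1 of earnings
nvda_yield = 1 / nvda_pe            # ~2.9%: a far less stretched valuation

print(f"Cisco: {cisco_yield:.1%}, Nvidia: {nvda_yield:.1%}")
```

The comparison assumes current earnings are sustainable, which is exactly what bubble skeptics dispute.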
I know it's cliched to say 'this time it's different' but if it looks different, then it probably is different? Maybe OpenAI has just gone down too many paths and finds that short-form video is not cost-effective since it caters to a population of poor people trying to game porn restrictions and upsets influencers/artists who love to hate AI video. Whereas selling coding AI actually makes lots of money, so they're redirecting compute for business use. In my experience it's far harder to make a profit selling AI to consumers than it is to businesses.
You're missing what a bubble is.
A technology can have profound long-term effects that are positive for everybody, yet still generate a bubble for investors who rush to try and take advantage of that effect and don't end up actually investing in the correct manifestation to make any money. The bubble is from the POV of those attempting to speculate on something; the actual relevant technology is immaterial.
It's always a bit sad to watch all the artcels and redditors longing for "the bubble to be over". They're not talking about inflated stocks, but about the technology. As if the whole tech is just going to go away or go backwards and stop taking their jobs. They don't seem to understand that yes there is a speculative bubble and it may burst to some extent, but the tech is not the bubble and it will only grow bigger and better.
@RandomRanger I don't think the bubble is in NVDA so much as in the multitudes of small speculative startups that attract money based on them supposedly developing or using AI, or the push in all sorts of companies to be part of the trend.
But if those small companies go under, who cares? It won't have much economic effect, startups go under all the time. That's capitalism.
It's the hyperscalers who are too big and rich to go under and Nvidia that matter.
Suppose OpenAI collapses because of all their debt and spending. There'd just be a feeding frenzy as Microsoft just takes over their researchers, Anthropic and Google gain marketshare, maybe Chinese companies also make gains.
I guess that's what you're saying about the tech not going away but I don't see how anything would significantly change. People would just go to Gemini instead of 'Chat'. Kling or Seedance or Grok Imagine instead of Sora. What would the bubble popping actually mean?
It would mean a lot of shareholders losing a lot of money. Same as in the dotcom crash. That's who will care. The fact that it probably won't be your problem does not mean it does not/will not exist.
They believe gen AI will be like NFTs: a technology the general consensus now holds was overhyped and had limited use cases, mocked both at the time and in hindsight. They believe gen AI has few to no actual use cases and is essentially useless technology that wastes water, electricity, and compute to create text and imagery that is unreliable and serves no function.
They’re wrong, but they’re very confident. In the case of gen AI images and video, they have enough numbers that they’ve been able to make gen AI art controversial and low-status, which in creative fields is a death sentence. They believe they can combine that power with generative AI becoming more expensive to use because of a burst bubble ending cheap generation for end-users to make it both low-status and expensive.
That said, I get the sense that uptake of AI generated artwork is slowly growing in the corporate space, particularly as an adjunct or aid to human design instead of a replacement.
A line found in the dictionary entry for redditor. :P
In the late 1940s, something like 96% of respondents in an American survey said they would never buy a TV. One generation later, everyone did.
While it is true that there is a lot of low quality slop and baseless hype/marketing of that slop, the potential for AI/agents to soon replace a lot of human labor should be obvious. Maybe that's the painful part that it is tempting or emotionally necessary to sweep under the rug.
Indeed, we can see that when the dot-com bubble burst, internet technology and adoption did not step back; it kept accelerating. It was probably even helped in the long term by capital becoming more careful and not just throwing millions and millions at every e-commerce venture under the sun.
But... Maybe those people are more hoping for the bubble to burst not because it will save them, but entirely because they believe it will hurt the "tech bros" they see as responsible for their current predicament.
Motion tracking got much better, I think. That and the VR improvements might just be due to cheaper hardware related to phone tech? I’m not really sure.
Drones (consumer or otherwise…) advanced a lot in the same period. Everything from cheap commercial filming to light shows to endpoint package delivery. This is some combination of software + control hardware + insane battery developments, which are in turn related to the adoption of electric cars. Compare also the advances in prestige robotics, like BD’s Atlas and Spot.
Related, computer vision looks completely different than it did when I was in school. Even excluding AI!
Additive manufacturing has taken off. Prototyping, obviously, but also aviation and electronics fab. It’s become vastly more accessible to hobbyists, too.
Video calling exploded for obvious reasons. I think it had the opportunity to catch up with other advancements in streaming video. Speaking of which, social media accelerated a lot in the late 2010s. TikTok launched in 2016; all the American tech companies joined the party.
Oh right. Commercial spaceflight.
This may be a canary in the coal mine, because it sits at the apex of the structural problems transformers have as a product.
First, the operating costs must have been enormous. Video processing is among the most costly things you can do on a computer, across the full spectrum, and it's involved at every layer of processing here. YouTube has famously been a loss leader forever on the storage costs alone, and though the usage pattern is vastly different, I can't imagine operating the datasets and training and distributing video models is much cheaper.
Second, open-weight (or just Chinese) models eat up your moat and don't have the limitations a company is ultimately tied to for PR or legal reasons. In a funny twist of fate, it seems pretty easy to steal back all the knowledge one has stolen by pirating all the movies in the world through "distillation attacks". So innovating on public models can end up just spending money for your competition.
Third, finding product market fit can be tricky even for something as technically impressive as generative AI. It's not clear how this is useful at all for anything but shitty memes and fraud, and hybrid approaches that don't rely on fully generative techniques like DLSS5 could end up being far more practical both to use and to make.
All of these are problems every lab is facing right now and must solve before the money runs out. That we see people cutting back where the costs hit hardest may be a sign that, at the least, the money no longer feels infinite.
I don’t see why video generation is a canary. The ideal use case for AI is in business applications, not generating weird videos of copyrighted characters doing random things. Sora was at best a sort of novelty act, something to show off the potential of the technology, much like the chatbots. When even non-tech people are able to use it and do kind of cool stuff with it, it generates demand for the product in other contexts. Getting Sora to generate Garfield in a fighter jet eating sushi, in seconds, puts it in the heads of people making business decisions that AI can do a lot of creative and inventive things quickly.
The point is that every end user AI product is subsidized and video was just the most subsidized (because indeed it didn't really have a huge consumer market) so it goes first.
I would expect the next thing to go to be the more extravagant loss leaders in less intensive applications.
It's possible that this is a very specific failure: that arbitrary video generation has no meaningful commercial use cases and very limited recreational use cases. I used Sora very briefly when it came out and found it quite limited, with the substantial cost preventing me from toying around to figure out what works.
It's also possible that OpenAI is strapped for cash as the AI financial bubble approaches a tipping point. OpenAI's compute commitments are based on forecast exponential revenue growth. If growth is linear or only barely superlinear, then the whole thing falls apart, with everyone holding on as best they can until the next equity sale.
NVIDIA stock is flat over the last 18 months by the way.
Even if they aren't running out of money, they are trying to shape up for an IPO this year. Having a boat-anchor like Sora on the books probably wouldn't look good in an S-1.