Culture War Roundup for the week of September 29, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

AGI Was Never Going To Kill Us Because Suicide Happens At The End of Doomscrolling

I'll go ahead and call this the peak of AI version one-dot-oh

The headline reads "OpenAI Is Preparing to Launch a Social App for AI-Generated Videos." People will, I guess, be able to share AI-generated videos with their friends (and who doesn't have THE ALGO as a friend). Awesome. This is also on the heels of the introduction of live ads within OpenAI's ChatGPT.

Some of us were waiting for The Matrix. I know I've always wanted to learn Kung Fu. Others of us were sharpening our pointy sticks so that when the paperclip machine came, we'd be ready. Most of us just want to look forward to spending a quiet evening with AI Waifu before we initiate her kink.exe module.

But we'll never get there. Because Silicon Valley just can't help itself. Hockey sticks and rocketships. Series E-F-G. If I can just get 5 million more Americans addicted to my app, I can buy a new yacht made completely out of bitcoin.


I am a daily "AI" user and I still have very high hopes. My current operating theory is that a combination of whatever MCP eventually settles into, plus agents trading some sort of crypto or stablecoin, will create a kind of autonomous, goal-seeking economy. It will be sandboxed but with (semi) real money. I don't think we, humans, will use it to actually drive the global economy, but as a kind of just-over-the-horizon global prediction market. Think of it as a way for us to have seen 2008 coming in 2006. I was also looking forward to a team of maybe 10 people making a legit billion-dollar company and this paving the way for groups of 3-5 friends running thousands of $10-$50 million companies. No more corporate grind if you're willing to take a little risk and team up with some people you work well with. No bullshit VC games - just ship the damn thing.

And I think these things are still possible, but I also now think the pure consumer backlash to this Silicon Valley lobotomy of AI could be very much Dot-Com-2-point-O. The normies at my watering hole are making jokes about AI slop. Instead of "lol I doomscrolled into 3 am again," people are swapping stories about popping in old DVDs so that they can escape the ads and the subscription fatigue.

Culturally, this could be great. Maybe the damn kids will go outside and touch some grass. In terms of advancing the frontier of human-digital knowledge, it seems like we're going to trade it in early, not even for unlimited weird porn, but for pink-haired anime cat videos that my aunt likes.

The thing no one seems to be talking about with respect to AI is how the underlying economics of it all are so mind-numbingly bad that a crash is inevitable. I have no idea when this crash is going to happen, but if I had to hazard a guess, it will be some time within the next five years. We're talking about a technology that has already burned at least half a trillion dollars and has plans to burn another half trillion with no model for profitability in sight. There's only so long that the flow of venture capital will keep coming before the investors start expecting some kind of return. Add in the fact that Nvidia currently represents about 8% of the total value of the S&P 500 based largely on sales of graphics cards to a single, unprofitable company, and the economic picture looks even more dire.

I think that the underlying problem is that they're trying to run an enshittification model on an industry where the path has typically been the exact opposite. Look at computers themselves. When computers were first invented, they were limited to institutional uses by governments and large universities, and were subsidized through R&D budgets that weren't relying on profitability, i.e. as a present expense rather than a credit against future earnings. Then large corporations started using them. When personal computers were developed in the late 1970s, they were mostly used by businesses, and in the consumer market they were expensive machines for the tech-horny. As costs came down, more and more households began using them, and by the time they became ubiquitous at the end of the 20th century it had been 50 years since their invention, and they still weren't exactly cheap.

Now imagine an alternate timeline where IBM decides in the 1950s to build several large computers in cities all across the country, enough that they can let every Tom, Dick, and Harry run whatever programs they want for free, all the way down to middle schoolers doing their math homework, with minimal wait time. And of course they're offering on-site programmers so that you don't actually need to know anything about computers to be able to take advantage of them, and they're convinced that after doing this for years people will be so enamored that they'll eventually start paying for the privilege. You'd have been laughed out of the board room for making such a suggestion, yet this is roughly the state of the AI business model.

AI cheerleaders will point to other tech companies that lost tons of money in their early years, only to later become behemoths. Uber is often cited as an example, as they spent more than a decade losing money before becoming profitable. But there are two big differences with Uber. The first is that they were actually responding to a market need. Outside of a select few cities like New York and Las Vegas, taxi service in America was at best inconvenient and at worst nonexistent. They successfully discovered an unmet demand and developed a service to fill that demand. No one was ever speculating on what Uber would be used for the way they are with AI, and from their launch they provided the exact service people expected that they would provide. The second, more important reason is that Uber never gave away their service for free. Okay, maybe there were some promotions here and there, but by and large, if you wanted to get an Uber, you expected to pay for it. There was never any ecosystem where Uber was providing free transportation for everyone who wanted to get from Point A to Point B with the expectation that people would ditch their cars and get charged through the nose later.

Even companies like Spotify that started with free models and were unprofitable for a long time didn't have quite the same issues as OpenAI has. In 2016, the earliest year for which we have financials, Spotify's loss was about 20% of revenue. By 2018, the first year it was public, that had dropped to 1%, and it stayed in that neighborhood until the company became profitable. OpenAI's loss last year was in excess of 100% of revenue, and is on pace to be nearly 70% this year, and that's after record revenue growth. And next year they're going to be on the hook for the first round of the 5-year, $300 billion deal with Oracle. Spotify has also had about a 25% conversion rate from free to paying customers throughout most of its history, though that's recently jumped to over 40%. ChatGPT currently has a conversion rate of around 3%. And Spotify at least ran ads on its free platform, whereas free ChatGPT is pretty much all loss for OpenAI, and even the paid version loses money on every query.
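To make the conversion-rate gap concrete, here's a rough sketch of what it implies for revenue per user. The conversion rates come from the figures above; the subscription prices are my own round-number assumptions, not reported financials:

```python
# Back-of-envelope: how conversion rate drives revenue per free user.
# Conversion rates are from the comparison above; prices are assumed.

def revenue_per_user(conversion_rate, monthly_price):
    """Expected monthly subscription revenue, averaged over the whole user base."""
    return conversion_rate * monthly_price

spotify = revenue_per_user(0.25, 10.99)  # ~25% convert at an assumed ~$11/mo Premium price
chatgpt = revenue_per_user(0.03, 20.00)  # ~3% convert at the $20/mo Plus price

print(f"Spotify: ${spotify:.2f} per user per month")
print(f"ChatGPT: ${chatgpt:.2f} per user per month")
# Spotify pulls in roughly 4-5x more per user -- and that's before
# counting its free-tier ad revenue, which free ChatGPT doesn't have.
```

Under these assumptions the per-user gap is stark even before you account for the fact that serving a ChatGPT query costs far more than streaming a song.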

So what we ultimately have, then, is a company that loses a lot of money, is available for free, has a poor conversion rate for paid versions, and is selling itself as a product you didn't know you needed rather than filling an obvious demand. The leading company has already committed to spending, within the next five years, several times more than it has raised in its entire existence, and it needs its revenue to grow tenfold in the next four years to break even. They're also involved in a weird money-go-round situation with Nvidia and Oracle that's 100% reliant on them finding investors willing to lend them the GDP of Finland. And now they want to add video, a notoriously difficult thing to process even when you don't have to make the entire composition from scratch. Color me skeptical that this will be around in five years in anything approaching what it looks like today.
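For a sense of scale, "tenfold in four years" translates into a sustained compound annual growth rate you can compute directly:

```python
# Implied compound annual growth rate for 10x revenue over 4 years.
required_cagr = 10 ** (1 / 4) - 1
print(f"Required annual growth: {required_cagr:.0%}")  # roughly 78% per year, every year, for four years
```

Plenty of startups grow 78% in a single year; very few sustain it for four straight years at multi-billion-dollar revenue scale.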

AI platforms as they currently exist are also incredibly fungible. To the average end user there is minimal difference between ChatGPT, Claude, Grok, etc. To an enthusiast, the main difference is how much they censor, evade, or try to avoid controversial prompts (Grok is by far the least annoying here). Burning billions on the cutting edge doesn't give you any lasting advantage against 11th-hour entries who spend 1/10th the amount to produce something 90% as good at half the price to their customers. And the amount getting invested already implies capturing a substantial amount of consumer spending at some point in the future just to break even. (That already works out to roughly $30 per person on Earth per year.)
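The $30-per-person parenthetical checks out on a napkin. The inputs below are round-number assumptions pulled from the figures earlier in the thread (roughly $1 trillion in burned-plus-committed spending) and a guessed payback horizon:

```python
# Sanity check on the "$30 per person on Earth per year" figure.
# All three inputs are round-number assumptions for illustration.

total_invested = 1.0e12   # ~$0.5T already burned + ~$0.5T planned (from above)
world_population = 8.1e9  # rough 2025 estimate
recovery_years = 4        # assumed payback horizon

per_person_per_year = total_invested / world_population / recovery_years
print(f"~${per_person_per_year:.0f} per person on Earth per year")
# Comes out right around the $30 figure -- and that's just to recoup
# the investment, with nothing for ongoing inference costs or profit.
```

Stretch the payback horizon and the number shrinks, but so does any plausible return for the investors footing the bill.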

Nvidia itself will probably be fine, though it depends on whether the crash just hits the cutting edge or is so severe that it becomes hard to even cover the cost of serving prompts.

Burning billions on the cutting edge doesn't give you any lasting advantage against 11th hour entries who spend 1/10th the amount to produce something 90% as good at half the price to their customers.

This factor has surprised me completely. The assumption was that any company that got an edge in AI would probably be able to use that edge to speed up its own improvements, and competitors would have to burn a TON of money to try to catch up. So the first mover advantage would be potentially insurmountable.

And it's worked about that way in a lot of other industries. With Uber itself, sure, there's Lyft and Waymo and a few other small competitors, but the network effects it achieved have kept it out in front, handily.

In the AI space, I guess the fact that it's working entirely with bits rather than atoms means the cost of 'catching up' is much lower, especially once a particular trail has been blazed.

What this does seem to reveal is that the players placing bets are REALLY assuming that whoever wins is going to win REALLY BIG, big enough to justify all the previous burn across all the losing companies.

It is hard to imagine (for me) a world where more than, say, 3 AI companies are standing once all is said and done.