This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

AGI Was Never Going To Kill Us Because Suicide Happens At The End of Doomscrolling
I'll go ahead and call this the peak of AI version one-dot-oh.
The headline reads "OpenAI Is Preparing to Launch a Social App for AI-Generated Videos." People will, I guess, be able to share AI-generated videos with their friends (and who doesn't have THE ALGO as a friend?). Awesome. This also comes on the heels of the introduction of live ads within OpenAI's ChatGPT.
Some of us were waiting for The Matrix. I know I've always wanted to learn Kung Fu. Others of us were sharpening our pointy sticks so that when the paperclip machine came, we'd be ready. Most of us just want to look forward to spending a quiet evening with AI Waifu before we initiate her kink.exe module.
But we'll never get there. Because Silicon Valley just can't help itself. Hockey sticks and rocketships. Series E-F-G. If I can just get 5 million more Americans addicted to my app, I can buy a new yacht made completely out of bitcoin.
I am a daily "AI" user and I still have very high hopes. My current operating theory is that a combination of whatever MCP (the Model Context Protocol) eventually settles into, plus agents trading some sort of crypto or stablecoin, will create a kind of autonomous, goal-seeking economy. It will be sandboxed, but with (semi-)real money. I don't think we humans will use it to actually drive the global economy, but as a kind of just-over-the-horizon global prediction market. Think of it as a way for us to have seen 2008 coming in 2006. I was also looking forward to a team of maybe 10 people building a legit billion-dollar company, and this paving the way for groups of 3-5 friends running thousands of $10-50 million companies. No more corporate grind if you're willing to take a little risk and team up with some people you work well with. No bullshit VC games - just ship the damn thing.
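Purely to make that picture concrete, here's a toy sketch of what a sandboxed "agent prediction market" could look like. Everything in it is hypothetical: the agent names, the pricing rule, and the beliefs are all invented for illustration, and MCP itself (a tool-calling protocol) specifies none of this.

```python
import random

# Toy sketch of a sandboxed "agent prediction market". All names, rules,
# and numbers here are invented for illustration; MCP itself is a
# tool-calling protocol and specifies nothing like this.

class Agent:
    def __init__(self, name, belief):
        self.name = name      # hypothetical agent id
        self.belief = belief  # the agent's probability estimate for some event

    def demand(self, price):
        # Buy (+1) if the market looks cheap relative to belief, else sell (-1)
        return 1.0 if self.belief > price else -1.0

def run_market(agents, rounds=200, price=0.5, step=0.01):
    """Naive tatonnement: nudge the price toward net demand each round."""
    for _ in range(rounds):
        net = sum(a.demand(price) for a in agents)
        price = min(max(price + step * net / len(agents), 0.01), 0.99)
    return price

agents = [Agent(f"agent-{i}", belief=random.betavariate(2, 5)) for i in range(100)]
print(f"market-implied probability: {run_market(agents):.2f}")
```

The point is just the shape: agents holding sandboxed money push a price toward a consensus probability, which is the "seeing 2008 from 2006" use case.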
And I think these things are still possible, but I also now think the pure consumer backlash to this Silicon Valley lobotomy of AI could be very much Dot-Com-2-point-O. The normies at my watering hole are making jokes about AI slop. Instead of "lol I doomscrolled until 3 am again," people are swapping stories about popping in old DVDs so they can escape the ads and the subscription fatigue.
Culturally, this could be great. Maybe the damn kids will go outside and touch some grass. In terms of advancing the frontier of human-digital knowledge, it seems like we're going to trade it in early, not even for unlimited weird porn, but for the pink-haired anime cat videos that my aunt likes.
The thing no one seems to be talking about with respect to AI is how the underlying economics of it all are so mind-numbingly bad that a crash is inevitable. I have no idea when this crash is going to happen, but if I had to hazard a guess, it will be some time within the next five years. We're talking about a technology that has already burned at least half a trillion dollars and has plans to burn another half trillion, with no model for profitability in sight. There's only so long that the flow of venture capital will keep coming before the investors start expecting some kind of return. Add in the fact that Nvidia currently represents about 8% of the total value of the S&P 500 based on sales of graphics cards to a single, unprofitable company, and the economic picture looks even more dire.
I think that the underlying problem is that they're trying to run an enshittification model on an industry where the path has typically been the exact opposite. Look at computers themselves. When computers were first invented, they were limited to institutional uses by governments and large universities, and were subsidized through R&D budgets that weren't relying on profitability, i.e. as a present expense rather than a credit against future earnings. Then large corporations started using them. When personal computers were developed in the late 1970s, they were mostly used by businesses, and in the consumer market they were expensive machines for the tech-horny. As costs came down, more and more households began using them, and by the time they became ubiquitous at the end of the 20th century it had been 50 years since their invention, and they still weren't exactly cheap.
Now imagine an alternate timeline where IBM decides in the 1950s to build several large computers in cities all across the country, enough that they can let every Tom, Dick, and Harry run whatever programs they want for free, all the way down to middle schoolers doing their math homework, with minimal wait time. And of course they're offering on-site programmers so that you don't actually need to know anything about computers to be able to take advantage of them, and they're convinced that after doing this for years people will be so enamored that they'll eventually start paying for the privilege. You'd have been laughed out of the board room for making such a suggestion, yet this is roughly the state of the AI business model.
AI cheerleaders will point to other tech companies that lost tons of money in their early years, only to later become behemoths. Uber is often cited as an example, as they spent more than a decade losing money before becoming profitable. But there are two big differences with Uber. The first is that they were actually responding to a market need. Outside of a select few cities like New York and Las Vegas, taxi service in America was at best inconvenient and at worst nonexistent. They successfully discovered an unmet demand and developed a service to fill that demand. No one was ever speculating on what Uber would be used for the way they are with AI, and from their launch they provided the exact service people expected them to provide. The second, more important reason is that Uber never gave away their service for free. Okay, maybe there were some promotions here and there, but by and large, if you wanted to get an Uber, you expected to pay for it. There was never any ecosystem where Uber was providing free transportation for everyone who wanted to get from Point A to Point B with the expectation that people would ditch their cars and get charged through the nose later.
Even companies like Spotify that started with free models and were unprofitable for a long time didn't have quite the same issues that OpenAI has. In 2016, the earliest year for which we have financials, Spotify's loss was about 20% of revenue. By 2018, the first year it was public, that had dropped to 1%, and it stayed in that neighborhood until the company became profitable. OpenAI's loss last year was in excess of 100% of revenue, and is on pace to be nearly 70% this year, and that's after record revenue growth. And next year they're going to be on the hook for the first round of the 5-year, $300 billion deal with Oracle. Spotify has also had about a 25% conversion rate from free to paying customers throughout most of its history, though that's recently jumped to over 40%. ChatGPT currently has a conversion rate of around 3%. And Spotify at least ran ads on its free platform, whereas free ChatGPT is pretty much all loss for OpenAI, and even the paid version loses money on every query.
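To put those ratios in dollar terms, here's quick back-of-the-envelope arithmetic. The $12 billion revenue figure is the 2025 projection that comes up later in the thread; everything else is taken from this comment:

```python
# Back-of-the-envelope using the figures cited above.
openai_revenue_2025 = 12e9   # projected 2025 revenue (cited later in the thread)
openai_loss_ratio = 0.70     # "on pace to be nearly 70% this year"
print(f"implied 2025 loss: ${openai_revenue_2025 * openai_loss_ratio / 1e9:.1f}B")

# The conversion-rate gap, per 100 free users:
spotify_conversion = 0.25    # ~25% historically (recently over 40%)
chatgpt_conversion = 0.03    # ~3%
print(f"paying users per 100 free: Spotify ~{spotify_conversion * 100:.0f}, "
      f"ChatGPT ~{chatgpt_conversion * 100:.0f}")
```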
So what we ultimately have, then, is a company that loses a lot of money, is available for free, has a poor conversion rate for paid versions, and is selling itself as a product you didn't know you needed rather than filling an obvious demand. The leading company has already committed to spending, within the next five years, several times more than it has raised in its entire existence, and it needs its revenue to grow tenfold in the next four years just to break even. They're also involved in a weird money-go-round situation with Nvidia and Oracle that's 100% reliant on them finding investors willing to lend them the GDP of Finland. And now they want to add video, a notoriously difficult thing to process even when you don't have to make the entire composition from scratch. Color me skeptical that this will be around in five years in anything approaching what it looks like today.
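For scale, "grow tenfold in four years" pins down the required growth rate exactly; this is pure arithmetic on the comment's own figure:

```python
# Compound annual growth rate implied by 10x revenue in 4 years
required_cagr = 10 ** (1 / 4) - 1
print(f"required growth: {required_cagr:.1%} per year")  # ~77.8% per year, sustained
```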
To push back on a few things:
I'm pretty sure I remember reading on Zvi's Substack that the unit economics per token are actually quite good. So the AI companies could actually make money selling their current AIs. The money-burn issue is because they're trying to hyperscale and are dumping epic amounts of cash into R&D.
Again, a memory from reading Zvi, but I'm pretty sure the vast majority of the AI giants' revenue is from API tokens anyway, not subscriptions.
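For what "good unit economics per token" would mean, here's a minimal sketch with entirely made-up numbers (real serving costs and API prices aren't public and vary by model; only the shape of the calculation matters):

```python
# Hypothetical per-token economics; both figures are invented for illustration.
price_per_1m_tokens = 10.00  # what the API charges per million tokens (assumed)
cost_per_1m_tokens = 2.50    # GPU + energy cost to serve them (assumed)

gross_margin = (price_per_1m_tokens - cost_per_1m_tokens) / price_per_1m_tokens
print(f"gross margin on inference: {gross_margin:.0%}")  # 75% under these assumptions

# On this picture the losses come from everything that isn't inference:
# training runs, research headcount, and datacenter buildout.
```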
I'm not really sure how to differentiate these. Prior to Uber's existence I was annoyed by taxicabs and still took them when I needed to go somewhere. Once Uber existed, I learned that a functional taxi app was infinitely better and started using that instead. If I had known a slick taxi app was something I needed pre-Uber, I could have invented Uber.
ChatGPT/generative AI is now incredibly useful in my life. Google got slopified before ChatGPT 3.5 was even released; I was already appending "Reddit" to every search. I had a demand for not-shit search, and now ChatGPT fills it (among so many other things).
I use it at work all the time too for a variety of things (consulting). It's really, really useful and if every GenAI model was Thanos snapped out of existence tomorrow, my life would get more annoying and my work output per hour would decrease.
I still don’t understand the enshittification model.
There are plenty of reasons to degrade your user experience: increasing revenue through ads or merch or predatory monetization, or decreasing costs by cutting complicated features and tech support. But the central examples of enshittification aren't doing those things. They're paying more to add features that people don't want, to adopt patterns that don't seem like they should make more money.
I mean, maybe I’m just wrong. Maybe spamming AI news articles on the lock screen really does get more people to buy Windows. But…why? How?
A couple of causes. The first, that ThomasdelVasto gets at, is that you as a manager have developers, you have a domain space in the business, and you need to generate an endless stream of work to justify the continued existence of your position and the positions of your direct reports. When all you own is a hammer, you are very incentivized to find an endless world of nails.
The other is that top-level leadership across tech currently has no vision, and when leadership doesn't have vision they instead default to bullet-point lists of "stuff". This cycle repeats itself endlessly: a company is created to solve some problem (like JIRA/Kanban apps all sucking), succeeds (like Trello), loses its divine spark (built the thing, was successful enough to get bought out by Atlassian), and succumbs to endless feature bloat just like all of its predecessors. The "stuff" lists are also how vendor software, security practices, etc. are generally decided. A committee of mixed interests will always go with the vendor that meets the most bullet points on the list, regardless of the product's performance, intuitiveness, or other soft metrics that can't be meaningfully added to a checklist.
Feature bloat is for internal management and employees to make themselves look good to higher ups, and get promoted.
I think the model is less that those chatbots will be the face of profit-making for AI companies. The people using the bots now are unpaid trainers, not the future end users. Every issue that comes up now can be fixed once the bot gets a correction from the freeware users, but that's not a very lucrative user base anyway. The best use case for such bots is actually business-to-business. Maybe Walmart wants to use one in its app to help customers find a product that fixes a problem they have, or to tell you where something is; they'd probably want to buy a license to incorporate the bot into their app. Maybe Apple wants to replace their social media team with OpenAI-based solutions. Or the CEO of Tesla wants to use AI to suggest improvements to their car line. In those cases, a good, useful bot would be an effective and efficient solution, probably worth a good deal of money to them (if for no other reason than it reduces headcount), and they will pay for it.
I just encountered a business whose product is "AI renter harassment". Imagine a chatbot that pretends to be a person, annoys your renters with frequent reminders that the rent is due, and then keeps hassling them for up to three months after move-out!
Can't wait for the counter-offer, "AI creditor deflection".
Let the arms race begin…
AI platforms as they currently exist are also incredibly fungible. To the average end user there is minimal difference between ChatGPT, Claude, Grok, etc. To an enthusiast, the main difference is how much they censor, evade, or try to avoid controversial prompts (Grok is by far the least annoying here). Burning billions on the cutting edge doesn't give you any lasting advantage against 11th-hour entrants who spend a tenth the amount to produce something 90% as good at half the price to their customers. And the amount being invested already implies capturing a substantial chunk of consumer spending at some point in the future just to break even (already something like $30 per person on Earth per year).
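Quick sanity check on that $30-per-person parenthetical; the only outside number is a rough world population:

```python
# What "$30 per person on Earth per year" implies as a revenue target
world_population = 8.1e9  # rough 2025 estimate
per_person = 30.0         # the figure from the comment above
print(f"~${world_population * per_person / 1e9:.0f}B in revenue per year")  # ~$243B
```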
Nvidia itself will probably be fine, though that depends on whether the crash just hits the cutting edge or is so severe that it becomes hard to even cover the cost of serving prompts.
This factor has surprised me completely. The assumption was that any company that got an edge in AI would probably be able to use that edge to speed up its own improvements, and competitors would have to burn a TON of money to try to catch up. So the first mover advantage would be potentially insurmountable.
And it's worked about that way in a lot of other industries. With Uber itself, sure, there's Lyft and Waymo and a few other small competitors, but the network effects it achieved have kept it out in front, handily.
In the AI space, I guess the fact that it's working entirely with bits rather than atoms means the cost of 'catching up' is much lower, especially once a particular trail has been blazed.
What this does seem to reveal is that the players placing bets are REALLY assuming that whoever wins is going to win REALLY BIG, big enough to justify all the previous burn across all the losing companies.
It is hard to imagine (for me) a world where more than, say, 3 AI companies are standing once all is said and done.
Almost every claim in this paragraph is incorrect. Every model OpenAI has trained was profitable; GPT-3 and GPT-4 both almost certainly made back their training cost. They are pouring an incredible amount more into R&D, of course, but that's kind of the point: the market for this stuff is actually red hot. And I genuinely have no clue why you think Nvidia only sells to one company; estimates put the share of Nvidia's chips sold to Microsoft (including OpenAI) at about 23%, Google at 12%, Amazon at 13%, Meta at 11%, and xAI at around 10%.

I don't understand it, but there are a crazy number of tech companies purchasing API calls to (outdated versions of?) GPT. The corporate market is definitely hot.
Can either of you actually point at some numbers with respect to profitability?
We don't have access to specific numbers. We know that GPT-3 cost somewhere around $5 million in compute to train, and that OpenAI's revenue in 2020 and 2021, when GPT-3 and its derivatives (pre-3.5) were the primary product, was $3.5 million and $28 million respectively. Once you get to the era of many competing models, and then need to factor in what their actual margin on revenue is, it becomes more muddled, but their projected revenue for 2025 is $12 billion and the trend looks exponential. Maybe adoption and usage fall off, but the doom and gloom that they aren't finding buyers is just kind of nonsense.
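Putting those numbers side by side (this compares revenue to training compute only; inference costs aren't public, so it bounds rather than proves the "made back its training cost" claim):

```python
# GPT-3-era figures from the comment above.
training_cost = 5e6                  # ~$5M of compute to train GPT-3
revenue = {2020: 3.5e6, 2021: 28e6}  # OpenAI revenue while GPT-3 was the product
total = sum(revenue.values())
print(f"2020-21 revenue: ${total / 1e6:.1f}M vs ${training_cost / 1e6:.0f}M training cost")
# -> $31.5M vs $5M, consistent with the training run paying for itself
```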