This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
AGI Was Never Going To Kill Us Because Suicide Happens At The End of Doomscrolling
I'll go ahead and call this the peak of AI version one-dot-oh.
The headline reads "OpenAI Is Preparing to Launch a Social App for AI-Generated Videos." People will, I guess, be able to share AI-generated videos with their friends (and who doesn't have THE ALGO as a friend). Awesome. This is also on the heels of the introduction of live ads within OpenAI's ChatGPT.
Some of us were waiting for The Matrix. I know I've always wanted to learn Kung Fu. Others of us were sharpening pointy sticks so that when the paperclip machine came, we'd be ready. Most of us just want to look forward to spending a quiet evening with AI Waifu before we initiate her kink.exe module.
But we'll never get there. Because Silicon Valley just can't help itself. Hockey sticks and rocketships. Series E-F-G. If I can just get 5 million more Americans addicted to my app, I can buy a new yacht made completely out of bitcoin.
I am a daily "AI" user and I still have very high hopes. My current operating theory is that a combination of whatever the MCP protocol eventually settles into, plus agents trading some sort of crypto or stablecoin, will create a kind of autonomous, goal-seeking economy. It will be sandboxed but with (semi-)real money. I don't think we humans will use it to actually drive the global economy, but rather as a kind of just-over-the-horizon global prediction market. Think of it as a way to have seen 2008 coming in 2006. I was also looking forward to a team of maybe 10 people making a legit billion-dollar company, and this paving the way for groups of 3-5 friends running thousands of $10-50 million companies. No more corporate grind if you're willing to take a little risk and team up with some people you work well with. No bullshit VC games - just ship the damn thing.
And I think these things are still possible, but I also now think the pure consumer backlash to this Silicon Valley lobotomy of AI could be very much Dot-Com-2-point-O. The normies at my watering hole are making jokes about AI slop. Instead of "lol I doomscrolled until 3 am again," people are swapping stories about popping in old DVDs so they can escape the ads and the subscription fatigue.
Culturally, this could be great. Maybe the damn kids will go outside and touch some grass. In terms of advancing the frontier of human-digital knowledge, though, it seems like we're going to trade it in early, not even for unlimited weird porn, but for the pink-haired anime cat videos that my aunt likes.
The thing no one seems to be talking about with respect to AI is how the underlying economics of it all are so mind-numbingly bad that a crash is inevitable. I have no idea when this crash is going to happen, but if I had to hazard a guess, it will be some time within the next five years. We're talking about a technology that has already burned at least half a trillion dollars and has plans to burn another half trillion, with no model for profitability in sight. There's only so long the flow of venture capital will keep coming before the investors start expecting some kind of return. Add in the fact that Nvidia currently represents about 8% of the total value of the S&P 500, on the strength of graphics card sales to a single, unprofitable company, and the economic picture looks even more dire.
I think that the underlying problem is that they're trying to run an enshittification model on an industry where the path has typically been the exact opposite. Look at computers themselves. When computers were first invented, they were limited to institutional uses by governments and large universities, and were subsidized through R&D budgets that weren't relying on profitability, i.e. as a present expense rather than a credit against future earnings. Then large corporations started using them. When personal computers were developed in the late 1970s, they were mostly used by businesses, and in the consumer market they were expensive machines for the tech-horny. As costs came down, more and more households began using them, and by the time they became ubiquitous at the end of the 20th century it had been 50 years since their invention, and they still weren't exactly cheap.
Now imagine an alternate timeline where IBM decides in the 1950s to build several large computers in cities all across the country, enough that they can let every Tom, Dick, and Harry run whatever programs they want for free, all the way down to middle schoolers doing their math homework, with minimal wait time. And of course they're offering on-site programmers so that you don't actually need to know anything about computers to be able to take advantage of them, and they're convinced that after doing this for years people will be so enamored that they'll eventually start paying for the privilege. You'd have been laughed out of the board room for making such a suggestion, yet this is roughly the state of the AI business model.
AI cheerleaders will point to other tech companies that lost tons of money in their early years, only to later become behemoths. Uber is often cited as an example, as it spent more than a decade losing money before becoming profitable. But there are two big differences with Uber. The first is that they were actually responding to a market need. Outside of a select few cities like New York and Las Vegas, taxi service in America was, at best, inconvenient and, at worst, nonexistent. They successfully discovered an unmet demand and developed a service to fill it. No one was ever speculating on what Uber would be used for the way they are with AI, and from launch it provided exactly the service people expected it to provide. The second, more important difference is that Uber never gave away its service for free. Okay, maybe there were some promotions here and there, but by and large, if you wanted to get an Uber, you expected to pay for it. There was never an ecosystem where Uber provided free transportation for everyone who wanted to get from Point A to Point B, with the expectation that people would ditch their cars and get charged through the nose later.
Even companies like Spotify that started with free tiers and were unprofitable for a long time didn't have quite the same issues OpenAI has. In 2016, the earliest year for which we have financials, Spotify's loss was about 20% of revenue. By 2018, the first year it was public, that had dropped to 1%, and it stayed in that neighborhood until the company became profitable. OpenAI's loss last year was in excess of 100% of revenue, and it is on pace to be nearly 70% this year, and that's after record revenue growth. And next year they're going to be on the hook for the first round of the 5-year, $300 billion deal with Oracle. Spotify has also had about a 25% conversion rate from free to paying customers throughout most of its history, though that has recently jumped to over 40%. ChatGPT currently has a conversion rate of around 3%. And Spotify at least ran ads on its free tier, whereas free ChatGPT is pretty much all loss for OpenAI, and even the paid version loses money on every query.
So what we ultimately have, then, is a company that loses a lot of money, is available for free, has a poor conversion rate for its paid tiers, and is selling itself as a product you didn't know you needed rather than filling an obvious demand. The leading company has already committed to spending, within the next five years, several times more than it has raised in its entire existence, and it needs its revenue to dectuple in the next four years just to break even. They're also involved in a weird money-go-round with Nvidia and Oracle that's 100% reliant on finding investors willing to lend them the GDP of Finland. And now they want to add video, a notoriously expensive thing to process even when you don't have to generate the entire composition from scratch. Color me skeptical that this will be around in five years in anything approaching what it looks like today.
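If you want to sanity-check the arithmetic, here's a quick back-of-envelope sketch in Python. Every constant is one of the rough figures quoted above, not an audited number, so treat them all as assumptions:

```python
# Back-of-envelope check of the figures quoted above. Every constant is a
# rough number from this post, not an audited financial statement.

def required_cagr(multiple: float, years: int) -> float:
    """Annual growth rate needed to multiply revenue by `multiple` in `years`."""
    return multiple ** (1 / years) - 1

# "Dectuple in the next four years": 10x revenue in 4 years means compounding
# at 10^(1/4) - 1, i.e. roughly 78% revenue growth every single year.
print(f"Implied annual growth: {required_cagr(10, 4):.0%}")  # -> 78%

# Loss as a share of revenue, per the rough figures above:
loss_ratios = {
    "Spotify 2016": 0.20,  # loss ~20% of revenue
    "Spotify 2018": 0.01,  # ~1% of revenue, first year public
    "OpenAI 2024":  1.00,  # loss in excess of 100% of revenue
    "OpenAI 2025":  0.70,  # ~70% projected, after record revenue growth
}
for company, ratio in loss_ratios.items():
    print(f"{company}: losing {ratio:.0%} of every revenue dollar")
```

The second calculation is the damning one: "dectuple in four years" isn't a stretch goal, it's roughly 78% compounded revenue growth every year, just to reach break-even.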
I still don’t understand the enshittification model.
There are plenty of reasons to degrade your user experience: increasing revenue through ads or merch or predatory monetization, or decreasing costs by cutting complicated features and tech support. But the central examples of enshittification aren't doing those things. They're paying more to add features that people don't want, and adopting patterns that don't seem like they should make more money.
I mean, maybe I’m just wrong. Maybe spamming AI news articles on the lock screen really does get more people to buy Windows. But…why? How?
A couple of causes come to mind. The first, which ThomasdelVasto gets at, is that as a manager you have developers, you have a domain space in the business, and you need to generate an endless stream of work to justify the continued existence of your position and the positions of your direct reports. When all you own is a hammer, you are strongly incentivized to find an endless world of nails.
The other is that top-level leadership across tech currently has no vision, and when leadership doesn't have vision, they default to bullet-point lists of "stuff". This cycle repeats itself endlessly: a company is created to solve some problem (like JIRA/Kanban apps all sucking), succeeds (like Trello), loses its divine spark (built the thing, was successful enough to get bought out by Atlassian), and succumbs to endless feature bloat just like all of its predecessors. The "stuff" lists are also how vendor software, security practices, etc. are generally decided. A committee of mixed interests will always go with the vendor that meets the most bullet points on the list, regardless of the product's performance, intuitiveness, or other soft metrics that can't be meaningfully added to a checklist. A toy model of what that committee effectively computes is sketched below.
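Here's that toy model in Python, with all vendor names and numbers invented. The point is just that the decision rule is an argmax over checked boxes; the quality score never enters the calculation:

```python
# Toy model of checklist-driven vendor selection (all data invented).
vendors = {
    "Vendor A": {"bullets_met": 37, "pleasant_to_use": 2},
    "Vendor B": {"bullets_met": 24, "pleasant_to_use": 9},
}

# The committee's rule: pick whoever checks the most boxes.
winner = max(vendors, key=lambda v: vendors[v]["bullets_met"])
print(winner)  # Vendor A wins, despite being the worse product to live with
```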