I recently attended a seminar at work led by OpenAI (my company is paying them for tools), which was billed as an opportunity to learn more about using AI to do our jobs more effectively. I attended mostly because I assumed there would be some technical discussion of the technology (largely absent) and maybe some interesting demos showing how someone used OpenAI's product to solve technical problems (also absent). Instead, I was treated to a bizarre presentation that felt strangely paternalistic and maybe even a little desperate? In order of events:
- The presentation opened with a discussion of the (impressive) scale of the data centers that OpenAI will be deploying, plus a little bragging about Sora 2 (I promise you none of the scientists or engineers present give a shit about Sora 2)
- It proceeded to a gentle haranguing about how we should not resist using AI, and how in every organization AI will become more popular as a few high performers learn how to use it to get ahead (OK, some demos would be great; OpenAI's tools have been available for months, so now would be a great time to show how a co-worker has used them to solve a complex problem)
- Some discussion of how scientists and engineers tend to be bad at using AI relative to managers, procurement people, executives, lawyers, and others with what I would characterize as paper-pushing roles where accuracy isn't actually that important.
- Which finally devolved into a Q&A. The most charitable questions went something like this: "Hi, I am a $type_of_physical_scientist. I love using your tool to help write Python code, but it is completely worthless for helping me solve any kind of problem that I don't already understand very well. For example, here is a tomography technique that I know is used in another industry I am mostly unfamiliar with. Right now, my approach would be to read papers about how it works, try to implement it, and maybe contact some other experts if I can't figure it out. Wouldn't it be great if I could just upload the papers about this technique to your bot and have it implement the technique, saving myself weeks or months of time? But if you try this basic approach, you usually end up with something that doesn't work, and while the bot might give a superficial explanation of the phenomenon, it doesn't add much over doing the background research and implementation myself, and it ends up feeling like a waste of time." The response to these questions was usually some variation of: the bot will get better as it scales, be patient with it, and make sure you are prompting it well so that it can lead you to the correct solution.
Which brings me to my primary point: I am someone who has consistently tried to use AI at work in order to be effective, and while it helps somewhat with code creation, it isn't a particularly useful research tool and doesn't save me very much time. Apparently my co-workers are having much the same experience.
It really seems to me that OpenAI and their boosters believe (or would have me believe that they believe) that transformers really are all you need, and that at some point in the near future they will achieve a scale where the system rapidly goes from being able to (actually) help me do my job to being able to comfortably replace me at my job. And the truth is that I just am not seeing it. It also seems like a lot of others aren't either, judging by recent bubble warnings from various tech leaders (Sam Altman, for instance; by the way, what possible motive could he have for making AI-bubble statements unless it's an attempt to keep employees from leaving to found their own startups?).
I have been very inclined to think that this whole industry is in a bubble for months, and now that the mainstream press is picking up on it, it's making me wonder if I am totally wrong. I'd be interested to hear whether others (especially anyone with actual experience building these things) can help me understand if I just suck at using them or if my "vibes" about the current state of the industry are totally incorrect. Or if there is something else going on (i.e., can these things really replace enough customer service or other jobs to justify the infrastructure build-out?).
Notes -
AI (and more specifically multimodal LLMs) will radically transform the life of every man, woman and child on earth in the next decade.
But that doesn’t mean there won’t be a huge valuation bubble burst along the way.
There’s extreme cross-ownership / circular dealing in the market: Nvidia is pouring hundreds of billions of dollars into AI companies and data centers that buy its chips, pushing up their valuations, which lets them borrow and issue more capital to place more orders for more GPUs, which lets Nvidia pour more money into… etc. This has been widely noted as a feature of all major sector-driven bubbles in the history of capitalism.
Even if AI ends up being huge (and it will), that doesn’t mean most people are going to make money on it. The railroad bubble is the most famous example: between the 1840s and 1890s, every major capital market on earth had multiple successive railroad bubbles, which were the ultimate cause of almost every financial crisis of the era (failing speculative railroad investments triggered bank runs and failures, which triggered credit crises that spiralled out to the wider market within weeks). Railroads really did change the world and drive huge improvements in commerce and communication, and therefore in economic growth and productivity. But most people who invested in the railroad business lost the majority of what they put in, even in cases where construction was completed. Today, after 130+ years of consolidation, commercial railroads are only relatively profitable and carry modest valuations, and passenger railroads lose money everywhere outside Japan.
Big AI companies have no moats. Competitive models are at least semi-open-sourced, and brand means nothing when most corporate and consumer platforms can be switched over to another foundation model in seconds: if OpenAI tries to eke out more margin, you switch to Anthropic or xAI, or vice versa. Price-per-token gains are quickly matched by all the big players, and engineers jump between them far too often for anyone to maintain a real competitive edge for long. Plus, being 3% better on an arbitrary benchmark means very little to most corporates, so within broad quality tiers price will be the main factor. AI data centers have the same GPUs and so compete solely on price for compute; they have tiny labor and upkeep costs, so that price is essentially just electricity plus GPU depreciation (the latter of which will converge to an industry standard before long, if it hasn’t already) plus a thin margin that competitors will constantly chip away at. Everyone in AI except Nvidia is selling a commodity with little pricing power, and even Nvidia would see demand depressed by a bubble burst, with AMD and the Chinese potentially catching up eventually.
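To make the switching-cost point concrete, here's a minimal sketch in Python of what "moving to another foundation model" often looks like in practice, assuming the vendor exposes an OpenAI-compatible endpoint (xAI and most open-weight hosts do). The base URLs and model names are illustrative assumptions, not recommendations:

```python
from openai import OpenAI

# Hypothetical vendor table; base URLs and model names are illustrative only.
PROVIDERS = {
    "openai": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o"},
    "xai":    {"base_url": "https://api.x.ai/v1",       "model": "grok-2"},
}

def ask(provider: str, prompt: str, api_key: str) -> str:
    """Send one chat prompt to whichever vendor is cheapest this week."""
    cfg = PROVIDERS[provider]
    # Same client library, same call shape; only the endpoint and model
    # name change, which is the whole "switching cost."
    client = OpenAI(base_url=cfg["base_url"], api_key=api_key)
    resp = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

If changing vendors is a one-line config edit like this, it's hard to see where durable per-vendor pricing power comes from.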
Many industries that are initially disrupted by AI will collapse almost entirely rather than shifting to being primarily AI customers. If half the big SaaS or advertising or media companies signing megabillion-dollar AI contracts implode because AI coding tools allow their valuable corporate clients or end users (in the case of TV, movies, games) to replicate their products and services in-house… that actually means lower revenue for the big AI providers, not higher. The same goes for big spenders on white-collar software tools: law firms, financial services companies, accountants, consultants, insurers, tech outsourcers, and so on. If white-collar workers are fired en masse, demand for Microsoft's central Office 365 product collapses, because it's billed per user. If the ad industry suffers because fired consumers spend less, there goes the source of 80-90% of Google's and Meta's revenue, which means much less to spend on GPUs.
Thus AI’s success and failure are both bearish for these stocks.
A decade ago everyone was saying more or less the same thing about autonomous vehicles, yet a true AV seems further away now than it did then. Sure, progress has been made, but the most we have to show for it is incredibly slow robotaxis operating in geofenced areas within a few select cities that don't have weather, taxis that are under constant monitoring from central command. As far as consumer products are concerned, the best we have is Mercedes-Benz Drive Pilot, which allows you to take your hands off the wheel and eyes off the road while traveling on mapped highways, during daylight, with no rain or snow, in traffic moving 40 mph or below. In other words, nowhere outside of urban freeways during rush hour. I'm not trying to knock technological advances, but there's no realistic timeline for when I'll be able to buy a car that will take me practically anywhere my current car will and let me zone out on the way, or be drunk.
And that's for a technology that has paying customers, an obvious use case, and has spent significantly less money in the past 15 years than the AI industry has spent in the last 5. A half-trillion dollars later, and a rep from the largest (or at least most prominent) AI company can't even tell customers what they're supposed to be using the product for, just that they need to be using it more. They can't provide any technical assistance beyond "do it better," plus the assurance that the next update will totally solve the problem, whatever it is (something tells me they said the same thing before the last update). And this is for one of the few companies that's actually paying for it. I used to subscribe to specialized, expensive legal software for my firm ($1,000/year), and the sales rep was an expert. She (and her competitors) offered an in-depth demo at which they were able to answer all of my questions, and after I bought in I could call at any time and get help. How long do you think it will be before @dukeleto's boss realizes that all this is doing is costing the company money and cancels the subscription?
But that's neither here nor there; if this were normal technology like AVs, I'd be more optimistic about the industry plodding along gradually. The bigger problem is that we're talking about an industry that's spent $500 billion on a product that doesn't sell, and I've read in various places that planned spending over the next few years runs into the trillions. By comparison, the peak year for AV investment was 2021, at somewhere around $13 billion. OpenAI alone plans to spend more than that on training next year, after spending $9 billion this year.
The point I'm making is that the amount of money necessary to keep this train going simply doesn't exist, or at least doesn't exist unless they convince people to actually pay for the product. ChatGPT has about a 3% conversion rate from free to paid. "Well," the optimists say, "the real money is in enterprise sales and software integration." Yet Microsoft has a similar 3% conversion rate for its Copilot add-on. This is Microsoft we're talking about, a company so good at selling its products that they're the industry standard in both business and home use, present on hundreds of millions of computers worldwide. And Spotify had a conversion rate eight times higher in its first year in the US.
So what happens after the bubble pops? I don't want to speculate on how it will unfold, because I can imagine any number of scenarios, but I'm pretty sure about a couple of things. First, free access to LLMs will either go away entirely or be severely limited; whoever is left in the business isn't going to be able to afford to lose money on every query. More dramatically, though, I don't think R&D can plod along gradually the way it did with AVs; it's just too expensive. When training a new model costs billions, it's not something you can fund out of an ordinary R&D budget, and in the wake of the bubble bursting, even the idea of it might turn investors off. I may be wrong insofar as there may be a future for it similar to AVs, but even then, it's a far cry from what we were promised.