I recently attended a seminar at work led by OpenAI (whose tools my company is paying for), which was billed as an opportunity to learn more about using AI to do our jobs more effectively. I attended mostly because I assumed there would be some technical discussion of the technology (which was largely absent) and maybe some interesting demos showing how someone used OpenAI's product to solve technical problems (also absent). Instead, I was treated to a bizarre presentation, which felt strangely paternalistic and maybe even a little desperate? In order of events:
- The presentation opened with a discussion of the (impressive) scale of the data centers that OpenAI will be deploying, plus a little bragging about Sora 2 (I promise you none of the scientists or engineers present give a shit about Sora 2).
- It proceeded to a gentle haranguing about how we should not resist using AI, and how in every organization AI will become more popular as a few high performers learn how to use it to get ahead (OK, some demos would be great; OpenAI's tools have been available for months, so now would be a great time to show how a co-worker has used them to solve a complex problem).
- Some discussion about how scientists and engineers tend to be bad at using AI relative to managers/procurement people/executives/lawyers and others in what I would characterize as paper-pushing roles where accuracy isn't actually that important.
- Which finally devolved into a Q&A. The most charitable questions went something like the following: Hi, I am a $type_of_physical_scientist. I love using your tool to help write Python code, but it is completely worthless for helping me solve any kind of problem that I don't already understand very well. For example, here is a tomography technique that I know is used in another industry I am mostly unfamiliar with. Right now, my approach would be to read papers about how it works, try to implement it, and maybe contact some other experts if I can't figure it out. Wouldn't it be great if I could just upload the papers about this technique to your bot and have it implement the technique, saving myself weeks or months of time? But if you try this basic approach you usually end up with something that doesn't work, and while the bot might give a superficial explanation of the phenomenon, it doesn't add much over doing the background research and implementation myself, and it comes off feeling like a waste of time. The response to these questions was usually some variation of: the bot will get better as it scales, and you should be patient with it and make sure you are prompting it well so that it can lead you to the correct solution.
Which brings me to my primary point: I am someone who has consistently tried to use AI at work in order to be effective, and while it helps somewhat with code creation, it isn't a particularly useful research tool and doesn't save me very much time. Apparently my co-workers are having much the same experience.
It really seems to me that OpenAI and their boosters believe (or would have me believe that they believe) that transformers really are all you need, and that at some point in the near future they will achieve a scale where the system will rapidly go from being able to (actually) help me do my job to being able to comfortably replace me at my job. And the truth is that I just am not seeing it. It also seems like a lot of others aren't either, given recent bubble warnings from various tech leaders (Sam Altman, for instance; by the way, what possible motive is there for making AI-bubble statements unless it's an attempt to keep employees from leaving to found their own startups?).
I have been inclined to think for months that this whole industry is in a bubble, and now that the mainstream press is picking up on it, it's making me wonder if I am totally wrong. I'd be interested if others (especially anyone with more actual experience in building these things) can help me understand whether I just suck at using them or whether my "vibes" about the current state of the industry are totally incorrect. Or whether there is something else going on (i.e., can these things really replace enough customer service or other jobs to justify the infrastructure spending?).

Notes -
I use ChatGPT pretty much all day every day, but mostly as a replacement for Googling. It's great at pinging a dozen news sources on an issue and giving me more information than I'd get from reading a single article (and it's usually not wrong).
If I have trivial code to write in an unfamiliar framework it's good for that too.
It's also good for teaching me entry level stuff in a new topic faster than anything else.
It's generally better at telling me what's wrong if I paste an error message than anything I'd get from Googling.
And that's about it. And this is awesome, don't get me wrong.
But everything else it kind of sucks at. And not just ChatGPT, but Claude (including Claude Code) as well.
If I ask for help in a mature codebase it will almost certainly waste my time. Ask it for more subtle plot details of a popular sci-fi book that you just read and you will see how hard it hallucinates.
I would be quite worried about doing science or medicine with it if I can't rapidly verify its information.
It's sort of hard to see this improving very quickly? They've run out of gains from training on all of the internet. Inference costs are increasing exponentially but the gains in intelligence are only increasing logarithmically. You will note that the model that they used to win the Math Olympiad is very much not available to the public. Why? Perhaps because it cost millions in inference to do it.
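To spell out the exponential-vs-logarithmic point: if you assume capability grows roughly as the log of compute (an illustrative assumption on my part, not an established law), then each fixed step up in capability multiplies the compute bill, which is the same claim stated from the cost side:

```latex
% Illustration only, under an assumed log-scaling relation between capability P and compute C.
P(C) \approx k \log C
\;\Longrightarrow\;
C(P) \approx e^{P/k},
\qquad
\frac{C(P + \Delta P)}{C(P)} = e^{\Delta P / k}.
```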
It sure seems like other architectural breakthroughs are needed to keep scaling, and I don't see those as guaranteed.
Or, as Yannic Kilcher put it, "we have entered the Samsung Galaxy era of LLMs"
I've had luck with certain time-consuming rote tasks in medium-large codebases (1M - 10M LOC) like writing good tests for existing legacy code.
It's not doing anything I couldn't have done, and it's not even faster than me in terms of wall-clock time to get a good functional test, but I can kick it off in the background while I'm doing other things and come back to tests that definitely pass and probably even test the stuff I want to test, in something approximating the way I want to test it.
Seconding all of this as incredibly true to my experience. If it weren't for the references to your job being coding, I'd wonder if this was an alt account of mine that I post on while sleepwalking.
I'm not a professional mathematician, but I occasionally dabble, and I've gotten some help from GPT-5 lately. Not in the vein of "here's a bunch of papers, now solve the problem for me", but rather as an extremely advanced robotic cyberduck that I can bounce ideas off and get some intelligent critique back from. It also wrote code for me to test a few ideas, much, much faster than I could myself (and I'm a very fast coder). When the speed of being able to write a quick script passes a certain threshold, I feel like there's a kind of state change in how you're willing to approach a problem. Instead of "I wonder if X is true", you can just ask "hey, is X true?" and have the answer in 30 seconds with little mental effort.
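To give a flavor of what those quick "is X true?" scripts look like, here's a made-up illustration (not something from my actual transcript; the toy conjecture and the sympy dependency are just choices for the sketch):

```python
# Hypothetical throwaway script of the sort an LLM can produce in seconds.
# Toy question: is every even number from 4 to 10,000 the sum of two primes?
from sympy import isprime

def is_sum_of_two_primes(n: int) -> bool:
    """Return True if n can be written as p + q with both p and q prime."""
    return any(isprime(p) and isprime(n - p) for p in range(2, n // 2 + 1))

counterexamples = [n for n in range(4, 10_001, 2) if not is_sum_of_two_primes(n)]
print("counterexamples:", counterexamples or "none found up to 10,000")
```

The script itself is trivial; the point is that the marginal cost of asking the question drops to nearly zero.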
It's not a replacement for human thought (yet) - I am absolutely the one driving and I need to correct its mistakes. But I love it as a research assistant. BTW, here's a transcript so you can see what I'm talking about. And this is just me dabbling with a chatbot - I'm sure that, with some effort, skilled professionals can find better ways to incorporate an LLM into their workflow. (Though unskilled professionals are not going to be "fixed" by LLMs.) I think it's just going to take some patience and some experimentation.
I'm also looking forward to when publicly accessible models catch up to the state of the art models that aced the IMO and ICPC recently. There's some secret sauce in what they're doing (being able to pick a good solution out of many proposals), and we don't have access yet. I helped write the ICPC Finals contest, and I can attest that there were some VERY tough problems on it. Problem C (which I wrote) wasn't solved by any human competitors, but both models figured out the most elegant solution to it without much effort - DeepMind's press release included a description of Gemini's solution.
I presume the compute used for this was pretty costly, but the costs will inevitably fall, so at some point you'll be able to have a personal chatbot that's simply better at solving math problems than you are. At that point, it seems hard to believe that you won't be able to find SOME use for it at your job...?
It isn't that I have literally no use for it; I just use it a lot like the poster above (i.e., as a substitute for things I would have googled before).
Also thanks for actually including your conversation history. I’m not enough of a mathematician to follow all of it but it was nice to see how you prompt it.
No problem, this is something we're all still trying to figure out. I wonder if there'll be a future career path of "prompt engineer", or, more fancifully, "LLM whisperer"...
AI (and more specifically multimodal LLMs) will radically transform the life of every man, woman and child on earth in the next decade.
But that doesn’t mean there won’t be a huge valuation bubble burst along the way.
There's extreme cross-ownership / circular dealing in the market, where Nvidia is pouring hundreds of billions of dollars into AI companies and data centers that buy its chips, pushing up their valuations, meaning they can borrow and issue more capital to place more orders for more GPUs, meaning Nvidia can pour more money into... etc. This is and has been widely noted as a feature of all major sector-driven bubbles in the history of capitalism.
Even if AI ends up being huge (and it will), that doesn't mean most people are going to make money on it. The railroad bubble is the most famous example of this; between the 1840s and 1890s every major capital market on earth had multiple successive railroad bubbles (which were the ultimate cause of almost every financial crisis in this era, because failing speculative railroad investments triggered bank runs and failures, which triggered credit crises that spiralled out to the wider market within weeks). Railroads really did change the world and drive huge improvements in commerce and communication, and therefore in economic growth and productivity. But most people who invested in the railroad business lost the majority of what they put in, even in cases where construction was completed. Today, commercial railroads are relatively profitable after 130+ years of consolidation and modest valuations, and passenger railroads all lose money outside of Japan.
Big AI companies have no moats. Competitive models are at least semi-open-sourced. Brand means nothing when most corporate and consumer platforms can be switched over to another foundation model in seconds: if OpenAI ekes out more margin then you switch to Anthropic or xAI, or vice versa, and price-per-token gains are quickly made by all the big players; engineers jump between them far too often for anyone to maintain a real competitive edge for long. Plus, whether you're 3% better on an arbitrary benchmark means very little to most corporates, so within broad quality categories price will be the main factor. AI datacenters have the same GPUs and so compete solely on price for compute; they have tiny labor / upkeep costs, so this is essentially just electricity and GPU depreciation (the latter of which will be an industry standard before long if it isn't already) plus a tiny margin that competitors will constantly be chipping away at. Everyone in AI except Nvidia is selling a commodity with little pricing power, and even for Nvidia a bubble burst will depress demand, and AMD and the Chinese may well eventually catch up.
Many industries that will be initially disrupted by AI will collapse almost entirely rather than shifting to being primarily AI customers. If half the big SaaS or advertising or media companies signing megabillion AI contracts implode because AI code tools allow their valuable corporate clients or end users (in the case of TV, movies, games) to replicate their products and services in house…that actually means lower revenue for the big AI providers, not higher revenue. The same goes for big spenders on white collar software tools like law firms, financial services companies, accountants, consultants, insurers, tech outsourcers and so on. If white collar workers are fired en masse, demand for Microsoft’s central Office 365 product collapses, because it’s billed on a per-user basis. If the ad industry suffers because consumers spend less because they’ve been fired, there goes the source of 80-90% of Google and Meta’s revenue, which means much less to spend on GPUs.
Thus AI’s success and failure are both bearish for these stocks.
A decade ago everyone was saying more or less the same thing about autonomous vehicles, yet a true AV seems further away now than it did then. Sure, progress has been made, but the most we have to show for it is incredibly slow robotaxis operating in geofenced areas within a few select cities that don't have weather, under constant monitoring from central command. As far as consumer products are concerned, the best we have is the Mercedes-Benz Drive Pilot, which allows you to take your hands off the wheel and eyes off the road while traveling on mapped highways during daylight, when there is no rain or snow, in traffic moving 40 mph or below. In other words, nowhere outside of urban freeways during rush hour. I'm not trying to knock technological advances, but there's no realistic timeline on when I'll be able to buy a car that will take me practically anywhere my current car will and allow me to zone out on the way, or be drunk.
And that's for a technology that has paying customers, an obvious use case, and has spent significantly less money in the past 15 years than the AI industry has spent in the last 5. A half trillion dollars later, and a rep from the largest (or at least most prominent) AI company can't even tell customers what they're supposed to be using the product for, just that they need to be using it more. They can't provide any technical assistance, other than that users should be doing it better and that the next update will totally solve the problem, whatever it is (something tells me they would have said the same thing before the last update). And this is for one of the few companies that's actually paying for it. I used to subscribe to specialized, expensive legal software for my firm ($1,000/year), and the sales rep was an expert. She (and her competitors) offered an in-depth demo at which they were able to answer all of my questions, and after I bought in I could call at any time and get help. How long do you think it will be before @dukeleto's boss realizes that all this is doing is costing the company money and cancels the subscription?
But that's neither here nor there; if this were normal technology like AVs, I'd be more optimistic about the industry plodding along gradually. The bigger problem is that we're talking about an industry that's spent 500 billion on a product that doesn't sell, and I've read in various places that the amount of planned spending over the next few years is in the trillions. By comparison, the year with the highest AV investment was 2021, at somewhere around 13 billion. OpenAI alone plans to spend more than that on training next year, after spending 9 billion this year.
The point I'm making is that the amount of money necessary to keep this train going simply doesn't exist, or at least doesn't exist without convincing people to actually pay for the product. ChatGPT has about a 3% conversion rate. "Well," the optimists say, "the real money is in enterprise sales and software integration." Well, Microsoft has a similar 3% conversion rate for its Copilot add-on. This is Microsoft we're talking about, a company so good at selling its products that they're the industry standard in both business and home use, present on hundreds of millions of computers worldwide. And Spotify had a conversion rate 8 times higher in its first year in the US.
So what happens after the bubble pops? I don't want to speculate on how it will unfold because I can imagine any number of scenarios, but I'm pretty sure about a couple things. First is that free access to LLMs will either go away entirely or be severely limited. Whoever is left in the business isn't going to be able to afford to lose money on every query. More dramatically, though, I don't think R&D can plod along gradually like it did with AVs; it's just too expensive. When training a new model costs billions, it's not something you can throw money at from the R&D budget. And in the wake of the bubble bursting, even the idea of it might turn people off. I may be wrong insofar as there may be a future for it similar to AVs, but even then, it's a far cry from what we were promised.
AVs seem like an incomparable category. I couldn't pinpoint the beginning of AV hype the same way you can point to the Transformer architecture for LLMs, but the early examples of AVs 10-15 years ago that I recall were pretty impressive. They were like 80% of the way to human parity right from the get-go; it made sense that people were predicting a rapid replacement of human drivers, because they'd made such an impressive start. (I appreciate that AV efforts probably existed long before this, but I think it's a fair starting point.)
And then over the next decade AV capabilities crept up to human levels at like 1% per year. There were no significant breakthroughs, no evidence of rapid progress, and as you state it is only now that we're getting commercially available taxis in specific locations. Even when Waymo started rolling out proper AV taxis in some cities, it did not signal a sudden leap forward in capabilities as you might expect.
Contrast that with LLMs. GPT-1 came out in 2018, a year after the Transformer paper, with GPT-2 following a year later. GPT-2 was impressive compared to previous language generators, but still only perhaps at 33% of the level of an average human. With GPT-3 it jumped to 50%, 3.5 went further, while 4 was perhaps at the 80% level that AVs started at. Every few months since then has seen more large leaps, such that current models are winning mathematical competitions and are measured at PhD level in a huge variety of domains.
Chart the progress of both technologies, and they'll look completely different. It's fair to think at some point natural limits will stop the endless scaling of LLM capabilities, but thus far extrapolating a straight line has worked pretty well. AVs never even had a line to extrapolate from.
Imagine you're planning a vacation. Your dream vacation is Hawaii; your second choice is Myrtle Beach, but that would only be about half as fun. So you call a travel agent, and find out that you unfortunately don't have enough money for a flight to Hawaii. On the other hand, you could drive to Myrtle Beach, which wouldn't be nearly as expensive. Now suppose the travel agent calls you back and offers you the following proposition: "You can't afford to fly to Hawaii, but I've found a reduced rate ticket that will get you 95% of the way there for only 20% of the full price. Granted, it doesn't quite get you to Hawaii, but isn't getting 95% of your dream vacation better than settling for Myrtle Beach, which is only worth half?"
This is obviously nuts, because getting 95% of the way to Hawaii puts you somewhere in the middle of the Pacific Ocean. It's pretty obvious that if you can't get all the way to Hawaii then you're better off going somewhere else entirely. 80%, or 90%, or whatever of a marketable product is no product at all. 80% autonomous cars are regular cars with fancy cruise control (which is itself only used a small percentage of the time), and 80% of whatever AI is aiming for is fancy, expensive, inefficient Google. And saying you're 80% of the way there is more or less meaningless when it comes to technology investment. It's a vague term that has no bearing on actual numbers; it certainly doesn't mean that you're 80% of the way there time-wise or that you've spent 80% of what's necessary to get to 100%, just as the last 5% of the way to Hawaii costs four times as much as the first 95%.
In 2020, The Information estimated that the AV industry had spent $16 billion on research through 2019. Their conclusion was that the whole enterprise was a money pit that they'd never be able to climb out of. Car and Driver put this in perspective by noting that they could have given every licensed driver in America two brand-new Ford F-150s and still have cash to spare. OpenAI's recent projections for 2025 predict $7.8 billion in operating losses and a $13.5 billion net loss. One company in one year manages to spend half the money the entire AV industry spent in a decade. And incidentally, the amount of money spent on AV research has actually gone up since then, yet you admit yourself that the improvements haven't exactly been dramatic.
AI companies want to spend another trillion or so in the next five years. Will it get them to that magic 100% mark where they can actually sell something for a profit? Nobody knows, but if it can't, I'm willing to guess that the industry's proposed solution will be to spend more money. The point I'm trying to make is that the amount of money they want to spend simply does not exist, and even if it did spending it is not justifiable to someone who eventually expects to turn a profit. If the amount being spent were on par with AVs I'd be more optimistic, but it's exponentially larger. There's going to be a point where the funding isn't going to be there, VC firms are going to have to eat their losses, and there will be a bear market in tech investment where AI is practically a dirty word. This isn't like AVs where the amount of money involved is small enough that companies can quietly make small gains that take years rather than months; it's significantly worse.
Your travel analogy is awful - it is often very valuable to solve 80% of a problem. A better analogy would be if your travel agent offered you a brand-new cheap teleportation device that had a range of "only" 80% of the way to Hawaii, but you had to purchase a flight for the last 20%. Which would obviously be great! AVs are the exception here, since you need to actually solve 99% of the driving problem for them to be useful (telepresent drivers "stepping in" can help a bit, but you don't want to depend on them).
Uh, and I don't think $64 per licensed driver in America is going to buy them two Ford-F150s. You might want to check Car and Driver's math. (What is with people being unable to properly divide by the population of the US? Does their common sense break down when dealing with big numbers?) Amusingly, I've never seen GPT4+ make this magnitude of a mistake.
Anyway, we should (and will) be taking the next decade to put smart models absolutely everywhere, even though they sometimes make mistakes. And that's going to be expensive. The major risk of AI investment is definitely not the lack of demand. As OP mentioned, the risk really is the lack of "moat" - if you only have to wait a year for an open-source model to catch up with GPT, why pay OpenAI's premium prices?
Your first two paragraphs just appear to be quibbling over definitions. I don't really care what measurement scheme you use, abandon percentages if you find them useless. The point of the comparison is to show that the advancement in AI capabilities is on a completely different planet to AVs.
As for the comparison of investment, it seems trivial to point out that the difference in magnitude is due to the potential markets. If a company invented Level 5 self-driving cars tomorrow, what would they get? You could take away human taxi drivers and truck drivers and some other logistics, and start taking a big chunk of the consumer car market. For a time at least, since other companies would be able to copy you pretty quickly. I'm assuming a lot of companies in that market plan to license the technology for their revenues, rather than trying to take direct control. Certainly a big market, which likely explains a lot of the valuation for your Teslas and Ubers, but not unlimited.
The impact of a company announcing AGI tomorrow would be unimaginable, even if we assume a slow takeoff with limited recursive self-improvement.
Got a source for that?
$16B divided by 230M is under $70. That is more than enough for two sets of F150 wiper blades for every licensed driver in America, but only if we don't splurge on Rain-X.
It can go along with the million dollars that every American could have gotten from Bloomberg's campaign.
The issue is that most jobs don't require excellence or speed; they require not screwing up. A driver can be the fastest, cheapest, most reliable driver for years, but if that driver causes one major accident, they're an awful driver. There are plenty of bumbling lawyers who keep their jobs. If a lawyer is 10x as productive, writes beautiful contracts, and solves seemingly unsolvable disputes for years, it counts for nothing if they do something once that gets them disbarred.
What matters isn't the median speed or accuracy of performed tasks; it's the severity of the three-standard-deviation worst screw-ups that counts. I really don't see an AI's one-in-five-thousand worst decisions being any better than a human professional's one-in-five-thousand worst decisions. Until then, AI will be limited in a professional environment.
It will remain super useful for search, making funny videos, helping people fix things etc.
Waymos are already doing paid autonomous rides. They're not quite as adaptable as good human drivers but they're way way ahead of the SOTA of 2015.
They're not particularly slow though? They do go on the freeway now. And they've rolled out in Atlanta, which does sometimes get snow and ice, though not nearly as badly as you see in the north or midwest.
They're slower than most traffic because they always do the speed limit and don't roll stop signs. They also tend to reroute to avoid more difficult left turns, though this isn't as much of an issue as it was a few years ago. I'm not saying these are necessarily bad things (there are good reasons for strictly adhering to the law), but it's still going to be somewhat slower than a human driver.
This is the take I pretty much endorse.
Thinking "AI is going nowhere and will be prove to be a waste" a la the Tulip or NFT craze is wrong.
Thinking "A lot of people/companies are going to get wiped out before the final winners are clear" seems inevitable. Lot of blood on the streets before we're done.
Especially with this:
blatantly occurring. Nvidia seems to think they're so dominant as to be unassailable; we'll see if that works out.
But every single company is in it to win it, despite this point:
Hard to predict who is going to make it out.
I use the tools quite a bit, especially in the use case of architecting cloud infrastructure, where as you might imagine, every solution you are building has already been done by 100,000 other people before you.
In general there's something to be said for point three, but not in the way the speaker intended. If an LLM is incredibly helpful and can eliminate 50+% of your work, most of your work is repetitive junk that can be automated. Just as traditional software in the '90s and '00s automated away a tremendous amount of human labor that amounted to manual data entry, LLMs are very good at automating tasks that are the next level above that.
What do I mean by the next level above manual data entry? Tasks that are performed by many, many people, in only slightly different ways, across many different organizations. Let's say you work in HR and have to compile a weekly email for exec leadership analyzing discretionary expenditures, based on some grouping, where the data comes from an attached spreadsheet. This is an incredibly common task, with many extremely common sub-patterns (e.g., huge spend by R&D this week on flights and accommodations: they're attending a conference and chose to file it as discretionary instead of getting training and travel pre-approved). Because there are only so many unique scenarios, LLMs are a good fit for automating this kind of work, even though it's a dynamic analysis task.
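As a rough sketch of what automating that weekly report could look like (the CSV columns, the gpt-4o-mini model name, and the prompt wording below are all assumptions for illustration, not anything OpenAI actually demoed):

```python
# Minimal sketch: summarize grouped discretionary spend and have an LLM draft
# the narrative for the weekly exec email. Column and file names are hypothetical.
import pandas as pd
from openai import OpenAI

spend = pd.read_csv("discretionary_spend.csv")  # hypothetical weekly export
by_group = spend.groupby(["department", "category"])["amount"].sum().reset_index()

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whatever your org licenses
    messages=[
        {"role": "system",
         "content": "You write short weekly discretionary-spend summaries for executives. "
                    "Flag unusual spend and suggest likely explanations."},
        {"role": "user", "content": by_group.to_csv(index=False)},
    ],
)
print(response.choices[0].message.content)  # paste into the weekly email
```

The grouping and totals come from ordinary pandas; the model only writes the narrative around them, which is the part that used to need a person.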
The reason engineers and scientists find less general utility in LLMs is likely just that they aren't walking well-trodden ground. Were I in one of the roles that benefit so heavily, I personally wouldn't be bragging about how much of my workload can be automated.
This is starting to happen in consulting.
All the big consulting firms are crowing about how "AI forward" they are, as they think that is good marketing and brand positioning to clients to show how technologically advanced they are, etc.
It probably is good marketing.
But in a few client meetings now, questions have come up like "so if you're so much more efficient with AI, why are we paying a 7% technology fee?" or "if you're using AI to automate and make your people's time more efficient, can we pay for less time?"
Some of these questions are humorous, some not. But the vibe is building and it's funny to watch. I have a feeling consulting margins are about to start experiencing pricing pressure as the increase in productivity turns into margin wars.
Partially out of laziness, and partially out of paranoia (sharing details could lead to my doxxing), I've never written out a longpoast that's been on my mind since AI went mainstream. Basically, I am deeply convinced that some dude(s) at Davos or wherever TPTB hang out these days decided in advance that AI is going to be The Next Big Thing, whether it makes a lick of sense or not. I've seen, with my very own eyes, the very kind of workshops you're describing being rammed through within mere weeks of GPT-3 being published, and this happened in sectors famous for their calcification. There's an entire "Tech in Education" infrastructure devoted to getting kids hooked on chatbots as soon as they learn to read or write. There are incentive programs for public sector workers to sit through these kinds of presentations. It's nuts.
Now, is it a bubble? I dunno. Someone recently linked some stats about OpenAI's revenues, and if they don't do insane overinvestment, I think they should be fine; but the hype about it reshaping the world (+/- chatbot romantic partners cooking people's brains, people's ability to research and think going into the toilet, etc.) feels pretty fake.
I don't think you need to conclude there's a 'conspiracy' to notice that the messaging around AI is a bit schizo right now.
You've got anyone who is deeply invested in the industry constantly vaguely implying that the next big improvement is going to be "THE ONE" that makes it able to replace almost any knowledge work, and that this is inevitable and good.
BUT those same people will try to downplay the actual power/risks of their product. "Sure we're trying to build GOD here, but c'mon don't impose oversight, that would be lame!"
And you've got the AI doomers, who are kind of helping the hype by suggesting that AI is going to go FOOM sooner than expected and completely upend human society in the best case, or end us all in the worst.
But they're not very good at arguing for more oversight for various reasons, since their case demands a complete pause or shutdown, rather than merely regulating and monitoring it.
And then the part where the rubber meets the road is in a very odd state. Some people say that "vibe coding" works, but the products coming out of it are pretty subpar as far as I've seen. And some industries seem to be integrating AI pretty readily, whereas others (like yours) are finding it to be a hassle that might justify the costs but isn't impressive on its own merits.
Consumers can use it for a lot of things that aren't directly productive and capabilities are increasing there but not necessarily towards "able to do all jobs everywhere." Fun distractions on offer, though.
And obviously there's massive capital outlays going into datacenters and power to run them. And you would expect that the smart people spending this money wouldn't do so unless there was some expectation it would return on their investment.
But right now it looks like they're bleeding money that has to constantly be replaced by infusions of VC cash.
Which, hey, that's fine if it's an actual growth industry. But the capabilities have to grow pretty rapidly if they're going to live up to the hype. And then you've got the ouroboros of companies producing the chips investing in the companies selling the compute investing in the companies buying the compute. Either everyone has a lot of faith in where this is going, or they're desperate to keep the train rolling as long as possible so that maybe something surprising happens, or so they can line their pockets and hop off before disaster.
So it's really starting to look like the goal is "Functional AGI or bust," with 'bust' being the literal implosion of the industry, even if the tech continues to improve.
It's not the messaging that spooks me out, it's the sheer size of the marketing and education infrastructure that was deployed in order to drive adoption, the speed with which it was ready to go, and who it was targeted at. Public and public-adjacent institutions aren't usually pushing people towards the latest fads, but this is exactly what's happening right now.
The best mundane explanation I can think of is that it's some galaxy-brained eurocrat scheme to Lead The World In Innovation or something, except that doing a free marketing campaign for American tech companies (which they usually low-key hate) is a bit of a weird way of doing that, and even if we go with that explanation that still kinda is a conspiracy.
Yeah I see your point there.
I'm just pointing out that you've got the messaging from the boosters and all the money being spent to sell people on it; then there's the other side, with messages from the doomsayers AND messaging on the political side; and then there's the market's response to all this, with evidence that spending related to AI development is propping up growth right now.
It is questionable what the real goal of all that is, if we take everything being said at sheer face value.
https://www.theguardian.com/business/2025/oct/08/bank-of-england-warns-of-growing-risk-that-ai-bubble-could-burst
At this point even important institutions are ringing the alarm bell.
IMO what you get out of AI has a lot to do with how you prompt it: you need to give it the necessary docs and then do some troubleshooting with it. It won't usually one-shot a complex issue, but it can eventually nail it... which is exactly what that guy said, I guess.
I guess it's my 'lived experience' that it works eventually and there's not much else I can say.
Also, even if it's not helping you much as a researcher, is what you do representative? How many people work with tomography, as opposed to spreadsheet jobs or just plain old programming? How many people even know what tomography is? It can simultaneously be true that it's not helpful for you but is helpful for enough others that it isn't a bubble. Terence Tao seems to find it useful for his rather abstract work.
Yeah, my experience is that once you're able to fully articulate your problem and request, with specificity and supporting documentation, they get 80+% of the way there on the first shot, and then you correct them in the places where they didn't quite get your intent.
Which, yeah, that's the same issue you'd have if you hired a human to help too.