I recently attended a seminar at work led by OpenAI (whose tools my company is paying for), which was billed as an opportunity to learn more about using AI to do our jobs more effectively. I attended mostly because I assumed there would be some technical discussion of the technology (which was largely absent) and maybe some interesting demos showing how someone used OpenAI's product to solve technical problems (also absent). Instead, I was treated to a bizarre presentation, which felt strangely paternalistic and maybe even a little desperate? In order of events:
- The presentation opened with a discussion of the (impressive) scale of the data centers that OpenAI will be deploying + a little bragging about Sora 2 (I promise you none of the scientists or engineers present give a shit about Sora 2)
- It proceeded to a gentle haranguing focused on how we should not resist using AI, and that in every organization AI will become more popular as a few high performers learn how to use it to get ahead (ok, some demos would be great; OpenAI's tools have been available for months, so now would be a great time to show how a co-worker has used them to solve a complex problem)
- Some discussion about how scientists and engineers tend to be bad at using AI relative to managers/procurement people/executives/lawyers and others with what I would characterize as paper-pushing roles where accuracy isn't actually that important.
- Which finally devolved into a Q&A. The most charitable questions went something like the following: Hi, I am a $type_of_physical_scientist. I love using your tool to help write Python code, but it is completely worthless for helping me solve any kind of problem that I don't already understand very well. For example, here is a tomography technique that I know people are using in another industry that I am mostly unfamiliar with. Right now, my approach would be to read papers about how it works, try to implement it, and maybe contact some other experts if I can't figure it out. Wouldn't it be great if I could just upload the papers about this technique to your bot and have it implement the new technique, saving myself weeks or months of time? But if you try this basic approach you usually end up with something that doesn't work, and while the bot might be able to give some superficial explanation of the phenomenon, it doesn't add much over me just doing the background research and implementation myself, and comes off feeling like a waste of time. The response to these questions was usually some variation of: the bot will get better as it scales, and you should be patient with it and make sure you are prompting it well so that it can lead you to the correct solution.
Which brings me to my primary point: I am someone who has consistently tried to use AI at work in order to be effective, and while it helps somewhat with code creation, it isn't a particularly useful research tool and doesn't save me very much time. Apparently my co-workers are having much the same experience.
It really seems to me that OpenAI and their boosters believe (or would have me believe that they believe) that transformers really are all you need, and that at some point in the near future they will achieve a scale where the system rapidly goes from being able to (actually) help me do my job to being able to comfortably replace me at my job. And the truth is that I just am not seeing it. It also seems like a lot of others aren't either, with recent bubble warnings from various tech leaders (Sam Altman, for instance; by the way, what possible motive is there for making AI bubble statements unless it's an attempt to prevent employees from leaving to found their own startups?).
I have been very inclined to think that this whole industry is in a bubble for months, and now that the mainstream press is picking up on it, it's making me wonder if I am totally wrong. I'd be interested if others (especially anyone with more actual experience in building these things) can help me understand whether I just suck at using them or whether my "vibes" about the current state of the industry are totally incorrect. Or if there is something else going on (i.e. can these things really replace enough customer service or other jobs to justify the infrastructure spend).
Partially out of laziness, and partially out of paranoia (sharing details could lead to my doxxing), I've never written out a longpost that's been on my mind since AI went mainstream. Basically, I am deeply convinced that some dude(s) at Davos or wherever TPTB hang out these days decided in advance that AI is going to be The Next Big Thing, whether it makes a lick of sense or not. I've seen, with my very own eyes, the very kind of workshops you're describing being rammed through in mere weeks since GPT-3 was published, and this happened in sectors famous for their calcification. There's an entire "Tech in Education" infrastructure devoted to getting kids hooked on chatbots as soon as they learn to read or write. There are incentive programs for public sector workers to sit through these kinds of presentations. It's nuts.
Now, is it a bubble? I dunno. Someone recently linked some stats about OpenAI's revenues; if they haven't overinvested insanely, I think they should be fine, but the hype about it reshaping the world (+/-, chatbot romantic partners cooking people's brains, people's ability to research and think going into the toilet, etc.) feels pretty fake.
I don't think you need to conclude there's a 'conspiracy' to notice that the messaging around AI is a bit schizo right now.
You've got anyone who is deeply invested in the industry constantly vaguely implying that the next big improvement is going to be "THE ONE" that makes it able to replace almost any knowledge work, and that this is inevitable and good.
BUT those same people will try to downplay the actual power/risks of their product. "Sure we're trying to build GOD here, but c'mon don't impose oversight, that would be lame!"
And then there are the AI Doomers, who are kinda helping the hype by suggesting that AI is going to go FOOM sooner than expected and completely upend human society in the best case, or end us all in the worst.
But they're not very good at arguing for more oversight for various reasons, since their case demands a complete pause or shutdown, rather than merely regulating and monitoring it.
And then the part where the rubber meets the road is in a very odd state. Some people say that "vibe coding" works, but the products coming out of it are pretty subpar as far as I've seen. And some industries seem to be integrating AI pretty readily, whereas others (like yours) are finding it to be a hassle that might justify the costs but isn't impressive on its own merits.
Consumers can use it for a lot of things that aren't directly productive and capabilities are increasing there but not necessarily towards "able to do all jobs everywhere." Fun distractions on offer, though.
And obviously there are massive capital outlays going into datacenters and the power to run them. And you would expect that the smart people spending this money wouldn't do so unless there was some expectation of a return on their investment.
But right now it looks like they're bleeding money that has to constantly be replaced by infusions of VC cash.
Which, hey, that's fine if it's an actual growth industry. But the capabilities have to grow pretty rapidly if they're going to live up to the hype. And then you've got the Ouroboros of companies producing the chips investing in the companies selling the compute, which in turn invest in the companies buying the compute. Either everyone has a lot of faith in where this is going, or they're desperate to keep the train rolling as long as possible in the hope that something surprising happens, or that they can line their pockets and hop off before disaster.
So it's really starting to look like the goal is "Functional AGI or bust," with 'bust' being the literal implosion of the industry, even if the tech continues to improve.
It's not the messaging that spooks me, it's the sheer size of the marketing and education infrastructure that was deployed to drive adoption, the speed with which it was ready to go, and who it was targeted at. Public and public-adjacent institutions don't usually push people towards the latest fads, but that is exactly what's happening right now.
The best mundane explanation I can think of is that it's some galaxy-brained eurocrat scheme to Lead The World In Innovation or something, except that doing a free marketing campaign for American tech companies (which they usually low-key hate) is a bit of a weird way of doing that, and even if we go with that explanation that still kinda is a conspiracy.
Yeah I see your point there.
I'm just pointing out that you've got the messaging from the boosters and all the money being spent to sell people on it; then there's the other side, with messages from the doomsayers AND messaging on the political side; and then there's the market's response to all this, with evidence that spending related to AI development is propping up growth right now.
It is questionable what the real goal of all that is, if we take everything being said at face value.