I recently attended a seminar at work led by OpenAI (whose tools my company is paying for) which was billed as an opportunity to learn more about using AI to do our jobs more effectively. I attended mostly because I assumed there would be some technical discussion of the technology (largely absent) and maybe some interesting demos showing how someone used OpenAI's product to solve technical problems (also absent). Instead, I was treated to a bizarre presentation, which felt strangely paternalistic and maybe even a little desperate? In order of events:
- The presentation opened with a discussion of the (impressive) scale of the data centers that OpenAI will be deploying, plus a little bragging about Sora 2 (I promise you none of the scientists or engineers present give a shit about Sora 2)
- It proceeded to a gentle haranguing about how we should not resist using AI, and how in every organization AI will become more popular as a few high performers learn how to use it to get ahead (OK, some demos would be great; OpenAI's tools have been available for months, so now would be a great time to show how a co-worker has used them to solve a complex problem)
- Some discussion about how scientists and engineers tend to be bad at using AI relative to managers/procurement people/executives/lawyers and others with what I would characterize as paper-pushing roles where accuracy isn't actually that important.
- Which finally devolved into a Q&A. The most charitable questions went something like the following: Hi, I am a $type_of_physical_scientist. I love using your tool to help write Python code, but it is completely worthless for helping me solve any kind of problem that I don't already understand very well. For example, here is a tomography technique that I know people are using in another industry I am mostly unfamiliar with. Right now, my approach would be to read papers about how it works, try to implement it, and maybe contact some other experts if I can't figure it out. Wouldn't it be great if I could just upload the papers about this technique to your bot and have it implement the new technique, saving myself weeks or months of time? But if you try this basic approach you usually end up with something that doesn't work, and while the bot might be able to give some superficial explanation of the phenomenon, it doesn't add much over just doing the background research and implementation myself, and it ends up feeling like a waste of time. The response to these questions was usually some variation of "the bot will get better as it scales" and "be patient with it and make sure you are prompting it well so that it can lead you to the correct solution."
Which brings me to my primary point: I am someone who has consistently tried to use AI at work in order to be effective, and while it helps somewhat with code creation, it isn't a particularly useful research tool and doesn't save me very much time. Apparently my co-workers are having much the same experience.
It really seems to me that OpenAI and their boosters believe (or would have me believe that they believe) that transformers really are all you need, and that at some point in the near future they will achieve a scale where the system rapidly goes from being able to (actually) help me do my job to being able to comfortably replace me at my job. And the truth is that I just am not seeing it. It also seems like a lot of others aren't either, given the recent bubble warnings from various tech leaders (Sam Altman for instance; by the way, what possible motive is there for making AI-bubble statements unless it's an attempt to keep employees from leaving to found their own startups?).
I have been very inclined to think that this whole industry is in a bubble for months, and now that the mainstream press is picking up on it, it's making me wonder if I am totally wrong. I'd be interested if others (especially anyone with more actual experience in building these things) can help me understand whether I just suck at using them or whether my "vibes" about the current state of the industry are totally incorrect. Or whether there is something else going on (i.e., can these things really replace enough customer service or other jobs to justify the infrastructure spend?).

Notes -
I use the tools quite a bit, especially for architecting cloud infrastructure, where, as you might imagine, every solution you are building has already been done by 100,000 other people before you.
In general there's something to be said for point three, but not in the way the speaker intended. If an LLM is incredibly helpful and can eliminate 50+% of your work, most of your work is repetitive junk that can be automated. Just as traditional software in the '90s and '00s automated away a tremendous amount of human labor that amounted to manual data entry, LLMs are very good at automating tasks that are the next level above that.
What do I mean by the next level above manual data entry? Tasks that are performed by many, many people, in only slightly different ways, across many different organizations. Let's say you work in HR and have to compile a weekly email for exec leadership that analyzes discretionary expenditures, based on some grouping, where the data comes from an attached spreadsheet. This is an incredibly common task, with many extremely common sub-patterns (e.g., huge spend by R&D this week on flights and accommodations because they are attending a conference and chose to file it as discretionary instead of getting training and travel pre-approved). Because there are only so many unique scenarios, this kind of work is a good use case for LLMs, even though it is a dynamic analysis task.
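To make that concrete, here's a minimal sketch of what automating that weekly email might look like, assuming the OpenAI Python client and pandas; the file name, column names, model choice, and prompt are all placeholders I made up for illustration, not anything from the talk:

```python
# Minimal sketch: group the spreadsheet, then have an LLM draft the summary.
# Hypothetical spreadsheet layout: one row per expense with "department",
# "category", and "amount" columns.
import pandas as pd
from openai import OpenAI

df = pd.read_excel("discretionary_expenses.xlsx")
summary = (
    df.groupby(["department", "category"])["amount"]
      .sum()
      .sort_values(ascending=False)
      .reset_index()
)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You draft short weekly expense summaries for executives."},
        {"role": "user",
         "content": "Summarize notable discretionary spend this week, "
                    "flagging unusually large line items:\n"
                    + summary.to_string(index=False)},
    ],
)
print(response.choices[0].message.content)
```

The point isn't the specific code; it's that the whole task reduces to a groupby plus a templated prompt, which is exactly the kind of well-trodden ground these models have seen thousands of times.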
The reason engineers and scientists find less general utility in LLMs is likely just that they aren't walking well-trodden ground. Were I in one of the roles that benefit so much, I personally wouldn't be bragging about how much of my workload can be automated.
This is starting to happen in consulting.
All the big consulting firms are crowing about how "AI forward" they are, as they think that is good marketing and brand positioning to clients to show how technologically advanced they are, etc.
It probably is good marketing.
But in a few client meetings now, questions have come up like "so if you're so much more efficient with AI, why are we paying a 7% technology fee?" or "if you're using AI to automate and make your people's time more efficient, can we pay for less time?"
Some of these questions are humorous, some not. But the vibe is building and it's funny to watch. I have a feeling consulting margins are about to start experiencing some pricing pressure as the increase in productivity turns into margin wars.