I can believe people using AI for different things are having very different experiences and each reporting their impressions accurately.
Partially, but there is also a honeymoon phase and a phenomenon where people feel more productive but have mostly just shifted what they do, not increased their actual productivity.
Perhaps this is something that will pass with increased experience with the tools, but it has not been my experience with the people I manage, nor for my friends in similar managerial roles. It could of course be a combination of the above as well. Maybe the models just need to get a bit better and people need experience with those models. Who knows?
To me, the situations where AI actually is a meaningful productivity booster for programming seem highly specific. It should be clear, though, that for those situations it is very valuable.
I would be more worried for areas where things don't actually have to be "correct" (for quality or legal reasons), like visual art generation. Even there, I imagine it will mostly affect the things liable to be (or that already have been) outsourced.
Have you noticed a difference in quality of analysis of mature code-bases versus its ability to make changes/additions to them? The consensus on our team so far seems to be that its analysis is significantly better than its generation, though how much of that is the quality of the AI versus the quality of our prompting is rather up in the air.
This makes sense to me. The impact of the spreadsheet has been huge, but it took a long time to settle in everywhere, and the accounting department still exists, even though the guys "running the numbers" don't anymore. There are still plenty of operational systems running on DOS or OS/2: if it isn't broken, don't fix it, and things take time to replace.
What’s the base rate?
If I saw “rapid revenue acceleration” on a mass email from my upper management, I’d expect roughly zero change in my day to day experience. 95% “little to no impact” is right there in Lizardman territory.
Press releases have the same incentives whether or not a technology (or policy, or reorg, or consent decree, or…) is actually going to benefit me. Companies compete on hype, and so long as AI is a Schelling point, we are basically obligated to mention it. That’s not evidence that the hype is real, or even that management believes it’s real. Just that it’s an accepted signal of agility and awareness.
The article points out a number of stumbling blocks. Centralizing adoption. Funding marketing instead of back-office optimizations. Rolling your own AI. Companies which avoided these were a lot more likely to see actual revenue improvements.
I can say that my company probably stalled out on the second one. I’m in a building full of programmers, but even the most AI-motivated are doing more with Copilot at home than with the company’s GPT wrapper. There’s no pipeline for integrated programming tools. Given industry-specific concerns about data, there might never be!
But that means we haven’t reached the top of an adoption curve. If the state of the art never advanced, we could still get value just from catching up. That leaves me reluctant to wave away the underlying technology.
Well, that's an interesting question - e.g. banks use deposits to issue loans, but what is Coinbase doing with its 2 million bitcoin deposits? This is a valid question but very different from the assertion @Tree was proposing. Looking at https://data.bitcoinity.org/markets/volume/30d?c=e&t=b the trading volume in BTC is around tens of thousands of coins traded daily, which is of course a small part of the overall bitcoin mass but still a respectable volume as it seems to me. Over a longer period of months, the volume is in the millions, so I don't think it'd be right to assume the BTC market is so illiquid that the prices are substantially caused by lack of liquidity. Of course, I am not an economist, so if somebody more qualified could point out an error in this assessment, I'd be thankful, but that's how it appears to me.
Yea I remember a fair amount of this clustered west of Skid Row in a neighborhood called the Toy District I think. Not just toys though, just about anything that can be mass imported from Asia wholesale can be found there now.
https://en.wikipedia.org/wiki/Toy_District,_Los_Angeles
The article discusses the economy of the area a little bit. Looks like this: https://maps.app.goo.gl/1XyjCRMusLzY5SGBA
From my company's perspective, a lot of AI use is limited by policy. We aren't allowed to provide proprietary information. Company policy is to "only enter things that we wouldn't mind going viral on the Internet." This really limits anything I could do with it. At most I can use it as a Miss Manners guide. The coders are able to use it more, which frees them up to play Madden for longer or attend more meetings with the Product Owner.
My experience as a senior software engineer is that I am not worried about AI coming for my job any time soon. My impression (somewhat bolstered by the article) is that AI is most efficient when it is starting from scratch and runs into issues when attempting to integrate into existing workflows. I tell the AI to write unit tests and it fails to do all the mocking required because it doesn't really understand the code flow. I ask it to implement a feature or some flow and it hallucinates symbols that don't exist (enum values, object properties, etc). It will straight up try to lie to me about how certain language features work. It's best utilized where there is some very specific monotonous change I need to make across a variety of files. Even then it sometimes can't resist making a bunch of unrelated changes along the way. I believe that if you are a greenfield-ish startup writing a ton of boilerplate to get your app off the ground, AI is probably great. If you are a mature product that needs to make very targeted changes requiring domain knowledge AI is much less helpful.
I actually somewhat like bitcoin in the long term as a store of value. It is the first mover in terms of creating artificial scarcity, and has surprisingly few weaknesses in terms of preserving that scarcity. Contrast that to something like gold where changes in mining output or industrial demand can impose external price pressures outside of the supply/demand for a safe haven.
That being said, I think the rest of crypto is arguably the largest bubble in human history. I don't see any real value provided by the chains that try to act as both a platform and as a currency. And i expect that at some point those will all come crashing down. And when this happens, I expect that bitcoin will take a major hit. I doubt it will be a lethal blow, but I could easily see a >50% loss happening. That's a lot of risk if you are trying to preserve value.
"Jesus is God (the Father)" is not in the Nicene Creed either. Nathan Jacobs addresses the issue here: https://nathanajacobs.substack.com/p/does-jesus-claim-to-be-god
There was not even a proposed vector for value with Blockchain most of the time. AI is very different.
Why would it decrease pink collar work? Or do you mean the administrative overhang? But why would that hit pink collar stuff more than anything else?
The big problem for now is some form of data validation. There are a lot of customer support jobs that can be 99.9% done by AI, but aren’t because of the tail risk that some combination of words will reveal the wrong customer’s information, will allow someone into the account without the right checks, etc, plus general reputational risk, like the fact that countries and states are now accusing Facebook's LLMs of flirting with minors or whatever. All the stuff that LLM red-teaming groups or ChatGPT jailbreak communities do, essentially. You can fire a bad employee and contain the legal liability, but if it's your LLM and the foundation model provider has a big fat liability disclaimer in its contract (which it will), you’re more fucked than you’d be if an employee had just gone rogue.
The eventual solution to this - as with self-driving cars - is to improve accuracy and consistency (by running things through multiple LLMs, including prompt-security ones like those slowly coming online through Amazon Bedrock and other platforms) until the risks are negligible and therefore insurance costs fall below the $50m a year a big corporation is paying for call centers.
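A minimal sketch of what that layered checking could look like, purely illustrative: the `call_llm` helper, the prompts, and the APPROVE/REJECT convention are all assumptions, not any vendor's actual API or product.

```python
# Sketch of a two-pass guardrail: one model drafts the customer-support
# reply, a second model vets it, and nothing reaches the customer unless
# the checker approves. `call_llm` is a placeholder for whichever
# provider's chat-completion call you actually use.

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder: swap in your provider's API call here."""
    raise NotImplementedError

GUARDRAIL_PROMPT = (
    "You are a compliance checker for customer-support replies. "
    "Answer APPROVE only if the reply reveals no account details beyond "
    "those of the verified customer and takes no account action without "
    "the required identity checks. Otherwise answer REJECT."
)

def answer_customer(question: str, verified_customer_record: str) -> str:
    # First pass: draft a reply grounded only in the verified record.
    draft = call_llm(
        system_prompt=(
            "You are a customer-support agent. Use only the provided "
            "customer record; never guess account details."
        ),
        user_prompt=(
            f"Customer record:\n{verified_customer_record}\n\n"
            f"Question:\n{question}"
        ),
    )
    # Second pass: an independent check before anything is sent.
    verdict = call_llm(GUARDRAIL_PROMPT, draft)
    if verdict.strip().upper().startswith("APPROVE"):
        return draft
    # Fall back to a human rather than risk leaking the wrong account's data.
    return "Let me transfer you to a colleague who can help with that."
```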
But it will take a few more months, maybe a couple of years, sure.
The popular interpretation is of course something about stupid managers following a hype train, but I imagine there is a more charitable explanation along the lines that AI adoption (/workforce replacement) can be expected to result in an increase in productivity and profits once an initial trough of decreased productivity is overcome by building experience and figuring out the best way to integrate AI. The sort of long-term planning that requires going against local incentive gradients (in this case, forcing workers to use AI even if it is detrimental to their productivity for now) is exactly what upper management is there for; if workers/subdivisions doing what is currently optimal were always a winning strategy, management could easily be replaced by a simple profit-based incentive scheme.
Investors want to hear that the company is taking advantage of AI technology.
I don't think it's a kickback thing. I work at a megacorp (over 10k employees worldwide) and the focus on AI came all the way from the C-suite.
5% success rate doesn't seem that bad, from the perspective of this being "the latest hyped technology." Did blockchain integration turn out this well? It's true that it's a lot less than you'd expect from the perspective of this being "the power of God created by man," but I don't think we've reached that point (yet).
Recently at my company, all job level descriptions were updated to include use of AI, and all promotions now have to include a bit about how you use AI.
I'm definitely on the bullish side, but it's quite ridiculous. I just have a script that I run every morning to burn up a bunch of tokens so I meet whatever metric they're tracking. (I did use AI to write that script!)
Early 40s. The skiing injury/insomnia/gastric/heart items are all things that I would put in an acute, or at least "actual malady", category. Your friend's lingering ankle injury is probably more like a chronic condition than acute if we are being really choosy on our diction, but it is nonetheless something being clearly wrong, not the maintenance/preventative work that a nutritionist would generally be for someone without a physiological nutrition/diet problem. I am still missing the purpose of chiropractors, nutritionists, etc, for "normal adults".
Personal anecdote: we had an order from the higher-ups that we must use LLMs, and that they will be tracking how often we use them.
You're not the first person to tell me that at various companies. Is there some kind of misaligned incentive there, like a KPI from higher up, or a kickback scheme? Or are they true believers?
It increasingly feels to me like the Tyler Cowens of the world are right. That the impact will be large, huge even, but take a lot more time to play out than the boosters predict. It will take time, not only for the tech to improve, but for people and companies to learn how to best use it and for complementary infrastructure and skills to build up. The parallels to the personal computer or internet seem increasingly on point, especially the dot-com era. People were, rightly, astounded by those. And, for all the jeering pets.com got, and all the (mostly valid!) reasons it wouldn't work, it ended up mostly just ahead of its time. I and everyone I know buy pet food through an online subscription or auto-recurring purchase. In 20 years I expect AI will be omnipresent in white collar work.
My expectation is that LLM/AI will boost productivity in the white collar sector and reduce pink collar jobs, but not totally upend the jobs market in those two sectors.
I think the answer to your question, in that other post, is yes. This is mainly incidental to the book and so I only have a bit of tangential material about it, near the very end, but I think you're right on the money.
I appreciate the objection, but will mostly decline to respond for the time being. Chapter six is substantially about exactly what you're describing here. It may or may not persuade you but we'll be in a better position to talk about it at that time.
For now what I'd suggest is that shared (genetic) substrate is critical. Your friends, etc. can share higher-order internal realities with you ('culture') only so long as they have the necessary genetic substrate in the first place. Assuming that all humans have the necessary components to be 'like you' given a similar-enough upbringing would be a major mistake. Humans do have an enormous capacity for learned associations and behaviours, but only so long as etc. You get it. We'll come back to this later.
And even then there's a tendency to, er, 'anthropomorphize' other people and assume that just because they laugh at the same jokes, those jokes occur to them in the same way. I'd be wary of this mistake too. There are many possible internal pathways which result in the same external response. Maybe this can be thought of as an interesting window into convergent evolution.
ETA: There's also the phenomenon of mass, lowest-common-denominator culture to consider. A mixed population necessarily devolves to those cultural levels which its mixed foundation can support.
But we're not talking about people yet. Let me set it up first.
This seems like an extremely odd metric to support the argument that you are making.
At the very least, to use the 5% success rate to understand AI's revolutionary potential, we need to know what the average value unlocked in those 5% of successes is, and the average cost across the whole dataset. If the costs are minimal, and the returns are 100x costs for the successes, then even if only 5% succeed every single company should be making that bet.
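To make that concrete with made-up numbers (the cost and the 100x payoff below are just the illustrative assumptions from the previous paragraph, not figures from the study):

```python
# Toy expected-value check for the "only 5% succeed" framing.
# Assumed, purely illustrative numbers: every pilot costs 1 unit,
# a successful pilot returns 100 units, and 5% of pilots succeed.
cost_per_pilot = 1.0
payoff_if_success = 100.0
p_success = 0.05

expected_return = p_success * payoff_if_success - cost_per_pilot
print(expected_return)  # 4.0: positive, so the bet pays off even at a 5% hit rate
```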
On top of that, what's the timeline function? When were these programs launched? How long have they been going on? Are the older programs more successful than the newer ones? If most of the 5% are over a year old, while most of the 95% are less than a year old, we might be judging unripe tomatoes here.
Then, add to that, there's value in having institutional knowledge and expertise about AI. By having employees who understand AI, even if the pilot programs fail, they'll see opportunities to implement it in the future and understand how to integrate it into their workflow.
It just seems odd to declare AI dead based off this data.