I actually somewhat like bitcoin in the long term as a store of value. It is the first mover in terms of creating artificial scarcity, and has surprisingly few weaknesses in terms of preserving that scarcity. Contrast that to something like gold where changes in mining output or industrial demand can impose external price pressures outside of the supply/demand for a safe haven.
That being said, I think the rest of crypto is arguably the largest bubble in human history. I don't see any real value provided by the chains that try to act as both a platform and as a currency. And I expect that at some point those will all come crashing down. And when this happens, I expect that bitcoin will take a major hit. I doubt it will be a lethal blow, but I could easily see a >50% loss happening. That's a lot of risk if you are trying to preserve value.
"Jesus is God (the Father)" is not in the Nicene Creed either. Nathan Jacobs addresses the issue here: https://nathanajacobs.substack.com/p/does-jesus-claim-to-be-god
Most of the time there was not even a proposed vector for value with blockchain. AI is very different.
Why would it decrease pink collar work? Or do you mean the administrative overhang? But why would that hit pink collar stuff more than anything else?
The big problem for now is some form of data validation. There are a lot of customer support jobs that can be 99.9% done by AI, but aren't, because of the tail risk that some combination of words will reveal the wrong customer's information, will let someone into an account without the right checks, etc., plus general reputational risk, like countries and states now accusing Facebook's LLMs of flirting with minors or whatever. All the stuff that LLM red-teaming groups or ChatGPT jailbreak communities do, essentially. You can fire a bad employee and point to them as the source of the legal liability, but if it's your LLM, and the foundation model provider has a big fat liability disclaimer in its contract (which it will), you're more fucked than you'd be if an employee had just gone rogue.
The eventual solution to this - as with self-driving cars - is to improve accuracy and consistency (by running things through multiple LLMs, including prompt security ones like those slowly coming online through Amazon bedrock and other platforms) until the risks are negligible and therefore insurance costs fall below the $50m a year a big corporation is paying for call centers.
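To make that concrete, here is a minimal sketch of what that kind of layered checking could look like. Everything here (IResponseCheck, SupportResponder, the idea of escalating on any failed check) is hypothetical and invented for illustration, not any vendor's actual API:

```csharp
// Hypothetical sketch: run a drafted reply through several independent checks
// (e.g. a PII scanner, a prompt-injection detector, a second "judge" model) and
// only release it to the customer if every check passes.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public interface IResponseCheck
{
    // Returns true if the drafted reply is safe to send for this conversation.
    Task<bool> PassesAsync(string conversation, string draftReply);
}

public sealed class SupportResponder
{
    private readonly Func<string, Task<string>> _draftWithLlm; // wraps whatever model you use
    private readonly IReadOnlyList<IResponseCheck> _checks;

    public SupportResponder(Func<string, Task<string>> draftWithLlm, IReadOnlyList<IResponseCheck> checks)
    {
        _draftWithLlm = draftWithLlm;
        _checks = checks;
    }

    public async Task<(bool Escalate, string Reply)> HandleAsync(string conversation)
    {
        var draft = await _draftWithLlm(conversation);

        foreach (var check in _checks)
        {
            // Any failed check means a human takes over rather than the bot guessing.
            if (!await check.PassesAsync(conversation, draft))
            {
                return (Escalate: true, Reply: string.Empty);
            }
        }
        return (Escalate: false, Reply: draft);
    }
}
```

The design point is just that the model never answers the customer directly; a failed check routes the conversation to a human, which is what pushes the residual risk (and therefore the insurance cost) down.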
But it will take a few more months, maybe a couple of years, sure.
The popular interpretation is of course something about stupid managers following a hype train, but I imagine there is a more charitable explanation along the lines that AI adoption (/workforce replacement) can be expected to result in an increase in productivity and profits once an initial trough of decreased productivity is overcome by building experience and figuring out the best way to integrate AI. The sort of long-term planning that requires going against local incentive gradients (in this case, forcing workers to use AI even if it is detrimental to their productivity for now) is exactly what upper management is there for; if workers/subdivisions doing what is currently optimal were always a winning strategy, management could easily be replaced by a simple profit-based incentive scheme.
Investors want to hear that the company is taking advantage of AI technology.
I don't think it's a kickback thing. I work at a megacorp (over 10k employees worldwide), and the focus on AI came all the way from the C-suite.
5% success rate doesn't seem that bad, from the perspective of this being "the latest hyped technology." Did blockchain integration turn out this well? It's true that it's a lot less than you'd expect from the perspective of this being "the power of God created by man," but I don't think we've reached that point (yet).
Recently at my company, all job level descriptions were updated to include use of AI, and all promotions now have to include a bit about how you use AI.
I'm definitely on the bullish side, but it's quite ridiculous. I just have a script that I run every morning to burn up a bunch of tokens so I meet whatever metric they're tracking. (I did use AI to write that script!)
Early 40s. The skiing injury/insomnia/gastric/heart items are all things that I would put in an acute, or at least "actual malady", category. Your friend's lingering ankle injury is probably more like a chronic condition than acute if we are being really choosy on our diction, but it is nonetheless something being clearly wrong, not the maintenance/preventative work that a nutritionist would generally be for someone without a physiological nutrition/diet problem. I am still missing the purpose of chiropractors, nutritionists, etc, for "normal adults".
Personal anecdote, we had an order from the higher ups that we must use LLMs, and that they will be tracking how often we use them
You're not the first person to tell me that at various companies. Is there some kind of misaligned incentive there, like a KPI from higher up, or a kickback scheme? Or are they true believers?
It increasingly feels to me like the Tyler Cowens of the world are right: the impact will be large, huge even, but take a lot more time to play out than the boosters predict. It will take time, not only for the tech to improve, but for people and companies to learn how to best use it and for complementary infrastructure and skills to build up. The parallels to the personal computer or internet seem increasingly on point, especially the dot-com era. People were, rightly, astounded by those. And, for all the jeering pets.com got, and all the (mostly valid!) reasons it wouldn't work, it ended up mostly just ahead of its time. I and everyone I know buy pet food through an online subscription or auto-recurring purchase. In 20 years I expect AI will be omnipresent in white collar work.
My expectation is that LLM/AI will boost productivity in the white collar sector and reduce pink collar jobs, but not totally upend the jobs market in those two sectors.
I think the answer to your question, in that other post, is yes. This is mainly incidental to the book and so I only have a bit of tangential material about it, near the very end, but I think you're right on the money.
I appreciate the objection, but will mostly decline to respond for the time being. Chapter six is substantially about exactly what you're describing here. It may or may not persuade you but we'll be in a better position to talk about it at that time.
For now what I'd suggest is that shared (genetic) substrate is critical. Your friends, etc. can share higher-order internal realities with you ('culture') only so long as they have the necessary genetic substrate in the first place. Assuming that all humans have the necessary components to be 'like you' given a similar-enough upbringing would be a major mistake. Humans do have an enormous capacity for learned associations and behaviours, but only so long as etc. You get it. We'll come back to this later.
And even then there's a tendency to, er, 'anthropomorphize' other people and assume that just because they laugh at the same jokes, those jokes occur to them in the same way. I'd be wary of this mistake too. There are many possible internal pathways which result in the same external response. Maybe this can be thought of as an interesting window into convergent evolution.
ETA: There's also the phenomenon of mass, lowest-common-denominator culture to consider. A mixed population necessarily devolves to those cultural levels which its mixed foundation can support.
But we're not talking about people yet. Let me set it up first.
I think of AI a lot like I think of my experiences working with H1Bs. LLMs have no concept of truth, no actual work ethic, and basically make whatever mouth sounds get you to leave them alone. With enough supervision they can generate work product, but you can never exactly trust it. If you put them in charge of each other, things go completely off the rails exponentially.
The problem with LLMs will always be supervising them. I think in any area where the truth doesn't matter (fiction, art, chat, summaries of text to a lesser degree) LLMs might crush it. I think for many other automated tasks (data entry from disparate documents), their error rate will probably be in line with a human. But in terms of advanced knowledge work, I expect their output to always have a high variance, and it would be catastrophic for any company to integrate them into their workflow without even more knowledgeable and experienced humans in the loop.
Of course, you then run into the problem of not training up those humans because AI is doing the entry level jobs, or allowing them to go to seed because instead of keeping their skills sharp, they are doing nothing but double check AI output.
Personal anecdote, we had an order from the higher ups that we must use LLMs, and that they will be tracking how often we use them. I asked Windsurf (which they provided me with a license for) and it generated C# code with the following issues (amongst many others):
- It wrapped all uses of HttpClient in a using block. Despite HttpClient implementing IDisposable, you aren't actually supposed to dispose it because this will lead to socket exhaustion when under load
- All DB queries it generated were called synchronously. Similarly to the socket exhaustion issue above, this will lead to thread exhaustion (and generally lower capacity for simultaneous connections, throughput, etc.). On the bright side, at least it parameterized them all. (A sketch of the corrected patterns is below.)
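For what it's worth, the patterns it should have produced look roughly like this. This is a sketch under assumptions (a single shared HttpClient rather than IHttpClientFactory, a SQL Server client for the DB access, and made-up endpoint/table names), not drop-in code:

```csharp
// Sketch of both fixes: reuse one HttpClient for the process instead of
// disposing a new one per request, and await the DB call instead of blocking.
using System.Data.SqlClient;   // Microsoft.Data.SqlClient in newer projects
using System.Net.Http;
using System.Threading.Tasks;

public static class OrderClient
{
    // One long-lived HttpClient; creating and disposing one per call leaves
    // sockets in TIME_WAIT and exhausts them under load.
    private static readonly HttpClient Http = new HttpClient();

    public static async Task<string> FetchOrderAsync(string orderId)
    {
        return await Http.GetStringAsync($"https://example.invalid/orders/{orderId}");
    }

    public static async Task<int> CountOrdersAsync(string connectionString, string customerId)
    {
        using var conn = new SqlConnection(connectionString);
        await conn.OpenAsync();

        // Parameterized and awaited: no string concatenation, no blocked thread-pool thread.
        using var cmd = new SqlCommand(
            "SELECT COUNT(*) FROM Orders WHERE CustomerId = @customerId", conn);
        cmd.Parameters.AddWithValue("@customerId", customerId);

        return (int)await cmd.ExecuteScalarAsync();
    }
}
```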
I started generating crap to please whatever tracking mechanisms they are using, but have completely ignored the output.
Have you traveled for work to any great extent? If not, what you're yearning for likely isn't travel as much as vacation, lack of responsibility and limited adherence to social rules.
Most people who have to travel for work, even those who specifically sought it out for that reason, bounce off hard.
I was browsing through the news today and I found an interesting article about the current state of AI for corporate productivity.
MIT report: 95% of generative AI pilots at companies are failing
Despite the rush to integrate powerful new models, about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L.
There seems to have been a feeling over the last few years that generative AI was going to gut white collar jobs the same way that offshoring gutted blue collar jobs in the 1980s and 90s, and that it was going to happen any day now.
If this study is trustworthy, the promise of AI appears to be less concrete and less imminent than many would hope or fear.
I've been thinking about why that might be, and I've reached three non-exclusive but somewhat unrelated thoughts.
The first is that the Gartner hype cycle is real. With almost every new technology, investors tend to mistake every sigmoid curve for an exponential curve that will keep climbing forever. Few actually are. Are we reaching the point where the practical gains available in each iteration of our current models are beginning to bottom out? I'm not deeply plugged into the industry, nor the research, nor the subculture, but it seems like the substantive value increase per watt is rapidly diminishing. If that's true, and there aren't any efficiency improvements hiding around the next corner, it seems like we may be entering the trough of disillusionment soon.
The second thought that occurs to me is that people seem to be absolutely astounded by the capabilities of LLMs and similar technology.
Caveat: My own experience with LLMs is that it's like talking to a personable schizophrenic from a parallel earth, so take my ramblings with a grain of salt.
It almost seems like LLMs exist in an area similar to very early claims of humanoid automata, like the Mechanical Turk. They can do things that seem human, and as a result, we naturally and unconsciously ascribe other human capabilities to them while downplaying their limits. Eventually, the discrepancy grows too great - usually when somebody notices the cost.
On the third hand, maybe it is a good technology and 95% of companies just don't know how to use it?
Does anyone have any evidence that might lend weight to any of these thoughts, or discredit them?
On top of this, they aren't paying taxes.
Does anyone else get annoyed when they see someone complain about "grammar mistakes" that aren't actually mistakes, where the complaint is mostly a product of the complainer's overly simplified understanding of language rules (usually due to poor education)? Whether it's the incredibly frequent, egregious misunderstandings of the rule for paragraph breaks in dialogue, total failure to recognize the (admittedly dying) subjunctive mood, or mistaking an imperfective-aspect dependent clause in a past-tense sentence for a "mistaken" switch to present tense (because English grammarians refer to the active participle as the "present participle"), I keep finding myself getting quite irritated.
Damn. Been reading posts about your marital issues for a few years now. This did feel inevitable. Not going to lie, I got the impression you were chronically miserable.
Even though I don't usually share your perspective on things, I really enjoy reading your posts.
Anecdotes of your life and work are of great interest to me because, from the bits and pieces you put out there, I feel like you're an older version of me. I'm a young German graduating in a technical field and looking to start a family with my partner soon.
So it does pain me whenever you write about how unaffordable everything is and how having a young family in Germany sucks hahah. I still choose to be more optimistic.
I really hope I will see more upbeat posts from you in the future!
I haven't encountered that yet. This woman's version seems to be that all non-black people have hereditary environmental poisoning that damaged their ability to produce sacred melanin. There is a video where she appears to momentarily imply it was deliberate, though does not name a perpetrator (maybe a community effort; very sophisticated systems thinking).
It seems like a big part of the originating narcissism here might be about distinguishing themselves from Africans, who they've decided are completely unrelated parasitic interlopers out to steal Black Americans' birthright, wealth and identity (this woman is very loose with deploying the p-word).
My experience as a senior software engineer is that I am not worried about AI coming for my job any time soon. My impression (somewhat bolstered by the article) is that AI is most efficient when it is starting from scratch and runs into issues when attempting to integrate into existing workflows. I tell the AI to write unit tests and it fails to do all the mocking required because it doesn't really understand the code flow. I ask it to implement a feature or some flow and it hallucinates symbols that don't exist (enum values, object properties, etc). It will straight up try to lie to me about how certain language features work. It's best utilized where there is some very specific monotonous change I need to make across a variety of files. Even then it sometimes can't resist making a bunch of unrelated changes along the way. I believe that if you are a greenfield-ish startup writing a ton of boilerplate to get your app off the ground, AI is probably great. If you are a mature product that needs to make very targeted changes requiring domain knowledge AI is much less helpful.
I can believe people using AI for different things are having very different experiences and each reporting their impressions accurately.