This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I've all of a sudden seen AI blackpilling break out into the normie space around me. Not so much about FOOM, paperclipping, or Terminator scenarios, but around the sudden disruptive nature of it, and especially around economic upheaval. Not exactly sure why. Veo3 has been part of it.
For example, coworkers suddenly aware that AI is going to completely disrupt the job market and economy, and very soon. People are organically discovering the @2rafa wonderment at how precariously, even past due, a great deal of industry and the surrounding B2B services industries stand to be domino'd over. If my observation generalizes - that middle-class normies are waking up to the doompill of AI economic disruption - what is going to happen?
Let's consider it from two points of view: 1. They're right, and 2. They're wrong. 1 is pretty predictable fodder here - massive, gamechanging social and economic disruption, with a difficult-to-predict state on the other side.
But is 2 that much less worrisome? Even if everyone is 'wrong', and AI is somehow not going to take away 'careers', people worrying en masse that it will can still manifest serious disruption. People are already starting to hold their breath: stopping hiring, stopping spending, running Hail Marys, checking out.
Somehow, it's only senior management who doesn't realize the impact. (They keep framing it as 'If we can cut costs, we'll come out on top', instead of following the logical conclusion: if everyone stops spending, the B2B economy collapses.) I have a nontechnical coworker who recently recreated a complex business intelligence tool we purchased not long ago, using readily available AI and a little bit of coaching. He had an oh-shit moment when he realized how cannibalized the software industry is about to get. The film industry seems about to completely topple, not because Veo3 will replace it immediately, but because, who's going to make a giant investment in that space right now?
I suspect the macro economic shock is going to hit faster than most are expecting, and faster than actual GDP gains will be made, but maybe I'm just an idiot.
If 1/3 of your coworkers are worried, you'll notice.
If 1/3 of senior management is worried, that's not a majority, and management won't say anything.
I'm continuing to make the confident prediction that AI, no matter how smart, will be given to insanity, personality disorders, just plain stubbornness, etc, and that GPP from the Sirius Cybernetics Corporation will be like mules or llamas in past days: it has its uses, but for applications requiring sanity, common sense, and on-taskness over brainpower, you still need a person. This will make the white collar job market worse, especially at entry level, and probably some parts of the blue collar job market. It won't end the concept of employment. There will probably, at the end of the day, just be more NEETs and lower pay for college degrees in nothing in particular.
These are mere engineering issues.
If you were a zoomer you'd have already solved a problem related to your chatbot going insane, and you'd know it's not a hard constraint.
This is a little wild of a prediction given that it already seems to be proven wrong.
Current gen AIs already seem poised to be pretty disruptive.
I think the main reason they are not as disruptive is because they aren't done cooking. Why try and squeeze out work from an AI right now when the AI will be better and cheaper in 6 months?
This is my take. It's still a nascent technology.
For some reason this popped into my head, and I compared it to this or even this.
I don't think AI is going to be a VR style hype train to nowhere.
It will always be better and cheaper in six months.
Meanwhile I've already used it to do things for my startup that I can't even imagine how I'd have paid for otherwise.
I've also had better luck using AI as a disposable coder I can rush to test and let it run into walls rather than hoping my dev team would be able to figure things out organically. The tension of streamlining vs overloading a dev team is always an issue, and I've certainly found human intervention necessary for a great many edge cases that will show up.
Certainly the administrative loads and documentation tracking for what projects are extant and what is viable is helped. In the weirdest sense AI is useful as a stupid conversation partner to sanity check, and it doesn't mind being bullied or rejecting effusive praise.
Would you mind sharing any examples? I've failed to engage with AI in any meaningful way, not for ideological/luddite reasons, but the simple inertia of doing things the way I've always done them. I'd love to try something new, and don't know where to begin.
Don't really have time to get into it atm but I didn't want to leave you hanging. Just play with it. Try to get it to do stuff. You'll be amazed.
Me too. I haven't really played around with AI because of some weird anxiety about not being able to apply it effectively. I'm kind of sticking my head in the sand on AI to avoid thinking about my FOMO.
You can literally just start by asking it how to apply it effectively.
Think of it this way: you've missed out on a bit, sure, but AI is progressing so fast right now that the value you left on the table so far is really insignificant compared to what you can get at any point you decide to jump in.
Most of the pessimism I've seen has been quite different from this.
Rather than fear that the AI will work, the fear seems to be that management will buy into the hype and fire everyone, regardless of whether it works or not.
If it does work, you're out of a job because the whole industry has been displaced. If it doesn't work, you're out of a job because management was greedy and they all follow each other like lemmings, so it's going to be a nightmare to find a new job at a company that isn't infected with the same mindset.
I think that this is justified. There is a reason why tropes such as the Pointy-Haired Boss from Dilbert or the Boss from BOFH exist. In a lot of non-tech companies, non-technical people are in charge of IT management. Often, this attracts a certain sort of person. "We are using tech from a decade ago and it is working okay for our needs" is no way to bedazzle the board or future directors. While IT might be best seen as some more complex version of plumbing, which should go mostly unnoticed by users when it works well, your average head of IT has delusions of grandeur which go well beyond those of the head of facility management.
For example, anyone who understands the basics of the blockchain will immediately notice that it does not offer anything interesting to 99% of non-scammy enterprises. Luckily, your IT manager can count on the probability of a board member understanding what the blockchain is being very small, so they can sell a fairy tale of the blockchain being the future of IT, point out how people got rich from bitcoin, how there is money on the street just waiting to be picked up, and how with their plan United Dairy Producers Inc will get a slice of the cake. And they can also depend on consultants popping up who will happily sell them some repackaged open source blockchain software.
While blockchain might serve as a baseline for "empty hype", AI certainly has non-zero potential for most corporations. But unlike blockchain, there is no decisive first-mover advantage in adopting AI tech for non-tech companies: if your archenemy Dairy and Cheese United adopts some tech a year ahead of you, it seems unlikely that they will bankrupt you in that time just because they reduced their accounting costs to zero.
The big fear I personally have is that management buys into the promises of consultants and vendors who claim that AI will be their solution, and advises cost cutting before the benefits materialize - or worse, when the low-hanging fruit is all that gets plucked.
If you want a preview of this nonsense, just attend literally any conference now. Every single vendor is now forced to add 'AI' into their suite of offerings, but more concerningly, they are saying that they will deliver AI capabilities to clients who hire these guys. The incentive structure for companies is to have an internal manager assigned to spearhead this endeavor, and that person's incentive in turn is to outsource all possible responsibility to the vendor. A vendor that is inclined to lie because it has little to lose is the most likely outcome, and I've seen it happen so many times. Builder.ai is the clearest high-profile example of late, but I've seen this happen like a dozen fucking times already.
No, I think I was unclear - yes, this is in line with what I'm seeing. When I said sr. mgmt doesn't realize the impact, I meant the follow-through logic of the macro effects of every other company also freezing spending and hiring.
My gut feeling is that this AI wave will be a short panic and then basically blow over. To predict massive job loss you have to assume that jobs are already distributed rationally, and that companies are good at adapting to and using new technology efficiently. Neither of these is even remotely true!
If you've ever seen how the sausage gets made at a major company, jobs are very much withheld and created on more of an internal, political basis than any actual needs the companies have. On top of that, most major organizations are still barely adapted to using spreadsheets and the most simple algorithmic techniques that were created decades ago. Literally just using excel to automate tasks could save these companies tens of millions of dollars a year. And yet... they don't?
So the fact that there's a new technology coming out that can do a bunch of fancy new stuff does not convince me that we'll have massive job loss at all. What will likely happen, and what has already been happening for a while, is that the people in white collar roles who can use these tools will just offload more and more work to them privately, and pretend they are still just as busy. The roles might get a tad more competitive.
But we're not going to be in a doomsday scenario where everyone loses jobs, even IF AGI comes out tomorrow.
Citation needed.
I think that "tens of millions of dollars" is not a good measure for potential savings. A more appropriate way to look at savings would be to consider the fraction of the total costs of the company.
Software consultants and vendors of enterprise resource planning software have been ~~plaguing~~ advising all kinds of businesses since the early 90s. I find it very unlikely that there is any sector which could save ten percent of its costs by just using Excel. Stuff like electronic inventory management or customer relations databases is standard for pretty much any business larger than a lemonade stand.
The lowest-hanging fruit left is probably more on the order of 1% of total costs, and I think that it is reasonable for companies to be wary there. Consultants have been known in the past to overpromise and underdeliver, and that shiny ERP solution which is supposed to take care of all of your software needs might end up being a software hellscape which requires an expensive specialist to run and only does half of what you want it to do.
I will grant you that in large organisations, departments often become fiefdoms whose bosses employ a lot of people simply to show how important their department is, not because they are needed. But for most organizations, these oversized departments came to be long after the invention of spreadsheet software, and at the core they are indicative of a political, not a technological problem.
Let me join the chorus of voices enthusiastically agreeing with you about how jobs are already bullshit. I've never been quite sure whether this maximally cynical view is true, but it sure feels true. One white-collar worker has 10x more power to, well, do stuff than 100 years ago, but somehow we keep finding things for them to do. And so Elon can fire 80% of Twitter staff, and "somehow" Twitter continues to function normally.
With that said, I worry that this is a metastable state. Witness how thoroughly the zeitgeist of work changed after COVID - all of a sudden, in my (bullshit white-collar) industry, it's just the norm to WFH and maybe grudgingly come in to the office for a few hours 3 days a week. Prior to 2020, it was very hard to get a company to agree to let you WFH even one day a week, because they knew you'd probably spend the time much less productively. Again, "somehow" the real work that was out there still seems to get done.
If AI makes it more and more obvious that office work is now mostly just adult daycare, that lowers the transition energy even more. And we might just be waiting for another sudden societal shock to get us over that hump, and transition to a world where 95% of people are unemployed and this is considered normal. We're heading into uncharted waters.
When working from home, I find I'm more productive because I know I can block out my time the way I like, so there's no panic rush to try and get it all done between hour A and hour B. If I'm not busy (because there are times when there just isn't that much 'real' work to be done), I can go off and do housework or do personal things online, then the next batch of real work comes in via email or whatever and I work on that. There isn't the rush of "I have to get this done by X o'clock, because I have to be out of here by clocking-off time, because I have to be home on time to make sure I don't miss the delivery" or whatever, so I can be more thorough.
In the office, if the 'real work' isn't enough to fill up the day, then I do waste time online, pretending to be busy or procrastinating, putting off work to fill up those empty hours. The difference is that at home, I'll go and put on a load of laundry. At work, I'll have some tabs open and a spreadsheet and pretend to be 'working'.
Hmmm, these are reasonable points. But as someone else pointed out, part of the reason we're in this state is that the government has strong incentives to keep the unemployment rate down.
Then again government dysfunction is increasing too!
I haven't worked at very many firms, but it has not been my experience that any of the office jobs in my department are perfunctory. Around 200 of us move billions of dollars in investments: originating and underwriting new construction investments, managing those investments over their lifecycle, inspecting them and eventually exiting them. As one of the tech guys who builds and maintains the tools used by the teams doing these various tasks, I have a decent idea of what each group does, and I just don't really think any of the job categories are bullshit. How big each group is does have some politics to it - maybe originations could be run leaner, and our tech team could run at either a lower headcount (focused on keeping things working) or a higher headcount (building more tools from our backlog) - but ultimately that isn't arbitrary, and the marginal employee will add more value even if it's not clear whether the marginal value exceeds the marginal cost.
Some of our employees are very much doing email jobs, they interface with outside syndicators who hunt for deals for us to evaluate and then enter the deal information into our system. We even build tooling and imports to make this process smoother but someone actually does need to be the person to ask the syndicators what's going on when things aren't perfectly normal and build up the case for or against an individual investment.
I'm not sure what exactly people are imagining when they think about bullshit jobs, it's always some vagueness or pointing out that a lot of time is spent waiting around rather than hammering nails for the whole shift or whatever. But it actually is genuinely important that when the email comes in you have someone to evaluate what it's saying and pull the right levers in response. The act of coordinating these people is also itself a pretty complicated job and I can attest that automating these tasks is tricky and full of difficult process questions.
There are bullshit jobs, but only in fat companies.
E.g. I know of a bank that has a whole dozen-person 'enterprise architecture' department that's supposed to manage their IT architecture.
They manage, instead, an erroneous, mostly fictional model of the actual IT architecture.
"Bullshit jobs" strikes me as a massive motte and bailey.
There definitely are bullshit jobs. But a very common case of a "bullshit" job is one where the employee does work that's actually essential to a company or to society, but doesn't directly produce tangible things, so it feels like his job is useless.
"To a company" does a lot of legwork here. Jobs that are essential to companies but only for the companies to compete for the market and squeeze each other out, with nearly zero sum for society, are the classic example of a bullshit job.
Sounds like you're just talking about rivalrous or zero sum work, which certainly exists but I don't think is usually what people mean when they invoke bullshit jobs. Usually people are trying to bring to mind people digging holes and filling them back in again, not competition.
Doesn't squeezing each other out mean that they are becoming leaner and more efficient so giving the same products/services at lower cost to society? Can you give me some examples of the sort of jobs you are talking about? I am not sure I understand.
I specified advertising because advertising is explicitly not about leaning down production to make the products cheaper, but about capturing more buyers. For a buyer, it does not matter whether they buy product A or product B if the products are identical but one captures 90% of attention space. For the companies, it means an order of magnitude in revenue.
I once worked for a massive multinational company where I interacted relatively frequently with the enterprise architects. They were getting shit for their models being inaccurate, which they were, but this wasn't because they were creating erroneous models - it was because the IT architecture changed by the time they managed to finish a model. When you have thousands of people constantly updating something, and not documenting what they're doing, it's hard to create an accurate, up-to-date model.
You can of course create a high-level model, but that isn't very useful. What they realised had to be done was automating at least part of the model generation, but since they couldn't get any budget for that (in part because they were behind on model creation!) they were stuck with manually updating their models and people not using their work.
Were their jobs bullshit? They were needed at the company, but things were structured in a way where they were unable to produce much value.
This isn't a truly massive company; it's a national one with maybe <50 in-house developers. There's a 35-year-old system that's barely documented because management is, as always, clueless and doesn't care about pushing it or even allocating workforce for it.
Short of cheap, competent AGI agents, with the workforce retiring, the entire company is inevitably headed for a systemic crash, which will create a gigantic PR problem because it's the sort of business that needs to be reliably up at all times and has ~100,000s of customers. So far they haven't really had an outage longer than a few hours.
If you find your work meaningful and see that it is not bullshit, well, good for you. I've also been mostly lucky in that aspect: I've done very little bullshit through the years. But I've recently ended up in "Bullshit Jobs" territory by essentially specializing in tech that is designed for scaling to millions of concurrent users, and applying it to B2B that is going to see at most a couple of thousand users if they capture the majority of the market. There is very little wrong with the tech in itself, and it is useful... but the thing that I'm using it for is not benefiting the business, improving the world or making me happy, because it is being misapplied. I quit my last job for that very reason, thought I was out of it, and all of a sudden I got transferred back to doing the same thing at the new place.
You say this is because of political/irrational reasons but I think it's because the human capital available to them is too stupid to use a spreadsheet to (e.g.) predict when they need to restock the warehouse. Something intelligent agents should be very useful at!
Currently, you can either have Pete walk the warehouse every week and plan a re-order list based on his vibes, or you can buy an expensive business intelligence package, which is risky because it requires you to switch a ton of your existing workflow over. The smart simple solution a college grad in hard sciences with a spreadsheet can come up with is not available to most businesses because that grad doesn't want to work in an import-distribution business.
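The "smart simple solution" in question is basically the textbook reorder-point calculation: flag an item for reorder when on-hand stock drops to expected demand over the supplier lead time plus a safety buffer. A minimal sketch of what that spreadsheet logic looks like as code - all SKUs and numbers here are made up for illustration:

```python
def reorder_point(daily_demand: float, lead_time_days: float,
                  safety_stock: float) -> float:
    """Stock level at which a new order should be placed:
    expected demand during the supplier lead time, plus a buffer."""
    return daily_demand * lead_time_days + safety_stock

# (sku, units on hand, avg daily demand, lead time in days, safety stock)
inventory = [
    ("WIDGET-A", 90, 10.0, 7, 30),   # reorder point = 10*7 + 30 = 100
    ("WIDGET-B", 400, 5.0, 14, 40),  # reorder point = 5*14 + 40 = 110
]

to_reorder = [
    sku for sku, on_hand, demand, lead, safety in inventory
    if on_hand <= reorder_point(demand, lead, safety)
]
print(to_reorder)  # ['WIDGET-A'] - only A is at or below its reorder point
```

That's the whole trick Pete's vibe walk is approximating; the hard part for these businesses is not the formula but getting the on-hand and demand numbers into any structured form at all.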
I've worked with this type of business before, and I think you've diagnosed the problem correctly. I don't see them being able to get AI to automate any of these steps though for the same reasons that they couldn't get spreadsheets to help them.
With current AI, sure, but we might not be too far away from AI bridging that gap from the other side.
Well said.
I’m inclined to take the indictment of economic inefficiency even further, and point out that jobs at major companies are also created on an EXTERNAL political basis. As @hydroacetylene says, the government has a variety of incentives (which may or may not include his schizo one) to keep the (formal) employment rate high. While I am skeptical that LLM-based AI will ever get good (or at least reliable) enough to make mass unemployment a realistic possibility, even if we grant the hypothetical that it actually will, I fully expect the government to disrupt the disruption by just writing some legislation which obliges corps to employ human rubber-stampers (“supervisory oversight”) on AI processes, thereby neatly regenerating all the white-collar jobs which have been automated. Legal compliance hits startups just as much as it hits status quo inc.
Absolutely. Agreed on all points here. Even if we do get mass automation, social pressure won't allow for mass unemployment.
What if we get decent UBI though? Or enough promises of a socialist utopia where no one has to work that the masses buy into?
The year is 2050. Humans have long since ceased to do productive work. Amazon and Walmart are giant government contractors competing to provide unemployed people with 'basic' - a ration of groceries, clothes, household goods, etc. The two megacorps keep lobbying for basic to increase; their primary bargaining tool is offering to hire more 'process supervisors' and 'account managers' to sit in rubber rooms.
The year is 2050. Mass unemployment has been forestalled by government revenue-per-employee maximums. Practically speaking, this means the vast majority of jobs are sinecures, but the social prestige of being sinecured to a particular firm or brand has skyrocketed in value, and likewise the PR cost of sinecured associates can potentially be Bud-Light-level apocalyptic. As a result, those sinecured at highly-desirable firms often take on substantial relations efforts on behalf of the firm for free: to maintain their sinecure, acquire additional sinecures, and potentially rake in social media influencer consulting fees, but also to increase the social prestige of the firm overall, and by proxy, themselves. Having multiple sinecures is possible for people who are motivated to do things that increase brand value or mindshare, and people with exceptional social desirability may acquire many more. Income supplementation via gig work will still be possible for things that can't be done as well by computers and robots or require the human touch, though naturally the returns will be low since the supply is huge. Of course, anyone with valuable sinecures must be on good behavior in public, for bringing disrepute or negative reaction to the firm will mean instant termination, and potential blackballing.
Those who have been blackballed or are otherwise unable to merit a sinecure on the basis of their social worthiness must provide by doing actual labor: probably many hours of grinding gig work, or finding a valuable contracting niche. Most actual work below executive level done at the firms is performed by independent contractors, who supplement their sinecures (if they have one) with hourly contracting fees. And of course, there's always welfare, which only provides at subsistence levels.
schizopost: Maintaining a high female employment rate in white collar labor is the end all be all of western governments, sufficiently big corporations are being blackmailed into not automating these tasks to prop it up. The SEC, IRS, and EEOC exist to make sure the government always has a hammer to drop.
/schizopost
It’s probably just economic inefficiency.
Why?
If your reply is "because WIMMEN RULE THE WORLD and they force the government to go along and prop them up with bullshit jobs to be 'equal' to men" then I say you are mistaken.
It's economics. In order to grow the economy, business and government wanted a larger pool of workers. Here are all these stay-at-home women, get them into the workforce (and if that has the happy side-effect of depressing wages and reducing our labour costs well that's nice too). Social expectations shifted, helped along by feminism, that women who didn't go out to work were "wasting" their education and were somehow being parasitic on society. Economic expectations around two-income families meant that things like mortgages were calculated on the basis "both of you are earning, right?" Tax revenue is also based on "everyone who can work is working" and that includes working-age women. A lot of service industry jobs (and I'm including things like nursing and teaching here) are now female-dominated.
Economic necessity also means that women need jobs (no more staying at home being supported by parents if single).
You can try having the "man is the breadwinner, woman is the homemaker and child raiser" system back, but unless the guy is making very good money, it won't be economically feasible; and for low-income households, given government supports, the woman is probably better off being unmarried and raising the kid, with a live-in boyfriend or a boyfriend who lives elsewhere and just visits. Is that ideal? No. Is that what we've got now? Yes. Okay, take all the women out of the workforce, raise men's wages accordingly. And every cost-cutting management rule goes "so now we need cheap immigrant labour or automation instead".
I agree with this -
...but this would suggest to me that the disruption will come from new entrants (startups and scaleups) who can effectively leverage AI tools to transform workflows. If Status Quo Inc don't incorporate AI effectively and sell their services for $1000/hour, but Insurgent Inc are able to sell materially equivalent goods for $100/hour, then clients and customers will eventually switch suppliers. Obviously this won't apply as strongly in industries with very strong incumbent advantages, but even here I would expect some disruption - see e.g., Palantir making inroads in military procurement.
I do think that disruption is happening, but again it's much more difficult than it seems. Startups typically take a few years just to get off the ground with a viable product, and then it takes like a decade for them to eat a whole industry.
And often before they eat the whole industry, they end up getting acquired by big Corpo and their value gets absorbed and spread out to all the people not doing a super efficient job lol.
Everything that requires extensive capital investments and permitting will be very slow to change, as will the government, and areas where there are natural monopolies.
Disruption isn't really possible for the majority of the economy.
For sure there are sectors less prone to disruption, but I don't know about the majority of the economy. Finance alone is 20% of the US economy and there's huge potential for disruption there. Professional scientific and technical services are another 10%, and many of those (e.g., management consultancy) are also vulnerable to AI transformation. Healthcare (20%) obviously has massive legal barriers in place but AI will increasingly nibble around the edges (e.g. in healthcare administration, AI therapy).
The financial sector is both highly regulated and employs a disproportionately small number of people compared to its share of GDP. While it accounts for 20% of GDP, it only employs some 4% of the workforce, which is about half of manufacturing (which is supposedly dead in America) and less than a third of healthcare.
Even if the finance industry were disrupted, it would affect relatively few people despite its large share of GDP, and would therefore not lead to wider-scale panic. At best/worst we're looking at mild upward pressure on unemployment.
Ah. You've met both my employer and the department that employs them at the State level, I see.
In the last couple of years you can see in the gaming industry that small teams deliver great games and rake in massive profits. Not that it hadn't happened before, but they are encroaching on AAA territory. If you can deliver the same as a massive, bloated company at a fraction of the cost, eventually someone will displace it.
On the other hand there are other parts of the gaming industry where the big breakthrough came from taking something relatively niche and low-budget and dumping huge amounts of money into it. To name two examples, Monster Hunter was fairly sizable as a franchise but was ticking over on PS2-era budgets by developing primarily for handhelds, then decided to go AAA for World and massively succeeded. Genshin Impact arrived in what was previously a low-budget Gacha gaming landscape and singlehandedly reshaped it, with a pricetag of $100m upfront and estimated $200m more a year since.
Returnal did pretty well by dumping a AAA budget into the roguelike genre.
Yeah, Clair Obscur is amazing. I do see this too, but that being said, most market share is still with massive games. There is more room for indie, but these behemoth businesses aren't going to go under overnight.
Didn't Clair Obscur receive funding from multiple sources? Allegedly the list of backers included Microsoft, a French government program, etc. Not exactly "indie".
Also, the team was like 40 people plus contractors. That's AA on the smaller side, not indie.
It's not unusual for foreign games to be funded by that government (one example that comes to mind is The Long Dark).
Ok fair! I don't know the formal definitions of indie. They're small though for sure.
The major theme of Mark Twain's Life on the Mississippi, outside of his obvious love and nostalgia for the mighty river of his boyhood, is the unending march of progress. He speaks of the raft-men (who he also immortalized in Huck Finn), who would lazily guide rafts loaded with upstream cargo down to the Gulf and spend their free time preaching their strength and tussling with one another with an Appalachian verve that reminds me of nothing more than WWE fights. They were beaten out by the steamboats. He talks of his experience as a pilot of one of these steamboats, a highly lucrative profession due to its technical complexity and fine art, carefully dodging the sandbars and "reefs" (usually - sunken trees) and adverse currents and following side-channels to cut out major parts of the voyage, not only at day but even at night, through nothing more than one's memory, with the constant risk of the bomb of a boiler sitting below-decks and threatening to detonate and kill most of those aboard, as happened in fact to Twain's brother - liveliness, risk, and the gaudy beauty of those old, painted boats, which Twain recalls pulling into his hometown's dock as a child, with the tempting offer of a life on the river and wealth beyond his imagination. The trains came down the river postbellum, and the passengers and freight moved over, for the most part, and little tugboats in the hire of the newly muscular federal government came along to pull all the stumps and dredge all the sandbars and haul long trains of barges for far cheaper than the steamboats ever could, leaving the pilot's job simpler and his steamboat derelict and unwanted.
Twain, of course, was a fervent Progressive - of the movement of the time, which meant recognition of what was changing, and what was changing so much for the better. And indeed, he describes these incredible shifts in the world of his time. The mosquito-ridden bogs of New Orleans were drained by a modern system of sewage and water control that left nowhere for them to breed. The agonizing, slow stage-coach journey he took out West was replaced by a train that made the trip in a fraction of the time and in total luxury. All of these things were changing over the 19th century. I remember feeling almost dizzied when reading Twain's Innocents Abroad, when he stops in his pleasure-trip in France in 1868 - the last time I had read about France, it was 1815 and Napoleon's troops were marching north for their doomed encounter with Blucher and Wellington, and their movement was compassed by Napoleon's compass, firmly set to the number of miles a pack-laden man could march in a day. But now the country was covered by a beautiful and perfect train network - why march to the Netherlands when you could go by train? Incredible, incredible. Or I remember how Chekhov (yes, that one), who was a doctor as well as a playwright, had expressed shock at some of the war-deaths in War and Peace, as he himself was (now, later), perfectly capable of curing the gangrene that was irreversibly fatal in 1812. (But I can't find that quote again at present, so treat the source as apocryphal - but not the medical fact.) The end of the 19th century was a different world from the start, completely and totally.
Hopefully you've enjoyed at least some of this meandering, but let me make my point clear. What gave the Progressive movement of the 19th century such muscle was the obvious, incontrovertible, and massive improvements to life of applying its methods generally. Everyone became richer, healthier, and in better control of their environment - especially in America, where the fruits of the movement rapidly percolated down to the common man. There were disruptions, and pretty major ones too. The steamboat industry was one of those sacrifices. But the great wealth of the time defanged the worst of the Luddism that could have arisen in response. Luddism is always on the back foot compared to the powerful evolutionary quality of progress, but it can make some temporary gains if there's enough general sympathy - and there just wasn't, and the reason why there wasn't is clear.
We've recently been sold a story that computing is the next Industrial Revolution. Certainly computing is now everywhere, absolutely everywhere. What was once an analog control mechanism became a custom-programmed digital interface; the custom-programmed digital interfaces have become small installations of Linux. Everything is "smart," which (to be honest) often doesn't live up to its own name, but the processing power is there. The ubiquitous internet has changed how we interact with just about any question of fact and knowledge. AI is, in a sense, just a continuation of that, another horizon of computing. Where before we would have people sitting and doing manual entry, now we have a prompt sent to an LLM to produce similar output. Everything that required a little human fuzziness and finesse to corral uncertain inputs into uncertain outputs now falls under the domain of the digital. So now we don't have to bother getting our fuzzy mindsets to cleanly interface with discrete digital systems, but instead can interface with those fuzzy AIs and get what we want without worrying about the specifics. That, I think, is what's roughly on the table here. Obviously jobs are at risk, just like the old manual computers were replaced by calculators, and how the required number of secretaries went down as computing technology went up, and how email replaces the need for a great many form-shufflers, but there are meaningful changes in how people can interface with the world - as a simple example, no more balancing a checkbook, just log onto your online banking portal and you can see exactly how money entered and exited your account (and a short hop to your credit card's website will give you the rest of the breakdown).
But people are, this time, generally unhappy in a way that goes beyond the disruptions of the past. The main division I've noticed in optimism here - beyond the AI fanatics, who I think are an unrepresentative subset of hobbyists invested in the technology for reasons other than pure practicality - is between ownership and everyone else. There was a post on here some few weeks back, where a small businessman was using AI and was pretty happy with what it was giving him. That's the small end of AI. On the large end, CEOs in big businesses are creaming their pants about AI to the shareholders under the impression that shareholders are very interested in AI, and less cynically, they might even believe that AI is an important improvement to their business model. (I have connections in the industry on both sides of the buying-AI and selling-AI divide, and at the moment neither one has a good idea of what LLMs will be useful for but definitely don't want to be left out - my paraphrase, but not my words on that one. So I'm a little more dubious than the CEOs are, here.) So if you stand to control the use and output of AI, you're all in favor. If not, then you're a lot more skeptical of whether it will benefit society. That's it. There are other questions about efficacy, which we don't need to get into, but assuming it will do something, the answer of whether or not it is good depends on whether you will get control over it.
And this is not a new question for computing. I'm sure the median reader here is aware of the "right to repair" movement arguing that non-licensed mechanics should be able to repair proprietary hardware, like cars and farm equipment. But the reason this movement had to start, the shift from the old mechanic status quo, was the introduction of computing to vehicles. EULA terms for the software on these vehicles, most famously from John Deere, would invalidate the license if anyone other than the manufacturer was involved in the repair. Computing, because of the tight copyright and licensing scheme for the distribution and reuse of software, has become a powerful tool for ownership in America and abroad. If you get a purely mechanical tool, it is possible - maybe not easy, but possible - to modify it to meet your needs, and certainly legal. With software, this is often illegal. Old software, because its source code is both under copyright and not published, disappears into the ether instead of being used as a meaningful basis for new software - the public domain of software is only those things which people have, for their own reasons, decided to publish generally. And more recently, in the age of cloud computing and the internet, the tools we use most commonly aren't under our own ownership and on our own servers, but on some large company's server - a company who can make unilateral decisions about our software, nominally responsive to the market but certainly not responsive to you. (I'm still personally salty over a Firefox UI change from fifteen-odd years ago.)
This is why the response to AI is so muted among your coworkers, in my opinion. It's obvious to the little guy that you don't control what's happening with software. The ownership is simply removed from you. There's no real alternative than to get what's coming to you. If someone retrains an LLM and makes it worse for you, then you'll just have to suck it up, won't you? If they replace your job, you're not getting any of the profits, are you? It's just more leverage for power and less for everyone else. And I don't think this is going to change, not as long as we regulate software under our current rules, with copyright and the EULA. Those rules are not a necessity of the technology, but they sure do create "natural" monopolies, as much as if we'd let Carnegie copyright molten steel and hold onto that copyright for 90 years. Until this changes, there is never going to be good news out of computing, because the only news will be that the bastards who rule your life get to twist your nuts a little tighter. AI is no exception.
Install Linux, btw.
Great post.
I've recently learned about Solow's Paradox, the observation that productivity growth in developed countries doesn't appear to reflect the impact of computers, the internet, or smartphones, despite their obvious and incredible effects on society, and it's been pretty fascinating to think about.
Consensus seems to be split between the view that productivity statistics overestimate inflation and underestimate real growth, and theories that computing really just hasn't significantly moved productivity for various reasons [personally I think the latter would explain a lot...].
Parkinson answered this one in 1955. Work expands to fill available time. If you come up with a way to do useful tasks in less time, the tasks will be made harder or more BS tasks will be added.
This didn't happen during industrialisation and electrification though. Yes, to some extent the nature of work expanded and changed, but productivity grew very obviously and rapidly in those eras in a way that we don't seem to be seeing with computerisation.
I think it did happen during industrialisation for white collar work, though. Previously, you had a clerk writing documents all day. Maybe he could do ten letters a day (figure pulled out of the air, not backed up by data). The Victorian postal system was incredible; in London you would have multiple daily deliveries of post (so it was possible to write a letter in the morning, post it, and have a reply by the evening).
This wasn't happening in a vacuum, things like the expansion of the railways meant faster travel and now it was possible to send and receive goods over longer distances.
Then the typewriter gets invented. Now your productivity in the office has skyrocketed (relatively). Now you can do ten letters in the morning! Naturally, no employer is going to pay workers to sit around for the rest of the day, or go home after half a day's work. Now that your output is more than ten letters per day, your employer wants you to do twenty letters per day, because now the business can grow to support that.
And typewriters were the thing that made startups (to use a comparison) possible. Now women could work. Now you could buy your own typewriter and set up as a secretarial service for small local businesses that maybe didn't or couldn't afford to employ a clerk, but did need documents written (or typed) up. The new job of "typist" was created:
Now costs came down and productivity soared. And gradually the role of "secretary" no longer meant "a job for a man, possibly a university graduate, who will deal with more than just correspondence" and became "a job for a woman who can type and take dictation but is a vocational training job".
By 1891, the time of publication of this Sherlock Holmes story, typewriters as the new office tech were commonplace enough that they could be used in crime:
Until typewriters, and secretaries, and typing pools, became the new normal and that reached the saturation point of "we are at thirty letters a day" which became the new standard of productivity. And then came word processors, and... rinse and repeat.
Probably an issue of diminishing returns. At some point between 1900 and 1955, the amount of useful work available fell below productivity * the amount of time available to do useful work. So instead of doing more useful work, we started refining the work we were doing. More safety procedures, more record-keeping, more documenting, more reports on all of the above -- all of that can be increased without bounds.
But that requires a set value for “useful work.” Previous societies found more useful work to do as more time became available. Time freed from labor on farms became time spent in factories, for instance. So your model demands an explanation of why useful work has suddenly plateaued, as a function of useful work, not of bureaucracy.
Presumably, if one wanted to, one could just firewall the main API servers. The big players are well known and with the possible exception of full-size DeepSeek, local models are not powerful enough to be very useful.
I’m not in favour of it, but I don’t think there’s anything stopping a majority voting for this. The only reason AI hasn’t been stomped on is the arms race and the fact that overwhelmed first-world countries like the UK see it as the key to getting back in the black. Neither of these is an immutable fact of the universe.
AI is the boot, it's going to be doing all the stomping. Microsoft, Amazon, Google, Facebook, Twitter are some of the most powerful companies in the world, they have a gravity well that pulls everyone else behind them. Shut down AI, what does that do to your stock portfolio? Your pension? What does that do for your reelection campaign, is the other guy going to get the algorithm on his side, millions in his warchest? How do you coordinate against AI when all major internet forums are looking to AI as a revenue source?
Or the 20 million people spending hours a day on character.ai, they're not going to let their wAIfus and husbandos go without a fight.
Not to mention that everyone else in business has some kind of interest in AI. The manufacturers want to automate their factories and improve their logistics, services companies want to boost productivity.
And the arms race, as you mention, DARPA, the Pentagon and leading lights in the Chinese Communist Party. That alone is an insoluble problem for decels, what do you say to the paranoia of American strategists? Without a technological advantage, the US doesn't stand a chance against China. They're certainly not going to let China get ahead. And China is not going to stop, it's clearly identified as a key technology to advance in. The public in China love AI, they're very optimistic about it.
Governments couldn't care less about implementing unpopular policies, mass migration for one. Or ending the death penalty. Or invading foreign countries for dubious reasons. If they see it as a core priority, they'll make it happen regardless of what people think. AI is almost certainly far more seductive than any of these things, with far stronger institutions backing it. I'm very bearish on decels having any success whatsoever. Remember PauseAI? Basically nothing happened. It was a squib, hundreds of billions in capital was redeployed to rapidly advance AI in 2023, the exact opposite of what they wanted.
I'm thinking of the
case, where say 70% of people become unemployed or suffer a sharp reduction in status. I don't like mass migration either, or the repeal of the death penalty, but the opposition to those is ~50% of the population max and most of those are pretty wishy washy about it. Governments hate disruption more than anyone, if too much happens too fast I can entirely see the government just bringing the hammer down, like China did with Ma. There's nothing technologically inevitable about cloud-based AI remaining available. And once it looks like one side of the China/America divide might start dialling this stuff down, I can well imagine their opposite number gratefully following suit.
In short, government with unanimous popular backing is still the biggest beast out there. IF it comes to the kind of unemployment figures above, I think AI companies will bend the knee or be broken. Obviously, if things remain as they are, the future is much more murky.
If unemployment rises to 70%, then AI can also be used for combat power and war economy work.
Imagine a swarm of AI-equipped drones, faster and more coordinated than anything in Ukraine today. Imagine the ground-based robots they're trying out but with a machinegun on top: https://x.com/XHNews/status/1921201829066797357
Automated trucks for logistics, all coming from automated factories. That's all eminently possible with 70% unemployment, plus more exotic stuff like satellite swarms spying on everybody in real time, decapitation strikes with novel nerve agents we can't even detect.
How is a human military going to fight that, especially when AI is going to be deeply embedded in their communications? Perhaps a government or sections from a government will merge with a leading AI company or nationalize them earlier in the game but I can't see how they'd successfully shut them down without rendering themselves globally irrelevant. If they wait until 70% of people are unemployed, they might just get crushed.
What do you do if 70% of people are made obsolete? Shut down AI and send them to do useless work? Put tariffs on AI-made products overseas? Seems like delaying the inevitable.
Neither superpower wants to slow down, Trump's America explicitly wants to win the AI race with Stargate while China has allocated considerable effort to developing AI. It's bipartisan in America, Biden was also keen to restrict GPUs leaving the US. I don't think there's any anti-AI faction in China at all, I'm not aware of a single evil AI in the entire Chinese cultural corpus. We haven't even stopped the 'randomly develop gain of function megadeath viruses for no good reason' arms race after a megadeath virus leaked, so what are the chances of stopping the 'immense power and wealth' race after it gives out immense power and wealth?
There's AI and there's AI. People detection is a simple matter which you can do on-chip. Anything like
in the next 5-10 years, like automated coding or automated logistics, is going to be heavily relying on a handful of APIs (approx 4 now) provided by a handful of companies. China could shut down LLMs in China tomorrow if it wanted to - firewall OAI and Anthropic, close down Deepseek. Boom, done. America would have a slightly harder time but it's basically straightforward.
Neither wants to, yet. But if the societal disruption starts to become uncomfortable, they can and they may well. I'm not talking about evil AI, I'm talking about obvious and destabilising social disruption. More than immense wealth and power, governments like stability. China and America are quite capable of running private military AI research on things like YOLO whilst mutually deciding that giving public/corporate society access to AGI is too disruptive to tolerate.
70%, sure, maybe. But what happens if it's 'just' 2008 levels of sudden disruption? And then a small stagnant window before another dive. I am more worried about falling into a series of local minima, where the immediate 'solutions' get us into a worse scenario.
In some respects a 70%+ employment disruption, or a Skynet scenario, could be better, by creating a clear, wide consensus on the problem and the necessary reaction. I am more worried about a series of Wile E. Coyote moments: getting over a cliff before he realizes it, falling, then repeating as he tries to get ahead of the next immediate shift.
I'm personally of the opinion that normie dooming about AI and the job market is just a way to express the growing malaise that's enveloping the west and has little to do with actual macroeconomic effects.
Yes, if we do manage to create truly transformative AI that obviates labour everything is on the table - I'm not too interested in litigating AGI timelines but if that does happen it's not just going to be a weak white collar labour market, we're walking out the other end gods or 6 feet under.
On the other hand, if we top out somewhere around the "useful tools" level as we are now, it's not clear to me that anything is likely to change macro-economically. There have been no meaningful changes to any macroeconomic statistics attributable to LLMs as of yet [unless you count the wild valuations of AI companies].
In my domain, software, there are quite bitter culture wars about how useful LLMs are on a micro level [personally I find them very useful but certainly not a replacement for anyone yet], but there really haven't been noticeable improvements in software productivity on a macro level. All the commercial software I use day-to-day is still varying degrees of shit, and there's been no noticeable change in velocity in open source [the use of Copilot in the .net repo is quite amusing, https://github.com/dotnet/runtime/pull/115762].
Even the internet, clearly the most transformative and life-altering invention of the last half-century has had questionable impacts on GDP and productivity growth.
It's true that individual tasks may become obsoleted like copy-writing and perhaps translation [although every translator friend of mine seems to be drowning in work right now] but jobs have always been augmented by technology to replace tasks and that's nothing new. The vast majority of white collar work is in my opinion either transformative-AI-complete or there because we want a human [for liability/regulation/comfort] even if a machine could already do it. If we reach the point where AI is not outcompeted by AI-human centaurs in doing meaningful white collar work then we should be much more concerned about not being paperclipped than whether Becky can still get a job in HR or marketing.
To return to the original point, my opinion is that AI kvetching is largely driven by people wanting a safe way to express the sentiment that life in the West is just generally going downhill. I was struck by a quote I heard from a friend the other day, that "nobody dreams of the 22nd century" like men in the 20th century used to dream of what the 21st might look like.
A blue triber might tell you it's because the billionaires are taking all the money and that climate change will end the world, a red triber might say it's because mass immigration and the death of Christian values is ending western society, and a grey triber might say it doesn't matter because AI will save/fix/kill us all. Nobody can agree on the causes, but pretty much nobody in the first world thinks life will look better in a century without some sort of eschatological transformation.
Anecdotally my extended family in the old country seem to be much more optimistic about the future and everyone seems pretty optimistic about AI despite being mostly pretty poor by first world standards. Each generation saw pretty drastic jumps in quality of life and things are looking upwards for the next generation too. You can see here that it's almost all poorer countries on an upwards trajectory excited for AI and richer countries going downhill that don't like it which is interesting to consider.
https://hai.stanford.edu/ai-index/2025-ai-index-report/public-opinion
I mean if you take tasks off the lap of your workers you don’t need so many of them. If you can take half of my job away, you can just give me double the workload of tasks that only a human can do and therefore you need half the staff. And while you didn’t get rid of everyone, you’re saving a lot of money, while also putting significant downward pressure on the wages of those who remain.
Do the above over most of the kinds of jobs normies have, and it is an apocalyptic loss of jobs. If 70% of normie jobs reduce headcount by 50%, that’s a lot of people. And since nobody needs to hire them, they’re either trying to retrain for new jobs or they’re simply dropping out of the labor market.
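The scale implied by those percentages can be sketched with a quick back-of-envelope calculation (all inputs below are illustrative assumptions of mine, not sourced statistics):

```python
# Back-of-envelope: how many workers a broad headcount cut would affect.
# All figures are illustrative assumptions, not sourced data.
labor_force = 165_000_000      # rough order of magnitude for the US labor force
share_of_jobs_exposed = 0.70   # "70% of normie jobs"
headcount_cut = 0.50           # "reduce headcount by 50%"

displaced = labor_force * share_of_jobs_exposed * headcount_cut
print(f"~{displaced / 1_000_000:.0f} million workers displaced")  # ~58 million
```

Even with much more conservative inputs, the result lands in the tens of millions, which is the "apocalyptic" scale the comment is gesturing at.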
I cannot speak for others but at my own software company it would be a mistake to assume all the work we currently get done is all the work there is to do. We have a long and constantly growing backlog of things we would like to do to improve our product but must constantly prioritize due to much less capacity. If all our productivity doubled with AI the result would likely not be "the same work done with fewer people" but "much more work done with the same people."
The C++ and Javascript creators foresaw that this day would come and made sure you can never get rid of technical debt.
I keep hearing stories of COBOL programmers getting paid insane amounts of money...
But now you have half the staff, and your competitors have half the staff, so presumably the market price of the stuff you are selling will face downward pressure. It could be that your existing clients will want to buy more of your stuff now that it's cheaper, and you gain new clients who previously couldn't afford your product; perhaps you find out you need to hire more people...? But if half the workforce got fired at step 0, that's far fewer people able to buy any products, despite their cheaper price...?
It will be nightmarishly complicated to adapt to while it's happening, let alone predict.
People make comparisons to horses and combustion engine. True, many horses got "unemployed", yet traffic increased a lot. There are probably more people involved in logistics and industries enabled by it than ever were in "horse service industry". And horses were never the presence on the demand side of the equation, they never bought anything.
It also reduced the need for horses, which are now mostly glorified pets.
This is insightful and food for thought! The explanation makes sense, and even better, the trend shows up in the data of a study.
I wonder why that is. Did the capitalist/market economy of ideas converge on doomerism as more attention-grabbing? Or is there some underlying malaise leading to insecurity?
As a startup founder LLMs are enormously useful but not primarily for coding.
LLMs are great for questions such as: which AWS service to use, how to make a pitch deck, which type of ads I should buy, how to configure a firewall for my server, how to find a good accountant, how to file taxes, GitLab or GitHub, and so on.
I don't have a large team of experts, I don't have a lot of time and I have a multitude of different roles. I am CFO, CTO, CEO, dev ops engineer, lawyer, and HR. AI allows me to be somewhat competent at all these tasks and allows me to solve them quickly. The new paid version of ChatGPT is great at quickly producing mock ups and even suggestions for features and user stories.
LLMs are going to allow far more people to start a company and launch things. Need an NDA? An LLM will make a professional-looking PDF in a minute or two.
This is a form of Gell-Mann amnesia effect. When there's instant feedback and excellent legibility of when answers are correct vs incorrect, like programming, we instantly see the flaws. But on softer squishier questions, you accept the answers. But it's all similarly bad AI slop.
I think there are a lot of pretty catastrophic scenarios even with modest AI advancement. They don’t tend to get much examination because most people are focused on the really extreme best- and worst-case scenarios.
What kind of catastrophic scenarios are you thinking about?
I can see a lot of ways in which current/modest AI advancements are going to make the world worse (damage to the education system, the end of audiovisual content as reliable evidence, propaganda/astroturfing/surveillance) but really these are all just extensions of existing slow decline, not catastrophic scenarios per se.
There is only one cohort of white-collar workers in the world today that knows what it is to become obsolete overnight: the Eastern Bloc boomers and the oldest Gen X-ers, the ones who were in their 30s and older when the Berlin Wall fell.
And they didn't take it easy. So if we shed white-collar jobs fast, I expect the shock to be painful. And few of today's white-collar workers have the rural connections to be able to move into the trades or something similar.
Something is just off in the first world, and COVID and AI accelerated it. I don’t know what exactly died. But the US, in particular, seems to be dealing with the kind of crisis Western Europe did in the previous century: a loss of faith in all institutions and the massacre of all meta-narratives. Neither my progressive nor even my conservative friends care much about the Constitution or the framers or the civic religion any more. I don’t know anyone who’s optimistic about the future. I certainly know some people who have optimism about their own future, or who are making the best of their lives as they exist, but about the social fabric people feel… trapped, like we’re already six feet under and there’s no escaping it.
People want to put this at the feet of wokeness, or Trump, or communism, or atheism, but I don’t know what it is. Even those narratives seem snuffed out.
It happened very fast too. There have always been minorities who are loudly disgruntled with good reasons (particularly older people in declining regions), but in the noughties and even the early teens the dominant outlook was Thatcher/Blair/Reagan/Clinton style optimism. By 2019 (even pre-pandemic) if you weren't some kind of doomer you stuck out like a sore thumb as either an out-of-touch establishment tool or a Silicon Valley investor talking their book.
In the UK, you can date the change to somewhere between the 2012 Olympics and the 2016 Brexit referendum. The US isn't very different.
Given the timing and speed of the shift, I am inclined to blame algorithmically-curated social media.
The media started pushing it in the early oughts, though. Star Trek had a drastic tone shift from optimistic to dark and serious, and this is probably what killed the franchise (that's not really what Star Trek is all about), just for one example.
Also Battlestar Galactica. I think a number of events in the 00s combined to make it clear that we hadn't got answers to all of our problems - the 2008 financial crisis, 9/11 and the inability to turn a theocratic Islamic state into a liberal Western one. Environmentalism. These problems were huge but obviously totally unsolvable by ordinary people.
Also, people were bored. Nobody wanted to hear that we had solved everything and we just had to a) wait for laissez faire economic growth to solve all our problems and b) accept that anything which wasn't solving itself just had to be that way. They/we wanted change and adventure. I always think that was a big part of the response to Covid - people were longing for a Big Problem in which we could all Do Our Part.
Your system was the same dysfunctional managerial one as today, except it was not under strain and the race communists had not yet obtained enough power.
All the shit you're dealing with now was already in the same rusty pipeline, coming.
Yeah, I think this is a big part of the Fourth Turning stuff.
I see it as the final death of the naive optimism that was abroad in the 1980s and 1990s. That was unsustainable because it frankly wasn’t true and couldn’t ever possibly be true. We were kind of faking it by kicking various cans down the road repeatedly. Once we ran out of road, pretending that we were simply going to win, Civilization VI style, was completely implausible, but this is what people literally believed. We ran out of road because of Islamic theocracy, because we developed a serious addiction to buying now and paying later, and because of various forms of laziness and gluttony and so on. That was sustainable for two reasons: we were the default currency and the world’s largest market, and we had hands down the best military, one that could not be seriously challenged. Those conditions could not last, because such conditions never do. No nation or empire stays on top forever. But we had structured our economy, our lifestyle, and our government spending as if we were going to be The Rome That Never Falls.
Once 9/11 happened, we slowly came to realize just how much we had let slip away. Arabs with box cutters could strike at will, and not only could we not stop them, we couldn’t even find those responsible. We can’t remain at the top of education when China and India are eating our lunch in STEM. Why buy from Americans when China can make it better and cheaper?
Yeah, the revived Battlestar was a big deal for portraying suicide bombers in a sympathetic light on big-budget prime-time TV. I picked Star Trek as an example because its whole brand is a cheery, optimistic future where we can all settle our differences.
I’ll also point to the Catholic Church sex abuse crisis, Hurricane Katrina, the Russo-Georgian War, and No Child Left Behind just blatantly and obviously failing.
I once heard someone call it “Late Soviet America” and that’s really what it feels like.
Amusingly, the Soviet gerontocracy everyone made fun of? Every single one of them was younger than Trump, Biden, McConnell, Pelosi, etc.
Better healthcare and less degenerate lifestyles.
In lieu of term limits, elected officials should be required to drink a full liter of vodka over the course of the work day, and another one before bed.
6 feet under what? Very amusing to imagine a superintelligence apathetic enough to exterminate humanity but sentimental enough to bury us afterwards.
Other corpses and/or rubble, presumably.
Paperclips, naturally.
So many contract renewal conversations with SaaS providers lately go like: "They want how much per year for this service? That's three months' salary for a typical engineer, and our engineers say one of us can build this in-house in six weeks with LLM tools."
People are saying a lot of things and doing little. I've heard similar claims, but haven't seen any meaningful increase in SWE productivity and I've talked with friends and managers at other companies and they say largely the same thing.
The one thing I've seen is a slight cutback in the use of consultants, particularly third-world ones, but that might just as well be a result of cutbacks due to economic uncertainty.
There definitely is some increase, but not at the right level yet. You can't yet fire all your junior devs and let the LLM close JIRA tickets on its own. What you can do is retrain your senior devs much faster. An LLM is the best form of introductory documentation right now. The bottom falls out when you start asking it hardcore questions, but this can be solved by more extensive (and expensive) training.
And the neat thing is, by the time those senior devs age out of the harness and you'd struggle with replacing them because junior devs have gone the way of the dodo and there's too few younger, experienced devs, AI agents will be good enough.
In my experience the questions do not really need to be that hardcore, but perhaps we have different definitions of what hardcore is.
I do agree about LLMs being a very good way to get introductory information. How valuable this is for the median developer I don't know. A lot of people seem to be working with the same languages and APIs for a long time.
But this uncertainty is what I’m interested in. How much is effect, and how much could snowball into cause? Buyers get skittish, forecasts go down, and so forth. I’m not saying it’s the leading cause of uncertainty, or anywhere near it. But I am noticing it becoming a contributing factor.
Not uncertainty due to AI, uncertainty due to tariffs.
No, I know. Of course that’s the biggest part of it. My overall point is that I’m seeing general uncertainty expressed as AI uncertainty. Whether or not that’s just a rebundling of tariff (etc.) uncertainty, my fear is that it is contributing to increased general uncertainty, which will add to the economic fallout that uncertainty produces.