Culture War Roundup for the week of March 2, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Software giant Oracle Corporation is laying off thousands of workers and killing their Texas data center plans, per Reuters and Bloomberg. It appears that their capital expenditures have gotten ahead of their ability to pay for them, and now they face the regrettable need to say it out loud shortly before markets close on a Friday afternoon.

In December, the company said it expects capital expenditures for fiscal 2026 to be $15 billion higher than the $35 billion figure the company estimated during its first-quarter earnings call.

The layoffs will impact divisions across Oracle and may be implemented as soon as this month, the Bloomberg report said, citing people familiar with the matter. Some cuts will be aimed at job categories that the company expects will shrink due to AI.

This may be indirectly tied to the Iran conflict, as Middle East sovereign wealth funds have begun pulling back from investment.

I'm interested to see the fallout of this one. My understanding is that the Ellison clan is fairly tight with the Trump admin.

Beyond that, I have concerns that this may be the match that lit the fuse on AI spending. I have spent the last six months trying to figure out why these valuations made any sense whatsoever. The expense profile of companies like Anthropic and OpenAI looked a lot more like Caterpillar to me than Salesforce. When it came to Oracle, I couldn't make sense of it at all.

The only three explanations I could come up with were that I was:

  1. Missing critical information
  2. Retarded
  3. Right

I still don't know which one it is.

Some of you here are clearly smarter and more educated than me. What do you think I'm missing here? My gut prediction is that this spirals into an even bigger flight from capital in the next six months, which causes holy hell on the retail market because the average investor is more leveraged now than they have been at any point in my lifetime. I'm also assuming it'll kill quite a lot of "LLM Wrapper" companies, like the one run by fear porn expert Matt Shumer.

I assume Google will be OK.

Beyond that, I don't have any idea.

Any predictions?

How many important actors in the AI space need to be religious fanatics for it to start to alter the spending patterns?

There's some subset of people you run into who genuinely believe in the Singularity, that the moment AGI is cracked nothing else matters. The whole concept of worrying about debt load after your company cracks AGI is silly; if another company cracks AGI first, then having good profit margins won't save you. If it ushers in the end times, or Gay Luxury Space Communism, worrying about whether you lied to shareholders? Stupid.

The religious fanatics will say whatever they need to in order to push ahead.

the moment AGI is cracked nothing else matters

Are they wrong? You could argue they're wrong about the timeline, which is largely trying to predict unknown unknowns, but it does seem to me that once/if AGI is cracked it is pretty much true that nothing else matters.

Now, I'm kind of a doomer about AGI in the LW/rationalist sense, but even if you're not, every extant political system rests on the assumption that humans are a necessary input for production, and every economy relies on the spending of consumers to stay afloat.

What does debt load or shareholders matter in a world where humans no longer have any say via their ability to produce or their demand for consumption?

They could be right, but that doesn't change the facts: if you're examining this from an investor's perspective, the insiders don't really think in the same frame of mind we do.

The investor is like a sports gambler trying to bet on a fight, and the companies are like a fighter who thinks he's figured out a brand new unbeatable Steven Seagal kick that will win him the fight. The gambler is betting based on conditioning, style vs style, form in fights going into the match, etc. The fighter thinks that conditioning and form are pointless to worry about, because the only question is whether the kick will work or not.

How many important actors in the AI space need to be religious fanatics for it to start to alter the spending patterns?

Believing that AGI is possible doesn't really require any kind of religion. Certainly it's speculative, but so is a belief that modular nuclear can bring costs down enough to make starting a modular nuclear startup reasonable; really, by this standard any startup is a religion. Really I think you're just sneering because you don't have any actual arguments. The argument in favor of AGI is reasonable and fits well into our general shared understanding of material reality. Maybe we'll find out that LLM architecture doesn't scale up to human-level general intelligence and we can update our model, but that update requires information not in evidence.

How exactly am I sneering? Calling something a religion does not mean that it is (necessarily) wrong or stupid, only that attempting to reason within a normal paradigm with believers in it is probably a waste of time, because their eschatology makes your reasoning irrelevant.

Trying to tell a Jehovah's Witness in 1970 about Stock:Bond ratios in retirement savings would be silly, because in the Jehovah's Witness' mind there was no retirement to worry about: the world was going to end in 1975 and that's all there was to it. If he had been correct, you would have been the idiot for talking about Stock:Bond ratios; as it happened, the world didn't end, so all the people who had no savings in 1976 were worse off.

The AGI fanatics don't care about P/E or debt ratios, because they won't matter once the Rapture Singularity occurs. It's the same reasoning. It's an essentially religious framework, immanentizing the eschaton. If you believe in it, sure, great, argue for your religion. It doesn't change the way to look at it from an investing perspective.

Religions rest on dogmas which cannot be effectively interrogated. If you want to say that AI investment is predicated on a belief that AGI is possible, and can't be understood without considering that part of the potential upside, then I think you have a good argument. Calling it a religion is a maneuver to avoid having to actually interrogate that possibility. And really, I think most of the AGI people will be happy to go about this probabilistically: you don't need AGI to have a 100% chance of coming about for these investments to become rational; as little as a 5% chance can make them start to make sense. The P/E and debt ratios are still relevant; they represent a kind of lower bound: if the big bet doesn't pay off, merely owning a lot of a critically important commodity is a much better place to be sitting than having literally nothing. The Jehovah's Witness is much more certain and is making a much more all-or-nothing bet.

And really, I think most of the AGI people will be happy to go about this probabilistically: you don't need AGI to have a 100% chance of coming about for these investments to become rational; as little as a 5% chance can make them start to make sense.

If you're willing to concede that you're talking about most AGI people, then I think we're in complete agreement! I'm not saying that all belief in AGI can be described as religious in nature, or that all people who have any level of belief in it can be described as religious. I'm saying that there exists some percentage of people working in AI who have an absurd, religious level of belief both in the odds of AGI occurring and the things that AGI will be able to do, and the combination is such that these people have effectively zero concern for all the sorts of things that ordinary investors care about. There's some level of irrational belief in the Singularity that is best analogized to religious belief in the apocalypse, and I think there's a percentage of workers in AI who have the belief.

I understand the cringe at @FiveHourMarathon likening it to religion, but there is something apocalyptic about the idea. Not in the sense that it's world-ending, but in the sense that there's something vaguely amazing that's supposed to happen that will change humanity, etc. How are we supposed to know when we've hit AGI? Sam Altman or whoever saying so isn't going to move the needle much, as it will just be perceived as a cynical marketing ploy. If it hits some benchmark, that's great, but I'm sure by some benchmark we had AGI in 2023. Besides, these benchmarks are all industry inventions anyway.

Of course, no one in the industry would ever say that we've reached AGI, because that would instantly shut off the money spigot and expose them all as frauds, even if they are true believers. As soon as they describe a product as AGI, the expectation level would skyrocket, as this is their supposed end goal; but when the sun goes up, sun goes down, moon goes up, moon goes down, and a month later they're still stuck with a 3% conversion rate, a trillion dollars in debt, and a product that the tech gurus all agree is slightly better than the last iteration, it's over. At that point, no one has any reason to give AI companies any more money.

So if it does happen, it has to happen in a big noticeable way that nobody can ignore. It also has to be an unalloyed good approaching luxury gay space communism, because if it's anything else, Altman et al. are fucked as well. I honestly don't understand the glee with which AI promoters predict that 50% of all "knowledge jobs" will disappear within a year. Hell, the Chief Legal Officer of Anthropic went to Stanford Law School earlier this year and basically told the students that they should all drop out. Do they not understand basic economics? Do they not understand that 50% of the highest-paid workers getting laid off in a year's time would create an economic disaster the likes of which we've never seen? Do they not understand that this will have a ripple effect into non-knowledge work, as cratering demand combined with an employment glut would reduce jobs and depress the salaries of the jobs that remained? Do they not realize that many of the enterprise clients they depend on to pay full freight for this product will be out of business? Do they not realize that everyone whom they owe money to will also be in a tight spot and will expect to be paid the full amount of the money owed? Do they not realize that the AI companies themselves are likely to go bankrupt in such a scenario? It has to be a messianic vision, because it can't be anything else.

How are we supposed to know when we've hit AGI?

My personal AGI benchmark is an unemployment rate of 20-25% within most or all developed countries. I do agree that most (all?) of the benchmark worship is largely pointless but you can't really hype your way into that kind of unprecedented structural unemployment.

It has to be a messianic vision, because it can't be anything else

Yeah, I mean, the AI companies would tell you this themselves. If we really get a level of AI that enables 50% knowledge-worker structural unemployment, that's unprecedented levels of disruption to the political system and to the economy, even before accounting for x-risk thinking. Honestly, the only glee I've seen is from blue-collar workers who don't seem to realize that we're all fucked together if anything approaching this level of disruption to white-collar work does materialize.

Just take the legal industry; Anthropic released a report earlier this year that claimed 88% of all legal tasks could be automated by AI, though only a small percentage of those tasks were actually being automated by Anthropic's customers.

Assuming you're referring to this report, it's not saying what all the headlines are claiming it says. The exposed tasks in blue here refer to tasks that could theoretically be doubled in speed using either an LLM or LLM tooling, even tasks that LLMs categorically aren't doing right now, like authorizing drug referrals.

It's not claiming that LLMs can assist with all of these tasks at current capacities, and it's not claiming that all of these tasks can be fully automated even with significantly more powerful LLMs. This is already a bit of a spurious claim, and it's been taken hugely out of context; the report even admits that employment trends in jobs exposed to LLMs are currently indistinguishable from jobs that aren't.

This isn't really an argument against short-timeline AGI believers, though; even extremely maximalist predictions like AI2027 predict that pretty much nothing happens in the broader employment market until the models reach a tipping point, and then suddenly large swathes of the population become unemployed and things start getting very weird very quickly. Even if you believed lawyers, researchers and SWEs would all be irrelevant by 2028, you'd still need to hire them to get the models over the finish line.

The disconnect seems to be that the bears point at the lackluster current capabilities relative to AGI expectations (which is largely true) and the bulls point out that the trend lines are still holding (which is also largely true). The trend lines have to bend eventually, but that could be well after the employment market is annihilated and we are all living in luxury gay space communism or have been paperclipped. Frankly, nobody has a good answer on where this all leads.

I honestly don't understand the glee with which AI promoters predict that 50% of all "knowledge jobs" will disappear within a year. Hell, the Chief Legal Officer of Anthropic went to Stanford Law School earlier this year and basically told the students that they should all drop out.

People keep accusing them of having glee at this, but I don't really see any of this glee. They seem sober and worried about this happening and are practically begging policymakers to come up with some frameworks for how to cope with that future. The same people who accuse these claims of being gleeful then go on to say that saying policy needs to be set up to cope with mass unemployment is hype rather than genuine concern. I don't particularly like defending Altman of all people, but you really do seem to have them in an impossible position. What can someone truthfully say if they believe AGI is possibly imminent?

The problem I have is that they don't act like they believe AGI is imminent. They say they do because they have to; if they didn't then people would stop giving them money. Just take the legal industry; Anthropic released a report earlier this year that claimed 88% of all legal tasks could be automated by AI, though only a small percentage of those tasks were actually being automated by Anthropic's customers. Meanwhile, they're telling students at a top law school that they should learn to splice cable or something because first year associate jobs will be automated away. Aside from the confidentiality concerns of Anthropic monitoring law firm AI use, and the fact that first year associates have been useless for as long as they've existed, Anthropic's own hiring practices do not suggest that 88% of legal work can be automated away by AI.

I can't find reliable totals for how many lawyers Anthropic employs, but they hired 24 last summer, and I'm sure they had some on the payroll prior to that. A gander at their website also shows several open positions, though these all have different titles and multiple offices listed, so it might be more of a constantly hiring situation. I can't find reliable estimates on their total employee count, but I've seen everything from 2500 to 4500 employees. If they currently have 30 lawyers working for them and 3,000 total employees, that's one lawyer for every 100 employees. That's, to put it mildly, an insane ratio. For comparison, Wal-Mart has 155 in-house attorneys and 2.1 million total employees. FedEx has 60 in-house attorneys for 370,000 US employees. Tech companies have higher ratios, but not that high; Apple and Google are in the 1/200–300 range. These numbers are estimates, of course, and I'm not trying to make the argument that Anthropic doesn't need all these lawyers or that they're hiring more than necessary. My point is that AI doesn't seem to have reduced their reliance on in-house attorneys in comparison to other companies, and this is at a company that should be, and supposedly is, having their attorneys make extensive use of their AI tools.
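
To make the comparison concrete, here's the arithmetic as a quick sketch (the Anthropic headcounts are my guesses from above, not verified figures):

```python
# Back-of-the-envelope lawyer-to-employee ratios from the figures above.
# The Anthropic numbers are guesses, not verified totals.
companies = {
    "Anthropic (guess)": (30, 3_000),
    "Wal-Mart": (155, 2_100_000),
    "FedEx (US)": (60, 370_000),
}

for name, (lawyers, employees) in companies.items():
    per_lawyer = employees / lawyers  # employees per in-house attorney
    print(f"{name}: 1 lawyer per {per_lawyer:,.0f} employees")

# Anthropic (guess): 1 lawyer per 100 employees
# Wal-Mart: 1 lawyer per 13,548 employees
# FedEx (US): 1 lawyer per 6,167 employees
```

Even granting wide error bars on the guesses, the gap is one to two orders of magnitude.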

The other thing is that when you look at these job openings, they all have extensive experience requirements. The lowest I saw was 3 years experience, and a few required 10 to 12 years. This is common for in-house positions. There were also a bunch of oddly specific experience requirements, which are often more in the "nice to have" category than anything else. The one requirement that was common to all positions and obviously non-negotiable is that the candidate have an active license in at least one state. Now, I am licensed in three states, and meet absolutely none of the other requirements, though I have been working for 10 to 12 years in wholly unrelated fields. Something tells me that if I were to apply for one of these jobs and somehow got an interview, telling the hiring team that I had mad AI skillz that would allow me to complete 88% of my work and get up to speed on the remaining 12% quickly would not impress them. Then again, being a true believer was one of the requirements, so who knows.

Can you lay out exactly what you'd expect them to be doing if they thought AGI was imminent? I don't really think they'd bother worrying about pinching the salaries of 30 employees if they thought AGI was imminent. I also don't think in-house lawyer headcount really scales the way you're implying. 30 lawyers gives you what, 3 teams of lawyers? You're doing a lot of lobbying because you're a major player in new tech, so one of those teams is your lobbying arm, one is working on corporate mergers and acquisitions (I'm sure they're trying to buy some kind of image model team), and one is probably cooking up stuff on what they're liable for / keep-the-lights-on legal work. It's just not the kind of thing you scale linearly with employees.

I'm not saying they have too many lawyers. I'm saying that if their products were as good as they claim they are, they'd be able to make do with fewer lawyers. They claim 88% of legal tasks can be automated, and legal employees are among the most expensive. What kind of advertising is that? You can use our software to automate your legal work and save! Except we have more lawyers on the payroll than the industry average, and when litigating we hire white shoe firms whose lawyers are of the type who have their secretaries print things out for them. If the technology isn't saving Anthropic any money then why should we believe it will save anyone else money?

You can cite all the reasons why you think Anthropic needs a bigger legal department, and maybe they do, but keep in mind that there are other companies that have other unique issues that Anthropic doesn't have to deal with. For instance, they don't get sued all that often. I represent a subsidiary of a global machinery company based in Japan that got sued a dozen times last month. For one thing. In one jurisdiction. They're getting sued somewhere, for something, multiple times per day. The US arm of the parent company, whom you've certainly heard of, has five people in its in-house legal department. To be fair to Anthropic, once a company starts getting sued constantly they usually hire national coordinating counsel to manage their litigation for them, but they still have to prepare assignments to local counsel and accept service, and do all the other boring things that come with the territory, as well as monitor the litigation and grant settlement authority.

Anyway, of the six openings they're advertising, two deal with vendor contracts, one with datacenter construction, one with customer contracts, one with international compliance and one with "frontier" issues, i.e. problems that don't exist yet and don't have clear answers. M&A and lobbying are the kinds of things that get contracted out and that the in-house team doesn't do much hands-on work with. It's more like the counsel would occasionally meet with/provide reports to a senior member of the legal team, maybe a junior member occasionally supervising it, but not something anyone is doing full time.

If they currently have 30 lawyers working for them and 3,000 total employees, that's one lawyer for every 100 employees. That's, to put it mildly, an insane ratio.

I think a lot of this comes down to the fact that nobody really has any idea where risk is going to be priced into these business models.

All the AI companies are trying to push it onto the end user to the maximum extent possible - they'd like to keep humans around to function as accountability sinks and not much else. That works great if you accept that the "agent" isn't intelligent and has no agency.

The thing is, if you're claiming that your models are so self-aware that they deserve their own retirement plan, sooner or later somebody's going to believe it and claim that either the AI or the corporation has some form of liability for it. That's some incredibly novel legal ground, and I wouldn't doubt that a fairly large number of those lawyers are wargaming defenses to that right now.

I understand what you're saying, but I've actually looked at the job openings, and they're nothing like that. Of six openings, exactly one, Frontier Counsel, is involved with unusual, cutting-edge issues. The rest are just boring stuff like contracts and datacenter construction. And this position appears to be new; the Deputy Counsel has an announcement of the opening on her LinkedIn from 3 weeks ago, and it may or may not be filled yet, so it's unclear if there is even anyone dedicated to this full-time at present.

Interesting. How complex are these contracts that they need that many lawyers to handle them?

Stop being vague and start thinking about specifics. If there’s going to be UBI, how is it going to be paid for, how is it going to be distributed, how do the economics of the whole thing work?

AGI euphoria promoters have been much more vague about the post revolution economy than even Marx was in the mid-19th century. “Yeah man everyone will get their $2k a month in welfare bux, you will live in a nice pod and crochet all day or something, this will all happen with minimal social upheaval and the economics will work themselves out”.

The tech people aren't actually the government and can't decide how these questions are answered. You're asking the wrong people for solutions. All they can do is warn what is coming and make suggestions. Which you guys consistently characterize as glee and hype mongering. What guarantees can Dario make about the structure of redistribution that Trump or his successor will implement? Do you not see that this is an impossible ask?

He can think about the consequences of his technological innovation on society. This is something we ask of many creators; it is fair to ask Mark Zuckerberg if he thinks social media is harmful or what should be done about its negative impact on children or whoever (and indeed this is something Meta at least pretends to care about).

Yes, he can think about them, and in fact he can be seen in many interviews and on several podcasts going on about them. But he's not the government; it isn't his role to propose specific policy.

According to Microsoft and OpenAI, AGI happens when OpenAI earns $100 billion in profits.

Ha!

Believing that AGI is possible doesn't really require any kind of religion.

Believing that after AGI is cracked nothing else matters is the religion.

Much like the fanatical Jehovah's Witnesses my father grew up with, who didn't save money because they thought the world was going to end in 1975.

In the same way that you can't really examine the financial strategies of someone who thinks the world will end in 1975, you can't examine a company making decisions about investment and debt load based on immanentizing the eschaton.

Believing that if we can automate all labor for pennies on the dollar then there will be a major economic step change doesn't require faith. It all follows from some pretty simple reasoning. Calling it religion proves too much.

I think where the faith part comes in is "and that won't change the economy so radically that money won't work the same way".

If I'm producing houses (for example) that only cost $500 to build, I'm hoping to sell or rent them at market prices, which means I make a profit of X000%. But that relies on "then I sell that house to Joe and Mary who work jobs that mean they can get the mortgage for $200,000 to buy that house". If Joe and Mary aren't working any more because the same AI has taken their jobs, then there may still be a market for housing, but the price has to come down to $500 or $600, that is, be within the range of income they now have.

I think people are still stuck on the idea of "costs down, but sales prices stay the same" because they haven't really incorporated into their worldview that consumer demand may remain the same, but consumers' ability to pay those prices will drastically decline, because far fewer people will be in jobs generating high enough incomes once AGI has replaced those jobs.

So if UBI is the way forward, then the owners of the AGI industries are going to pay for that via taxation, which means they are going to (1) have to sell their $500 house for as close to $500 as they can get, not the current house prices and (2) they're just transferring the money from one hand to the other, since the money to buy the house comes from the UBI they are being taxed to pay.
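
To put toy numbers on that circular flow (every figure here is invented, purely to illustrate the one-hand-to-the-other point):

```python
# Toy circular flow: one AGI firm is the only producer and the only taxpayer,
# and UBI is households' only income. All numbers are illustrative.
cost_per_house = 500          # AGI-era production cost
old_price = 200_000           # pre-AGI market price
ubi_per_household = 24_000    # annual UBI, funded entirely by taxing the firm
households = 1_000

ubi_bill = households * ubi_per_household           # tax the firm pays out
clearing_price = min(old_price, ubi_per_household)  # buyers can't spend more than they have
revenue = households * clearing_price               # the same money comes back as sales

net = revenue - ubi_bill - households * cost_per_house
print(f"tax out: {ubi_bill:,}, revenue back: {revenue:,}, net: {net:,}")
# tax out: 24,000,000, revenue back: 24,000,000, net: -500,000
```

The firm's take nets out to minus its real production costs: the money just circulates, as in point (2).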

I honestly don't think we're managing to imagine the world of employment altered so drastically by the 'AGI means pennies on the dollar labour costs' dream, and the subsequent effect this will have on the economy. Nobody is selling superyachts to the dwellers of favelas, and you can't (currently) run an economy on superyacht sales to billionaires alone.

And seemingly now we're talking megayachts versus superyachts. Up to gigayachts? But even there, the global superyacht market is estimated to reach $45 billion by 2032, while the current US economy is valued at around $30 trillion. So how are we going to replace all that consumption when fewer people have real disposable income from work anymore?

I think people are still stuck on the idea of "costs down, but sales prices stay the same" because they haven't really incorporated into their worldview that consumer demand may remain the same, but consumers' ability to pay those prices will drastically decline, because far fewer people will be in jobs generating high enough incomes once AGI has replaced those jobs.

You can accuse AI industry people of many things, but not having thought about this kind of thing really isn't one of them. There are lots of interviews of them musing about how this could all work out. There's a wide variety of takes but few of them are "the economy is going to just be normal after the singularity except we'll be very rich".

And I really don't think this is the reasoning @FiveHourMarathon is using to call the AI bulls religious, precisely the opposite really.

So if UBI is the way forward, then the owners of the AGI industries are going to pay for that via taxation, which means they are going to (1) have to sell their $500 house for as close to $500 as they can get, not the current house prices and (2) they're just transferring the money from one hand to the other, since the money to buy the house comes from the UBI they are being taxed to pay.

If you're taking a very small piece of every financial transaction in an economy run by AIs, you'll be fabulously wealthy, but thinking about this as the steady state misses a lot of the value proposition. Even ignoring the transitionary period, where you can offer the house for $5000 in a market that lags behind the new reality, when houses can be had for a fraction of the cost then society can simply have a lot more houses. If you take a step back and look at what a fully automated economy actually is, then if you own the backbone AIs you can just print a gigayacht for anyone with the raw materials, and the government can decide it has a monopoly on the raw materials and redistribute the proceeds from selling them. Raw materials here being land, natural resources, electricity, etc. When you grow the pie this much, the details of how you distribute pie slices, while important, can definitely be overcome. The amount of abundance in such an economy would be staggering.

you can just print a gigayacht for anyone with the raw materials

That is where the sticking point is. How do I get the raw materials? Joe the guy with no job (AGI and robots automated it away) and no stocks (because he didn't get into buying stock in the AI firms/never had the upbringing where you buy stocks) and no backup fortune is dependent on UBI (if all goes well) to live. Where does Joe, out of his UBI, get the gold, steel, energy and so forth to print a gigayacht?

When I read the handwaving about post-scarcity and AGI can just pull all this out of thin air, it does sound more akin to the miraculous multiplication of loaves and fishes than reality as we currently have it set up. Maybe AGI will change the world so that the guy living on a rubbish tip in a Third World slum can now access the same private beaches, luxury mansions, and gigayachts as Jeff Bezos - or maybe not.

You can accuse AI industry people of many things, but not having thought about this kind of thing really isn't one of them.

I do think there's an unconscious bias there where the thinking is predicated on "people like us, guys I know who work with me, my social bubble" and not "the guy who drives the bin lorry" because they don't understand what it's like to live in that socio-economic class.

Joe might be broke, but he still has antibiotics, vaccines, a smartphone, out of season fruits, climate control, and budget airline tickets; none of these could be had for any price 100 years ago.

Now, I don't actually think this level of AGI goes well at all, but at the limit it's everyone (or 99.99% of people) die or there's some level of redistribution that allows people to survive. In such a redistributive world it's not really implausible that even a pittance of UBI buys enormous amounts of material goods, much like a relative pittance today buys goods that a billionaire couldn't have had 100 years ago.

Still, if there's a society-wide gamble, and a small sector of people think the potential payoff for slamming as much investment as possible into AI is 'infinite wellbeing' for humanity, that distorts a ton of decision-making, especially if it turns out they're some combination of wrong and/or don't actually have sufficient resources to get to the promised land.

Son of a bitch. I hadn't even considered that.

The entire Oracle trajectory makes sense to me. Oracle was very profitable for a long time because they were a legal extortion company that grudgingly shipped a database. Once MS SQL Server and PostgreSQL got good enough to compete, and once new companies wised up enough to avoid Oracle in the first place, the writing was on the wall. If they didn't diversify they'd die. This maps to the increasingly wild Hail Mary throws they've been making. Each attempt that failed to materialize hypergrowth made the next attempt even more important. After their cloud offering ended up a distant fourth place and NFTs didn't pan out, what was left? AI, obviously.

Completely ignoring the potential of the technology, Oracle really has no place in the market. They don't have a foundation model: they aren't even trying to make one. At best, they seem to be aiming for some kind of position as a utility company for GPGPU compute. Less charitably, they seem to be selling their credibility by laundering dodgy debt through their corporate credit rating.

I'd assumed that everyone involved in this was as greasy as Ellison, and much like most of the tech industry, I have learned not to make the mistake of anthropomorphizing Larry Ellison.

Doing some more reading, it looks like Anthropic employs a disproportionate number of rationalists and effective altruists. Even if you think those two philosophies have some good parts, they definitely have some peculiar failure modes, and some are worse than others. This is the same philosophy that Anthropic's chief philosopher holds.

I'm not a fan of this at all. I grew up around true believers. I even held the snakes. They're scarier than a con man, because at least the con man has predictable goals.

In short, thank you for bringing this to my attention, and fuck you for putting this evil in my head.

I know this is an aside, but can anyone explain NFTs to me in a way that makes sense? I look at things like the linked article and I still can't figure out why anyone thought they were a good idea or even a workable idea. There must be some steelmanned case for "this is what they can be used for", I just haven't stumbled across it. 'Here's a thing that's totally digital. You can own a piece of it, except you won't own it. It's more like you have a licence for it. Yeah, just like paying Microsoft that subscription fee every month. But you can still make money off it by...' and that's the step where I break down.

If I squint, I can see that "I pay for the right to pixel number three thousand of this digital image" is kinda like owning a limited edition engraving or print. Fine. But it's still not the original. Maybe I can sell my print and get the price or even a bit more for it, but it's not the original drawing that is still in the artist's possession and that holds all the value. If the token is non-fungible, then my pixel three thousand can't be replaced by a swapped-in pixel.

Except it can? Or am I completely stupid? I can sell my token for money because nobody else can own a token like it. It's like selling a house.

But I can own a house to sell. I can't own the original digital piece of art that I'm selling my token from. Or can I?

This is what I'm struggling to understand.

It's basically a fancy, unbreakable, unforgeable Certificate of Authenticity that comes with your aunt's collectible plates. The thing is, Certificates of Authenticity are a huge part of the economy. A huge percentage of what people pay for in many consumer goods is essentially some form of branding value, the provenance of the good rather than the use value. The NFT is a way to totally detach that from the manufacturing process, while also making it more concrete.

Whine about it if you must, but a huge percentage of the global economy is run on the basis that a Rolex Submariner is worth more than an Armida built to the same specs, a shirt from Ralph Lauren or Lacoste is worth more than one from LAA, an Hermes purse is worth more than one from Quince, etc. The material cost or production cost differences between the fashion brand name items and the knockoff are minimal, or sometimes even reversed: LAA almost certainly pays their workers more than Ralph Lauren does. The use value difference is virtually zero for the consumer, except inasmuch as the consumer draws value out of having an "authentic" xyz.

You can argue that there is no material difference between an "authentic" Rolex and a superfake, or between an Hermes Birkin and any other leather tote bag, but a huge portion of the economy is built on the opposite assumption. Is there any sense in which any visual art is worth more when authentic versus printed at a sufficient resolution and quality? The industry is built on that assumption.

The NFT, if accepted as the financial representation of that branding value, allows that value to be entirely controlled and separated from the good itself. The sense of "participation" that people have when buying something from a stylish brand can be marketed separately from the good itself, leaving behind the current crisis level concerns about superfakes and copies.

That's the bit I don't understand. You buy a Rolex, you have a Rolex. Yes, it's ridiculous that we are paying for the branding, but the whole aura of luxury goods is bound up with the reputation for quality built up over time (and we saw the reverse with Burberry, where their reputation as a stuffy upper-class brand nose-dived once they started selling to chavs, though it seems their sales soared, so that's one example of where taking a brand downmarket paid off).

Buying a certificate that says "You own a picture of a Rolex" is what does not make sense to me.

Exclusivity is what makes luxury goods sell for such a high price. The reputation for high quality, outside of sometimes the very initial push that started the company, is a cope so that one does not have to admit they are so vain and easily manipulated that they bought a technologically inferior watch (automatics are technologically inferior to quartz watches) at car prices just to keep up with the Joneses. Expensive materials and manufacturing techniques are also a cope. If Burberry had kept the exact same quality and sold their products cheaper at prices chavs could now afford, upper-class people would still have turned away from the brand; they were buying it because it separated them from people like chavs.

In the digital world, exclusivity is difficult. Digital data is freely copiable. The only way you could get exclusivity of a digital product is through a database; a company would sell you an exclusive digital product and would ensure its exclusivity through control over their database. But that requires that you trust these people when they claim their product will be kept exclusive, and that you trust that they will still exist in the future. If you bought an expensive pet or mount in an MMORPG, that lasts until the servers shut down, and if someone makes a server emulator for the game then anyone can have the pet or mount.

NFTs enabled true exclusivity in the digital world; not only is the ownership of an NFT on the ledger not copiable, but you can also guarantee exclusivity through code; the code itself limits issuance, meaning that at no point can the issuer decide to make your NFT mass market and destroy its value. What Yuga Labs and other collectible people figured out is that the crypto crowd is ironic enough to be willing to purchase exclusivity tethered to something essentially worthless (generated ugly monkey avatars). After it took off, they started adding marketing cope to keep attracting less irony-pilled buyers and drive up prices: that it was actually membership in a club, that it gave you access to unique experiences, etc.

Note that I'm defending monkey pictures here not in the sense that I think they're a good thing, but that they're no more vapid than luxury goods that sell on being purposefully exclusive, they just have less of a fig leaf to hide that vapidness.

Yeah I can't defend the monkey picture application of NFTs, I'm talking about the underlying technology.

Burberry's sales have been cratering. It's just the lifecycle of the heritage brand. Lately they've been trying to reset to their "heritage" items, but the efforts have been flailing so far. You can tell, because I've read their puff pieces placed in the NYT and WSJ fashion sections, and they don't mention WWI, which is basically the birth of the brand. In WWI the iconic Burberry "Trench" coat was made for the trenches, and it was so good that, while it was never standard issue, British officers would buy them with their own money.

What I think could be a useful application of the NFT concept is if we manage to culturally tie the concept of owning an authentic rolex (and the utils that produces for people who own one) into owning the watch + the NFT associated with it.

It might be good to start with the much older fungible tokens (ERC-20 tokens), which have at times been called colored coins. A fungible token is any token where each one is just as good as any other. A lot of ERC-20s are used like stock certificates or company bucks; Polymarket bets, for instance, use the USDC ERC-20 token, which is pegged to the USD. There are all sorts of uses for these tokens in the crypto ecosystem, some great, like stablecoins, some silly; most meme coins you've heard of are ERC-20 tokens. With an ERC-20 you could sell general admission tickets to a concert, so anyone who shows up can prove their wallet owns at least one concert coin.

Now, what if instead of general admission tickets you want to sell tickets for specific seats? You could create a new ERC-20 token for every individual seat and only issue one of each, so whoever has that coin is the owner and no one else is, but that's a lot of overhead if you want to issue a lot of tickets. Enter ERC-721, the NFT. Now you can make unique tokens for unique assets.
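
To make the distinction concrete, here's a minimal in-memory sketch of the core bookkeeping behind the two standards. Real ERC-20/ERC-721 contracts are Solidity programs on Ethereum; this Python toy only shows the shape of the idea (balances per address vs. one owner per token ID):

```python
# Toy models of the two standards' core state. Not the real interfaces.

class FungibleToken:
    """ERC-20-style: only balances matter; any unit equals any other."""
    def __init__(self):
        self.balances = {}  # address -> amount

    def mint(self, addr, amount):
        self.balances[addr] = self.balances.get(addr, 0) + amount

    def transfer(self, src, dst, amount):
        assert self.balances.get(src, 0) >= amount, "insufficient balance"
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount


class NonFungibleToken:
    """ERC-721-style: each token ID is unique and has exactly one owner."""
    def __init__(self):
        self.owner_of = {}  # token_id -> address

    def mint(self, addr, token_id):
        assert token_id not in self.owner_of, "already minted"
        self.owner_of[token_id] = addr

    def transfer(self, src, dst, token_id):
        assert self.owner_of.get(token_id) == src, "not the owner"
        self.owner_of[token_id] = dst


# General admission: any concert coin is as good as any other.
ga = FungibleToken()
ga.mint("alice", 2)
ga.transfer("alice", "bob", 1)

# Assigned seating: token 1742 stands for seat 17-42 and has one owner.
seats = NonFungibleToken()
seats.mint("alice", 1742)
print(seats.owner_of[1742])  # alice
```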

Am I understanding this correctly: is the NFT not really "you can sit in this particular seat" because there's one seat but a zillion tokens? Or is there some item that backs up the NFT so that you could cash it in or translate it into physical assets?

No, each NFT is unique, so you can address it to a particular seat. Fungible tokens are not unique, so you can only use them for general admission.

There are NFTs as in "monkey pictures" and NFTs as the technology. NFTs enable digital property that escapes the issue of control over the database. Right now, digital ownership is based on (a) trusting that the database admins will not tamper with "your property" and (b) trusting that if they did, courts would adjudicate the issue and force the database owners to restore your property. In that context, it being your actual property is debatable; it's more of a contract you entered with the company to get certain services, a contract that is kept by that same company. The contract's original, binding version is not yours; the original is in the company's control and possession.

NFTs mean that this contract, for instance digital show tickets, is inscribed onto a public ledger that is hardened against tampering, not just an entry in Ticketmaster's database. A house is actually a great example of what an NFT could represent; owning a house is not having the house in your pocket, it's having the deed in your name. An NFT could be the deed to a house; it would be unique, impossible to (practically) forge, kept on a public, safe ledger (again, not just an entry in someone's database), and there's all sorts of neat stuff you could do with it: keys and locks that unlock with proof of ownership (or a revocable proof of access from the owner), legal and financial transactions (like mortgages) that are adjudicated automatically by code, etc. Whether we'll get there anytime soon is questionable, but that's what the technology enables. Anything unique could be represented as an NFT.

The monkey picture kind of NFT is mostly just playing around with the economic effect of introducing exclusivity to a market, untethered from any other inherent usage value. The image is typically not even part of the NFT (as in, it's not written into the blockchain, as that is expensive, at least if you want to keep it on the base chain; it's kept externally), so what you're paying for is more like a certificate. Often the image is created by an algorithm, so if you have the code and your token's properties you can recreate the image. You could recreate any of the other images too, but you wouldn't have the certificate to them. You can then use proof of ownership of that certificate programmatically; for instance, your ownership of a monkey could be verified by anyone and used as a ticket to access a party where you get your retinas burned.
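
As a sketch of that certificate-versus-image split (the ledger, trait function and names here are all hypothetical, just illustrating the mechanics):

```python
import hashlib

# The chain stores only token -> owner; the artwork's traits are recomputed
# off-chain from a deterministic function of the token ID. Anyone can redraw
# any monkey; only the ledger says who holds the certificate.
ledger = {7331: "0xALICE"}  # simplified on-chain state

def traits_for(token_id: int) -> dict:
    digest = hashlib.sha256(str(token_id).encode()).digest()
    return {
        "fur": ["brown", "golden", "zombie"][digest[0] % 3],
        "eyes": ["bored", "laser", "closed"][digest[1] % 3],
    }

def can_enter_party(wallet: str, token_id: int) -> bool:
    # Retina-burning-party ticket check: proof of ownership against the ledger.
    return ledger.get(token_id) == wallet

print(traits_for(7331))                    # reproducible by anyone
print(can_enter_party("0xALICE", 7331))    # True: the holder gets in
print(can_enter_party("0xMALLORY", 7331))  # False: copying the image doesn't help
```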

Okay, your first example makes better sense: tethering it to something that is a physical store of value, with all the fancy tech coming in with ledgers and so forth.

The monkey pictures stuff never made sense to me.

And I see that FTX sank money into the monkey pictures, then was named as colluding to artificially inflate the price in a lawsuit when the value went down after the hype faded:

“FTX has several deep ties to Yuga such that it would be mutually beneficial for both Yuga and FTX (as well as Sotheby’s) if the BAYC NFT collection were to rise in price and trading volume activity. Upon information and belief, given the extensive financial interests shared by Yuga, Sotheby’s and FTX, each knew that FTX was the real buyer of the lot of BAYC NFTs at the Sotheby’s auction at the time that Sotheby’s representatives were publicly representing that a ‘traditional’ buyer had made the purchase,” the lawsuit said. FTX is not named as a defendant.

I have to ask once again, how the fudge did this guy fool all the smart EAs and Bay Area rationalists into being his cheerleaders? I think this is one of those cases where a dumb idiot like me would have said "this is too good to be true, also buying monkey pictures is stupid" but the smart people got fooled with "shh, it's all Bayesian calculation and the blockchain and crypto! Crypto is the future!" besides him throwing money at liberal causes (the Carrick Flynn election attempt will live in my heart for aye).

The Sequoia Capital interview will never fail to be a thing of beauty and a joy forever:

The FTX competitive advantage? Ethical behavior.

An NFT ticket gives you an unforgeable token, but this token only means anything as long as venues accept it, which they'll contract out to Ticketmaster anyway, and they can revoke any access rights from a token as easily as if it were in their database.

NFT house ownership is a great example of something only useful for weird speculation. Otherwise, property rights only mean anything at all if enforced by violence, generally a state monopoly. But if you trust the state to respect that right, the state can just as well maintain the database. If you don't, the token being secure doesn't make the house any more secure.

Not that an NFT house deed is very secure, since by their nature they're bearer deeds. Which might be a useful concept to have for various legal purposes, but I don't think most people would be very comfortable with their house ownership being susceptible to burglary or $5-wrench password cracking, since to the extent that they function as NFTs, transactions made under duress would be irreversible by courts.

can anyone explain NFTs to me in a way that makes sense?

The best use case I've seen is for transferable, non-physical assets that provide value in the real world.

A concrete example would be a lifetime ticket to a band's shows.

This is the best analysis that I've seen with regards to OpenAI's business model. OpenAI in particular seems pretty hosed unless they can crack AGI or at least some sort of currently non-existent network, data or technological moat, or else their only option seems to be to angle their way into a bail-out.

Anthropic at least is a true believer in AGI and is well aware of the risks of over-capitalizing even if AI does end up making huge breakthroughs. They're better positioned, having made fewer spending commitments and having pivoted into enterprise, but they still ultimately need AGI or some sort of moat to make it in the mid-to-long term.

But inference is profitable!

I mean, it is, but selling tokens by itself is inevitably going to be a commoditized business. The price of inference is going to be a race to the bottom with compute buildouts and efficiency improvements, and selling tokens, for as long as Chinese models can get 90% as good within 6-9 months for a fraction of the price, is not going to make a trillion-dollar business.

Still, at the end of the day the finances don't really matter in my view; if they do crack AGI then the finances start rapidly fixing themselves and/or stop being relevant very quickly, and even if they don't and go bust all the researchers will still exist, and there still will be cheap distilled open-weights Chinese models served at commodity prices, the genie isn't going back into the bottle.

Is inference really profitable? Maybe in and of itself, but these companies use so many accounting tricks that it's hard to tell. Every new model requires huge R&D and capital expenditures, which have to be amortized over the lifespan of the product, which isn't infinite since these companies rely on constant expansion to stay in the hype cycle. Could OpenAI turn a profit if it stuck to selling its current models and cut its R&D and capital spending to something similar to a normal company? Or does it require the constant promise of a super product to keep the hype cycle going?

You can pay per token for open-weights models served by third parties that are a few months behind SOTA if you don't believe that the first-party cost per token is real.

Inference is unquestionably profitable in and of itself on API pricing, given that there are plenty of third-party inference providers selling tokens for dirt cheap and price/capability has fallen by orders of magnitude.

Whether inference is still profitable after factoring in R&D and all the costs that go into training each model is an open question; Epoch AI have a good post trying to estimate this.

Really, it's academic though, because even if it were profitable, the frontier labs can't actually cut the R&D and capital expenditures; if they tried, they'd get dragged down within 12 months by distilled models and commodity hardware, so in the end it's reach heaven [AGI] or die.
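
To illustrate that amortization point with a toy model (every number below is invented; see the Epoch AI post above for attempts at real estimates):

```python
# Toy model of "inference is profitable until you amortize training and R&D".
price_per_mtok = 10.0        # $ revenue per million tokens served
serving_cost_per_mtok = 3.0  # marginal compute cost per million tokens
volume_mtok = 200e6          # million-token units sold over the model's life
model_rnd_cost = 2e9         # $ to produce this model generation

marginal_profit = (price_per_mtok - serving_cost_per_mtok) * volume_mtok
net = marginal_profit - model_rnd_cost
print(f"marginal inference profit: ${marginal_profit / 1e9:.2f}B")  # $1.40B
print(f"after amortizing the model: ${net / 1e9:.2f}B")             # -$0.60B

# Commoditization is what makes this bite: the model only earns until cheaper
# distilled competitors force the price toward marginal cost.
breakeven_volume = model_rnd_cost / (price_per_mtok - serving_cost_per_mtok)
print(f"break-even volume: {breakeven_volume / 1e6:.0f}M million-token units")
```

With these made-up numbers, inference throws off healthy marginal profit yet the model generation never earns out; shorten the model's commercial lifespan and the hole gets deeper.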

OpenAI in particular seems pretty hosed unless they can crack AGI or at least some sort of currently non-existent network, data or technological moat, or else their only option seems to be to angle their way into a bail-out.

OpenAI's model is the base for Copilot, yes? Are they just hoping to eventually be bought out?

While Microsoft certainly has a history of paying big bucks for companies whose products it will then run into the ground (e.g. Skype), at $840 billion OpenAI might be a bit large for them to just swallow outright.

If OpenAI was years ahead of the competition, Microsoft might still shell out that kind of money to gain a monopoly on coding assistants, but that is thankfully not the case.

Seems unlikely. There are no indications Microsoft wants to buy them out, and it would be largely unviable without a huge drop in OpenAI's valuation regardless.

What do you mean by the expense profile? Some quick Googling, maybe inaccurate, shows these annual operating expenses for 2025:

Caterpillar - about $56 billion

Salesforce - about $31 billion

OpenAI - about $28 billion

Are you referring to OpenAI's plans for massively increased expenditures in the future?

Are you referring to OpenAI's plans for massively increased expenditures in the future?

I am.

I was going to make my usual argument about AI being used for target acquisition in Iran, new mathematical proofs, finding zero-day exploits in Firefox, general-purpose robots, just about everything...

But nobody's going to be persuaded by that who hasn't already been persuaded at this point.

What happens without the 'AI bubble'? In the minds of the finance class, it means that the big tech companies go back to share buybacks. They conducted hundreds of billions in buybacks from 2015 to 2022 and have since largely stopped in order to fund their investment in AI. Enormous amounts of money are being diverted from asset managers and financial elites to producers of HBM, to advanced packaging, to Nvidia, to power plants, construction workers, AI researchers... That's what they're unhappy about.

This is what definancialization looks like. It's anathema to a certain short-termist mindset that has predominated since the 1980s, a shareholder-first capitalism that has resulted in a hollowing out of productive industry. The beancounters preferred to offshore, to cut R&D, to cut investment, to cut costs.

There's a conflict between financial capitalism and productive capitalism and productive capitalism is taking back the reins. The beancounters are discovering that they're no longer in charge and are spreading fear and doubt to try and get the tech companies to change course. The tech companies know a bit more about technology than the beancounters and are fully committed to competition and investment.

And so we get these headlines:

Oracle and OpenAI drop Texas data center expansion plan, Bloomberg News reports

In September, the companies had announced plans for an additional potential expansion of 600 megawatts near the flagship Stargate site in Abilene, Texas. That capacity will now be fulfilled at one of the other data center campuses being built, a source familiar with the matter told Reuters on Friday.

They're just moving their plans around. The other article:

But investors have grown worried about how it would fund the data center expansion needed to serve OpenAI and other customers, including Elon Musk's xAI and Meta.

In December, the company said it expects capital expenditures for fiscal 2026 to be $15 billion higher than the $35 billion figure the company estimated during its first-quarter earnings call.

They're spending more money and investors are upset ('it should've been me getting that money, not people working in the real world!'). Oracle is a relatively small company, but they used to do enormous buybacks, $150 billion from 2015 to 2022. Now they've stopped; now they're issuing shares and borrowing money to invest. Investors don't like that at all. Thus we get this bizarre discourse about how supposedly all these companies are selling API access at a loss, when open-source models are very cheap and suggest huge profits on inference. Then there's all this talk about how R&D costs should be classified - beancounter talk. The people who actually know the real numbers at Google, Microsoft, Amazon have clearly made their decision to spend big; why should we second-guess them based on vibes?

One thing which might happen, which would be hilarious, is that OpenAI is slightly too early but Anthropic is just right. That is, AGI is going to take just slightly longer than expected and OpenAI implodes from overinvestment while Anthropic rides to the moon.

I'm a lot more optimistic about Anthropic's business plan than OpenAI's. They recognized that getting their hooks into enterprise users from the get-go is a better strategy than having hundreds of millions of users who don't pay.

Same, and I'm also more optimistic about Claude than ChatGPT. Alignment and safety did turn into effectively usable capabilities.

I think we're in an AI, and tech in general, bubble (Cory Doctorow has a good piece explaining why tech is overvalued: only the promise of growth keeps tech P/E above other industries, and eventually this has to settle down). As far as the broader market goes, I'm not sure. I'm up 12% this year on a strong group of rail/industrial/shipping and biotech stocks, but I don't know enough about the broader economy to really say if my picks are representative.

I think this may be the beginning of a long NVIDIA/AI rout, but I have no idea if that will ripple to the rest of the economy. Iran and oil seem to be more dominant IMO.

Cory Doctorow has a good piece explaining why tech is overvalued

While reversed stupidity isn't intelligence, Doctorow is up there with Jim Cramer when it comes to counterpredictions and bad or misunderstood models. He's not saying things because they're true, or because he believes they're true, or even that he's really capable of 'belief' in any externally validated way. He's saying them because he thinks they'll persuade his readers, and you should take that as the insult it's intended as.

Again, that doesn't mean that he's wrong. Indeed, he's particularly frustrating even when I agree with him! But you'll notice none of the evidence he brings actually supports his argument, and often isn't even evidence.

If you actually have a link to a specific one, I'll either quite happily point out the specific parts matching the pattern or eat crow. But once you notice it, you'll see he can't stop doing it.

((which makes the recent 'ai transcripts as "like masturbating in front of a stranger"' , yes that's a direct quote, a little interesting in a way he didn't intend, and doubly stupid in the way he did.))

Well, he's certainly right about this one. I'll see if I can find the specific piece, but there's no magic that justifies a higher P/E ratio for tech companies vs., say, Target or Union Pacific Rail other than promises of growth. At some point NVIDIA, GOOGL and AMZN have to trade in the same P/E range as the rest of the S&P. The only thing that justifies a higher ratio is a promise of above-market growth, which will eventually stop at some point. I think we are currently at that point.
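
The standard way to formalize that is a discounted-earnings model: for a simple growing perpetuity, fair P/E comes out to roughly 1 / (r - g), where r is the required return (higher for riskier firms) and g is long-run earnings growth. A quick sketch with assumed rates, not estimates of the actual companies:

```python
# Fair P/E for a growing perpetuity: P/E ~ 1 / (r - g).
# r = required return (risk), g = long-run earnings growth. Inputs assumed.
def fair_pe(r: float, g: float) -> float:
    assert r > g, "growth must stay below the discount rate or value diverges"
    return 1.0 / (r - g)

print(f"mature retailer (r=10%, g=3%): P/E ~ {fair_pe(0.10, 0.03):.0f}")  # ~14
print(f"growing tech    (r=10%, g=6%): P/E ~ {fair_pe(0.10, 0.06):.0f}")  # ~25
print(f"same tech, growth story over : P/E ~ {fair_pe(0.10, 0.03):.0f}")  # ~14
```

Three points of assumed growth is roughly the difference between Target's multiple and Microsoft's below; when the above-market growth stops, the premium goes with it.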

The only thing that justifies a higher ratio is a promise of above-market growth

Also risk; it's actually more risk than growth.

Margins too, kind of, but that's mostly just "risk with extra steps"

P/E already reflects margins. But yes, risk ought to affect P/E.

Do the tech companies even really have that high of a P/E ratio anymore? Microsoft's doesn't really stand out. Amazon's seems a little high, but not absurdly so.

Google has a P/E of 27, AMZN 29, MSFT 25, NVDA 36, Target 15.

Target...?

As a comparison for P/E of a company in a mature industry.

Oh, there's a company by that name. I thought you meant some sort of "average index benchmark P/E" or something.

Is Target a good comparison, given that they're bleeding foot traffic like a gut-shot deer?

shipping

I wonder what that's going to look like in a week. It seems like something should happen.

It's weird, man, because the war increases rates (good for shipping) but also increases the risk of losses (bad for shipping). If the war continues long-term, there will be less shipping volume overall. The US backstop of oil shipping is another wildcard that I don't know how to interpret.

There's a lot of shipping that isn't oil shipping, and a lot of oil shipping that doesn't go through Hormuz; the biggest impact on shipping as a whole is probably fuel costs.

The fact that Europe is now almost entirely at the mercy of the US for its natural gas seems conspicuous.

Well, I can think of at least one tinpot warmonger besides Trump who might be willing to sell us natural gas which does not have to pass through Hormuz. Not that I would prefer to deal with him, but Trump can't exactly make us freeze to death next winter.

"Tinpot" is pretty good for Carney but he hasn't started any wars yet! :-)

(Now if you want to talk about people who have the ability to get large amounts of natural gas to Europe, you are certainly dealing with a smaller set -- I'm not sure Trump's really in it either though so you might be stuck with the Ruskies after all...)

I'm going all in on nothing. The market overall continues posting at worst anemic returns as it has for the past few months, but no crash. And yes, I am long the market.

People often forget that the price of assets is tied not to some absolute and abstract sense of value but to the relative standing of everything vis a vis everything else, including money.

The market won't crash just because things go poorly, for it to crash there has to be an imbalance where something becomes a better place to store nominal value than (tech) stocks.

There was an interesting article arguing that in the last 15+ years there was a lot of "mandatory" buying of the stocks that 401(k)s invest in.

As boomers retire and more people divert savings to current spending, there may be less "mandatory" buying of stock.

The Chinese chose housing as the store of value. But then you end up with a different set of incentives, resulting in people putting their savings into multiple housing units that won't necessarily be used for any actual housing. Which I'm not even going to criticize, since that's not any more silly than storing value in pieces of shiny metal. If they came to the conclusion that an apartment is worth a certain value, I don't second-guess that despite the lack of obvious housing utility.