Culture War Roundup for the week of March 2, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Software giant Oracle Corporation is laying off thousands of workers and killing their Texas data center plans, per Reuters and Bloomberg. It appears that their capital expenditures have gotten ahead of their ability to pay for them, and now they face the regrettable need to say it out loud shortly before markets close on a Friday afternoon.

In December, the company said it expects capital expenditures for fiscal 2026 to be $15 billion higher than the $35 billion figure the company estimated during its first-quarter earnings call.

The layoffs will impact divisions across Oracle and may be implemented as soon as this month, the Bloomberg report said, citing people familiar with the matter. Some cuts will be aimed at job categories that the company expects will shrink due to AI.

This may be indirectly tied to the Iran conflict, as Middle East sovereign wealth funds have begun pulling back from investment.

I'm interested to see the fallout of this one. My understanding is that the Ellison clan is fairly tight with the Trump admin.

Beyond that, I have concerns that this may be the match that lit the fuse on AI spending. I have spent the last six months trying to figure out why these valuations made any sense whatsoever. The expense profile of companies like Anthropic and OpenAI looked a lot more like Caterpillar to me than Salesforce. When it came to Oracle, I couldn't make sense of it at all.

In terms of explanations, the only three I could come up with were that I was:

  1. Missing critical information
  2. Retarded
  3. Right

I still don't know which one it is.

Some of you here are clearly smarter and more educated than me. What do you think I'm missing here? My gut prediction is that this spirals into an even bigger flight from capital in the next six months, which causes holy hell on the retail market because the average investor is more leveraged now than they have been at any point in my lifetime. I'm also assuming it'll kill quite a lot of "LLM Wrapper" companies, like the one run by fear porn expert Matt Shumer.

I assume Google will be OK.

Beyond that, I don't have any idea.

Any predictions?

This is the best analysis I've seen regarding OpenAI's business model. OpenAI in particular seems pretty hosed unless they can crack AGI or at least build some sort of currently non-existent network, data, or technological moat; otherwise their only option seems to be to angle their way into a bail-out.

Anthropic at least is a true believer in AGI and is well aware of the risks of over-capitalizing even if AI does end up making huge breakthroughs. They're better positioned, having made fewer spending commitments and having pivoted into enterprise, but they still ultimately need AGI or some sort of moat to make it in the mid-to-long term.

But inference is profitable!

I mean, it is, but selling tokens by itself is inevitably going to be a commoditized business. The price of inference is going to be a race to the bottom as compute buildouts and efficiency improvements continue, and as long as Chinese models can get to 90% as good within 6-9 months for a fraction of the price, selling tokens is not going to make a trillion-dollar business.
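
To put rough numbers on that, here's a back-of-envelope sketch. Every figure in it is a made-up assumption for illustration, not an actual price or cost:

```python
# Back-of-envelope on what commoditization does to token margins.
# Every number below is a made-up assumption, not a real price or cost.

serving_cost = 1.00      # hypothetical cost to serve 1M output tokens ($)
frontier_price = 8.00    # hypothetical frontier-lab price per 1M tokens ($)
commodity_price = 2.00   # hypothetical third-party price for a "90% as good" model ($)
quality_premium = 1.5    # assumed multiple customers will pay for the last 10%

# Once a near-equivalent model is cheap, the frontier price is effectively
# capped at the commodity price times whatever quality premium holds up.
price_ceiling = commodity_price * quality_premium
sustainable_price = min(frontier_price, price_ceiling)

margin_before = (frontier_price - serving_cost) / frontier_price
margin_after = (sustainable_price - serving_cost) / sustainable_price

print(f"gross margin at frontier pricing: {margin_before:.0%}")
print(f"gross margin once undercut:       {margin_after:.0%}")
```

The point isn't the specific numbers, just that the margin ends up tracking the cheapest near-substitute.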

Still, at the end of the day the finances don't really matter in my view. If they do crack AGI, then the finances start rapidly fixing themselves and/or stop being relevant very quickly; and even if they don't and go bust, all the researchers will still exist, and there will still be cheap distilled open-weights Chinese models served at commodity prices. The genie isn't going back into the bottle.

Is inference really profitable? Maybe in and of itself, but these companies use so many accounting tricks that it's hard to tell. Every new model requires huge R&D and capital expenditures, which have to be amortized over the lifespan of the product, and that lifespan isn't infinite, since these companies rely on constant expansion to stay in the hype cycle. Could OpenAI turn a profit if it stuck to selling its current models and cut its R&D and capital spending to something similar to a normal company? Or does it require the constant promise of a super product to keep the hype cycle going?

If you don't believe that the first-party cost per token is real, you can pay per token for open-weights models served by third parties that are a few months behind SOTA.

Inference is unquestionably profitable in and of itself on API pricing, given that there are plenty of third-party inference providers selling tokens dirt cheap and the price/capability ratio has fallen by orders of magnitude.

Whether inference is still profitable after factoring in R&D and all the costs that go into training each model is an open question; Epoch AI have a good post trying to estimate this.
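
A toy version of that kind of estimate, with invented placeholder figures rather than anything taken from the Epoch post, looks roughly like this:

```python
# Toy version of the "is inference profitable after training costs?" question.
# All figures are invented placeholders; the point is the shape of the calculation.

training_cost = 3e9          # hypothetical one-off cost to train the model ($)
lifespan_months = 12         # assumed months before the model is effectively obsolete
tokens_per_month = 5e13      # assumed output tokens served per month
price_per_mtok = 8.00        # assumed revenue per 1M output tokens ($)
serve_cost_per_mtok = 1.00   # assumed marginal inference cost per 1M tokens ($)

# Gross profit from serving tokens over the model's useful life...
gross = (price_per_mtok - serve_cost_per_mtok) / 1e6 * tokens_per_month * lifespan_months
# ...minus the training bill amortized against that same lifespan.
net = gross - training_cost

print(f"inference gross profit over model life: ${gross / 1e9:.1f}B")
print(f"after amortizing training cost:         ${net / 1e9:+.1f}B")
```

Shorten the assumed lifespan or shrink the token volume and the sign flips, which is exactly why it remains an open question.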

Really, it's academic though, because even if it were profitable, the frontier labs can't actually cut the R&D and capital expenditures; if they tried, they'd get dragged down within 12 months by distilled models and commodity hardware, so in the end it's reach heaven [AGI] or die.