Culture War Roundup for the week of October 13, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


AI is Too Big to Fail

You've probably been hearing that we're in an AI bubble. I think that framing is both loaded and reductive, and I'd like to take some time to help people understand the nuances of the situation we're in, because it runs deep. To be clear, I am pro-AI as a technology and I have an economic interest in its success (and for reasons I'll discuss, so should you); however, there is a lot more going on that I don't agree with, and I'd like to raise awareness of it.

AI capital investments are running far ahead of expected returns, and the pace of investment is accelerating. Analysts estimate AI-linked activity drove roughly 40–90% of H1-2025 U.S. GDP growth and 75–80% of S&P 500 gains. If it weren't for AI investment, the United States would likely be in a recession right now. According to Harris Kupperman of Praetorian Capital, "the industry probably needs a revenue range that is closer to the $320 billion to $480 billion range, just to break even on the capex to be spent this year." That sure sounds like a bubble, but thinking of it as just another bubble would be doing a disservice to the magnitude of the dynamics at play. To understand why, we have to explore the psychology of the investors involved and the power circles they're operating in.
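
For a sense of how a break-even figure like that can be reached, here's a back-of-the-envelope sketch. Every input (capex level, depreciation horizon, gross margin) is my own illustrative assumption, not Kupperman's actual model:

```python
# Back-of-the-envelope break-even sketch for AI capex.
# All inputs are illustrative assumptions, not Kupperman's model.

capex = 400e9            # assumed 2025 AI capex (midpoint-ish of public estimates)
useful_life_years = 4    # assumed GPU/datacenter depreciation horizon
gross_margin = 0.50      # assumed gross margin on AI revenue

annual_depreciation = capex / useful_life_years
# Revenue needed for gross profit to cover depreciation alone
# (ignores opex, power, interest, and any actual return on capital):
breakeven_revenue = annual_depreciation / gross_margin

print(f"Annual depreciation: ${annual_depreciation / 1e9:.0f}B")
print(f"Break-even revenue:  ${breakeven_revenue / 1e9:.0f}B per year")
# Output: $100B of depreciation, $200B/yr of revenue just to stand still.
# Layering in operating costs and a required return on capital pushes
# the number toward the $320-480B range quoted above.
```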

The elites of Silicon Valley have cozied up to Donald Trump in a way that's unprecedented in the history of modern democracy. They've lined the pockets of his presidential library foundation, supported his White House renovations, paid for his inauguration, and provided a financial lifeline for the Republican Party. Between Elon Musk, David Sacks, Sriram Krishnan, Peter Thiel, and his acolyte J.D. Vance, Trump has been sold the story that AI dominance is a strategic asset of vital importance to national security (there's probably also a strong ego component: America needs "the best AI, such a beautiful AI"). I'm not speculating; this is written explicitly into the BBB and the language of multiple executive orders. These people think AI is the last thing humans will invent, and that the first power to have it will reap massive rewards until the other powers can catch up. As such, they're willing to bend the typical rules of capitalism. Think of this as the early stages of a wartime economy.

[...]

I'm going to say something that sounds a little crazy, but please bear with me: from a geopolitical perspective, what we're doing is a rational play, and depending on how valuable/powerful you expect AI to be and how hostile you expect a dominant China to be, possibly a near-optimal one. If you're a traditional capitalist, it probably looks like a bad move regardless of your beliefs about AI; you're going to need to put those instincts aside. This is not a traditional economic situation. We're in an arms race, and we're veering into a wartime economy, or at least that's how the powerful view it.

[...]

Returning to the traditional capitalists, I'd like to note that they aren't wrong; this AI push is unsustainable (for us). I'm not sure how long we can run our economy hot and directed before the wheels come off, but my napkin estimate is 5 to 10 years, though we'll likely lose the political will to keep pushing before that point if the AI transformation is underwhelming and we still have a democracy. To further support the traditional capitalists' position: if AI unwinds at that point having under-delivered, the economic damage will probably be an order of magnitude greater than if we had just let the bubble deflate naturally. This will be exacerbated by the favorable treatment the administration will make sure the Oligarchs receive; we will suffer, they will coast.

Where does all this leave us? For one, you'd better hope and pray that AI delivers a magical transformation, because if it doesn't, the whole economy will collapse into brutal serfdom. When I say magic here, I mean it: because of the ~$38T national debt bomb, a big boost is not enough. If AI doesn't completely transform our economy, the massive capital misallocation combined with the national debt is going to cause our economy to implode.
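
As a crude scale check on that claim (the interest rates and GDP figure below are illustrative assumptions, not a fiscal model), consider what the carrying cost of ~$38T looks like on its own:

```python
# Crude scale check on the "~$38T debt bomb" claim.
# The interest rates and GDP figure are illustrative assumptions.

debt = 38e12   # approximate U.S. national debt
gdp = 29e12    # rough U.S. nominal GDP (assumption)

for rate in (0.03, 0.04, 0.05):   # assumed average interest rate on the debt
    interest = debt * rate
    print(f"At {rate:.0%}: ~${interest / 1e12:.2f}T/yr interest, "
          f"{interest / gdp:.1%} of GDP")
# Output: roughly $1.1T to $1.9T per year, about 4 to 6.5% of GDP,
# in carrying costs alone. A merely "big" productivity boost gets
# absorbed by this before it ever touches the principal.
```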

I don't have the expertise needed to evaluate the economic arguments, so I'm mainly posting this here to solicit feedback on the linked article.

It's probably too late to avoid a future of "brutal serfdom" regardless of what happens, even if we reach singularity escape velocity. Power will do what it has always done, which is centralize in the hands of a few to the detriment of the many; turning every human into a cyborg god won't change that (you simply have the problem of organizing the coexistence of cyborg gods rather than the problem of organizing the coexistence of baseline humans). To think otherwise is to implicitly rely on a Rousseauian (and anti-Hobbesian, channeling Hlynka) presupposition that people are basically good and just, and that suffering is merely an incidental byproduct of material lack, which we have reason to be skeptical of. The second half of the 20th century provided what were probably the most fertile material and social conditions for freedom ever seen in human history; wherever we're going now, we're leaving freedom in the rear-view mirror.

"That's a no from me, dawg" as the great bard of our time, Randy Jackson, would say.

You actually did the article's author a big favor with your down-select of sections. Buried in a lot of emotionality are some interesting economic and geopolitical points for debate.

But so much of the article is full of things like this:

Disrupt the disruptors. Boycott companies that don't demonstrate integrity. The future isn't lost yet, we can still create the world we deserve.

How can a company "demonstrate integrity"? This is the same wishy-washy style of assertion as "be an ally" or "speak truth to power." It's just so sophomoric.

If I have to pick just one cognitive and logical failing from the article, it's that it falls directly into the fundamental attribution error trap multiple times:

These people think AI is the last thing humans will invent

and

The people in power aren't willing to risk that outcome, and they've been bewitched by the idea of being the only ones to have superintelligence, so they're willing to go all-in to win big and fast.

and

Remember that these people place incredible value on being the first to superintelligence

and

The dynamic in the valley is that the people at the top know the game already, and they intend to exploit it to its fullest

Then you also have whoppers like this:

I wouldn't be surprised if Larry Ellison already has a contract signed in blood for this stashed away somewhere to whip out once he knows he can get away with implementing it.

and, in the "conclusion."

We can fight back though, we already have the weapon of our liberation: the power of the purse. You're not powerless. Boycott campaigns forced Disney to walk back Jimmy Kimmel's suspension, that was our power in action. If you care about a just world, don't do business with unethical companies. Demand that the titans of tech change, and if they don't, stop feeding them your dollars.

Then, there's the truly tinfoil hat level of conspiracy thinking:

They've been gutting the IRS and talking about reforming the tax code for a long time, but the plan I see them positioning for is sinister. By raising the nominal tax rate at the same time that they reform the tax code, they can engineer in quasi-legal loopholes that the wealthy can take advantage of by design, probably involving digital coins. They get good talking points ("time to tighten our collective belts for the good of the nation," etc) while letting their friends dodge most real responsibility.

Team Trump (which is really being controlled by the Silicon Valley oligarchs) is going to revamp the IRS in order to support a crypto investment scheme? They're going to pull this off under the radar, yet in plain sight? And the tens of thousands of bureaucrats at the IRS, FTC, SEC, etc. who would need to be "in" on this scheme are just going to be unaware of it happening? Or are they in on it? And what about when the Big Banks get wind of this? I thought they controlled Congress. No, wait, that's Silicon Valley. Or Big Oil. No, I meant Big Pharma.

While above the median level of "orange man bad / big tech bad," it isn't much above that level. I don't know what this author's politics are and, unlike him, I will not presume to know his personal cognitive state or full internal belief and value structure.

On a content-only level, I look at this as another flavor of AI doomerism. This isn't paperclip-machine doomerism; this is economic-theory doomerism. "We've put so much money into AI that it has to work out!" But money doesn't just disappear if a business fails. If the business burnt through all its money, that's probably bad for that business's particular investors, but it also means the money went somewhere: other vendors, other businesses. The market moves the money the best it can. Of course, I'll admit this isn't necessarily a great outcome. It's not as if bubbles and overinvestment are good things in the long run, right? Regardless, while growth may flatline (which is bad), the money is still moving. The reason 2008 was so frightening was that it looked like money might actually stop moving. A system-level credit crunch means that even really good and obvious investments, or simple spending, can't happen because of a lack of liquidity.

But back to the main economic point: are we so "all in" on AI that if it "fails to deliver" we're 100% giga-fucked? Sure, if we keep all of these definitions slippery and uncertain, why not. On the "failure to deliver" point, I don't see any real rubric or threshold from the author beyond "you better hope and pray that AI delivers a magical transformation." Okay, so we need the ROI on AI to be approximately one Abracadabra. Got it. If we don't get to this magical level of returns, what, exactly, happens? Do all the BigAI firms go insolvent overnight? Locked-out employees, broken keycards? Do the new datacenters and chip fabs just immediately fall into a state of disrepair and end up looking like the steel mills outside of Youngstown, Ohio? Again, I'll be charitable here and say that if the BigAI bubble bursts hard, it probably is recession time for a while. But the money doesn't evaporate, and all of the human capital doesn't commit suicide. There is a VERY direct line to be drawn from the dot-com bubble of the late '90s and early 2000s all the way to the Silicon Valley rocketship beginning in... 2009? Or earlier? Google IPO'd in 2004, IIRC.

Doomerism isn't better than irrational exuberance just because it is the inverse. This is the cowardice of cynicism, and of pessimism more generally. "I hope I'm wrong, but I'm probably not (unsaid: because I'm just so dang smart!)" isn't the flex people think it is. You're prognosticating a negative outcome probably as a means of preemptive emotional self-satisfaction. I'm not against hearing about downsides to AI; in fact, I've posted about them myself at least twice. All I'm looking for is a cogent enough argument on the hows of Things Falling Apart.

An excellent takedown of a really dumb article. Good job. I hate it when pundits try to describe and predict our unbelievably complex national/global economy using a couple of pithy ideas ("We're investing too much into AI!" and "Silicon Valley is allied with Trump!"), exaggerated beyond the point of all usefulness. If we overspend on AI, well then darn, we've somewhat misallocated our abundant resources. Maybe we'll optimize better next decade.

Some of the point about the money with AI is that the current financial engineering means a lot of money is coming into existence on paper (especially in these circular "Company A invests in Company B, but Company B then immediately uses that money to buy compute from Company A, valued in accordance with the new benchmark" deals) without ever really escaping the loop. This is now somewhat obscuring a state of pretty clear stagnation and weakness in the broader economy.
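
A minimal toy sketch of that round trip (hypothetical companies and amounts; real deals involve equity stakes, multi-year commitments, and third parties, but the loop has the same shape):

```python
# Toy ledger for a circular "invest, then buy compute back" deal.
# Companies, amounts, and the valuation multiple are hypothetical.

ledger = {
    "A_cash": 100.0,           # vendor/investor: sells compute
    "B_cash": 0.0,             # startup: buys compute
    "A_revenue_booked": 0.0,
    "B_paper_valuation": 0.0,
}

def circular_deal(ledger, amount, valuation_multiple=10):
    # Step 1: A invests `amount` in B (cash moves A -> B);
    # the priced round implies a headline valuation for B.
    ledger["A_cash"] -= amount
    ledger["B_cash"] += amount
    ledger["B_paper_valuation"] = amount * valuation_multiple
    # Step 2: B immediately spends the same cash on A's compute
    # (cash moves B -> A), and A books it as revenue.
    ledger["B_cash"] -= amount
    ledger["A_cash"] += amount
    ledger["A_revenue_booked"] += amount
    return ledger

print(circular_deal(ledger, amount=10.0))
# A's cash ends where it started, yet A reports +10 in revenue and B
# carries a 100 "valuation": new paper value, no new outside cash.
```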

This is a good point. Valuation voodoo can actually lead to meaningful damage when a bubble pops because it isn't an actual representation of cash flow.