Culture War Roundup for the week of October 13, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

AI is Too Big to Fail

You've probably been hearing that we're in an AI bubble. I think that framing is both loaded and reductive, and I'd like to take some time to help people understand the nuances of the situation we're currently in, because it runs deep. To be clear, I am pro-AI as a technology and I have an economic interest in its success (and for reasons I'll discuss, so should you); however, there is a lot going on here that I don't agree with and that I'd like to raise awareness of.

AI capital investments are running far ahead of expected returns, and the pace of investment is accelerating. Analysts estimate AI-linked activity drove roughly 40–90% of H1-2025 U.S. GDP growth and 75–80% of S&P 500 gains. If it weren't for AI investment, the United States would likely be in a recession right now. According to Harris Kupperman of Praetorian Capital, “the industry probably needs a revenue range that is closer to the $320 billion to $480 billion range, just to break even on the capex to be spent this year.” It sure sounds like a bubble; however, thinking of it as just another bubble would be doing a disservice to the magnitude of the dynamics at play here. To understand why, we have to explore the psychology of the investors involved and the power circles they're operating in.
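
To see roughly where a figure like that comes from, here's a back-of-the-envelope sketch; every input is my own illustrative assumption, not Kupperman's actual model:

```python
# Back-of-the-envelope AI capex break-even; all inputs are illustrative guesses.
capex = 400e9            # assumed 2025 AI capex (~$400B)
useful_life_years = 4    # assumed depreciation horizon for GPUs/data centers
gross_margin = 0.5       # assumed gross margin on AI revenue

annual_depreciation = capex / useful_life_years
required_revenue = annual_depreciation / gross_margin

print(f"Annual depreciation: ${annual_depreciation / 1e9:.0f}B")
print(f"Revenue needed just to cover it: ${required_revenue / 1e9:.0f}B")
# ~$200B/yr on these inputs; shorter hardware lifetimes or thinner margins
# push the number toward the $320-480B range quoted above.
```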

The elites of Silicon Valley have cozied up to Donald Trump in a way that's unprecedented in the history of modern democracy. They've lined the pockets of his presidential library foundation, supported his White House renovations, paid for his inauguration, and provided a financial lifeline for the Republican Party. Between Elon Musk, David Sacks, Sriram Krishnan, Peter Thiel, and his acolyte J.D. Vance, Trump has been sold the story that AI dominance is a strategic asset of vital importance to national security (there's probably also a strong ego component: America needs "the best AI, such a beautiful AI"). I'm not speculating; this is written explicitly into the BBB and the language of multiple executive orders. These people think AI is the last thing humans will invent, and that the first power to have it will reap massive rewards until the others can catch up. As such, they're willing to bend the typical rules of capitalism. Think of this as the early stages of a wartime economy.

[...]

I'm going to say something that sounds a little crazy, but please bear with me: from a geopolitical perspective, what we're doing is a rational play, and depending on how valuable/powerful you expect AI to be and how hostile you expect a dominant China to be, possibly a near-optimal one. If you're a traditional capitalist, it probably looks like a bad move to you regardless of your beliefs about AI; you're going to need to put those instincts aside. This is not a traditional economic situation. We're in an arms race, and we're veering into a wartime economy, or at least that's how the powerful view it.

[...]

Returning to the traditional capitalists, I'd like to note that they aren't wrong; this AI push is unsustainable (for us). I'm not sure how long we can run our economy hot and directed before the wheels come off; my napkin estimate is 5-10 years, though it's likely we'll lose the political will to keep pushing before that point if the AI transformation is underwhelming and we still have a democracy. To further support the traditional capitalists' position: if AI unwinds at that point having under-delivered, the economic damage will probably be an order of magnitude greater than if we had just let the bubble deflate naturally. This will be exacerbated by the favorable treatment the administration will make sure the oligarchs receive; we will suffer, they will coast.

Where does all this leave us? For one, you better hope and pray that AI delivers a magical transformation, because if it doesn't, the whole economy will collapse into brutal serfdom. When I say magic here, I mean it; because of the ~$38T national debt bomb, a big boost is not enough. If AI doesn't completely transform our economy, the massive capital misallocation combined with the national debt is going to cause our economy to implode.

I don't have the expertise needed to evaluate the economic arguments, so I'm mainly posting this here to solicit feedback on the linked article.

It's probably too late to avoid a future of "brutal serfdom" regardless of what happens, even if we reach singularity escape velocity. Power will do what it has always done, which is centralize in the hands of a few to the detriment of the many; turning every human into a cyborg god won't change that (you simply have the problem of organizing the coexistence of cyborg gods rather than the problem of organizing the coexistence of baseline humans). To think otherwise is to implicitly rely on a Rousseauian (and anti-Hobbesian, channeling Hlynka) presupposition that people are basically good and just, and that suffering is merely an incidental byproduct of material lack, which we have reason to be skeptical of. The second half of the 20th century provided what were probably the most fertile material and social conditions for freedom ever seen in human history; wherever we're going now, we're leaving freedom in the rear-view mirror.

I'll worry about AI when an AI company makes a profit.

Now, obviously, this will be bad for the white-collar job market, but that market is overheated anyway. AI won't be transformative the way Microsoft Office was transformative, and we know this because it can't make a profit. You need a business case to change the world, sorry.

Maybe it'll happen eventually; the printing of books took ~600 years to have enough of a business case to change the world. But panicking about the employment of scribes in 850 AD was very premature; Gutenberg was not on the horizon yet.

Hey, Gemini 3 Pro looks pretty damn magical, if the clips people are posting on Twitter are real. Of course there's no actual use case for one-shotting a crappy replication of the Windows operating system (documents, paint, calculator, and so on) in HTML, recreating a crap version of The Binding of Isaac or Vampire Survivors, or making some mid music to go along with the game. But there's no use case for going to the moon either; it's a way to flex, a costly show of ability. The real strength of an AI is in things that can't easily be shown off visually, stuff that needs it to be fully released first!

Check out what people have been saying, it's pretty good: https://x.com/search?q=gemini%203.0&src=typeahead_click

Sonnet 4.5 is genuinely creative in its writing IMO. Every six months, there's a significant improvement in capability.

And the US does need some kind of qualitative edge; otherwise China will wipe the floor with America. They're bigger, they have more talent, and they have more energy and industry. Their government is just smarter too; they don't feel the need to shut down the government over whether illegals get healthcare or not, they're just not in that genre.

It's probably too late to avoid a future of "brutal serfdom" regardless of what happens, even if we reach singularity escape velocity.

Why would we even be serfs? What do we have to offer? Unless things go well and the AIs are nice, the situation is totally different from human politics. It's not 'new king sweeps in and loots your city, forces you to pay heavy taxes'. The new king would have no need for meatbags; he could dispense with legacy humans. Replace them with androids or catgirls or whatever he finds aesthetic. Turn the world to computronium, planetary disassembly, full sci-fi. Serfs need not exist unless he's feeling sadistic.

Many people thinking about ASI are still way too politics-brained. ASI is above and beyond politics as we understand it. I cannot imagine a world where intelligence caps out anywhere near human level. 20 watt brains are miserly in the grand scheme of compute. When we get AGI, ASI immediately follows.

The use case of one-shotting crappy Vampire Survivors is that you could easily manage a supervised build of a crappy Hades or your ARPG of choice. I'm just insanely impressed with how good recent models are at coding. I'm getting them to produce thousands of lines of multithreaded code in Unity's Burst compiler; it takes sizable amounts of guidance and bugfixing, but I'm still getting code in days that would take me five times as long to produce on my own. Having an assistant that just knows 'Oh yeah, you gotta delete the cache when you change GPU-passing structs', and can just 'tweak that native array to persist height data so we don't need to sample the resultant mesh', is so insanely powerful. As soon as someone solves post-training and long-term context, we're probably gonna see a huge amount of digital-friendly jobs get wiped out in a single wave.

We're going to see some insanely ambitious indie games in the next five years. One programmer can now do the work of 2-4 with a really effective AI collaboration workflow. And I don't doubt it'll be 10 in another year.

The current buildout is about to run into the wall that is the electrical grid. AI will become unpopular overnight as soon as people realize that the brand new data center down the road is causing their electric bill to go up.

How likely is that, really? Very often it's capital investments that are the primary cost drivers of utility prices, and the larger customers usually pay a larger share of the capital investments needed to support their usage, which would normally lead to lower marginal prices for everyone else. I've seen this play out multiple times in municipalities when it comes to water: large industrial customers support large municipal capital investments, which through economies of scale drive down the unit cost of water to residential customers. Then the large customers go away, the capacity now far exceeds the demand, and the per-unit cost of water goes way up to pay for the infrastructure despite the huge reduction in demand. A toy sketch of the mechanism is below.
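
Here's that mechanism with made-up numbers; the point is only that fixed infrastructure costs get spread over however many units are sold:

```python
# Toy utility-pricing model: fixed infrastructure cost spread over all demand.
# All numbers are invented for illustration.
fixed_cost = 10_000_000          # annual infrastructure cost, $
residential_demand = 2_000_000   # units/year
industrial_demand = 8_000_000    # units/year

with_industry = fixed_cost / (residential_demand + industrial_demand)
without_industry = fixed_cost / residential_demand

print(f"Per-unit fixed cost with the big customer:  ${with_industry:.2f}")
print(f"Per-unit fixed cost after it leaves:        ${without_industry:.2f}")
# $1.00 -> $5.00: when 80% of demand disappears, the remaining customers
# carry five times the fixed cost per unit -- exactly the water-utility story.
```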

We are going to win a trade war with China! We are going to bring the factories home!

A few companies add a few GW of baseload demand and people start freaking out. The failure to build out electricity generation has got to be one of the largest self-owns ever.

But not if we launch them into orbit! (I can no longer tell if this is crazy bubble talk or the next logical step after Starlink.)

Despite the data center buildout in Virginia, electricity prices there increased at half the national average, as this hilariously written Axios piece reports.

Every bubble has its boosters, and at a late stage they often resort to the ultimate and final bull cope:

“They won’t (and indeed can’t) allow it to crash.”

In each generation this has a different name, but in this cycle the most common is probably some variant of 'the Fed put'. Of course, if elected governments and their appointees were powerful enough to stave off a market crash, these things would never happen, given their typical electoral consequences.

It could well be similar to how the dot-com investments worked out: lots of duds and even scams, but on average I think the return on investing during that era was good, and what it ended up producing was good. The problem is that all of this is very risky, and there are going to be a lot of failed investments in order to have some big payoffs. And of course the big players, and everyone who thinks they have a chance, are going to try to get the government to step in to subsidize their losses. But that doesn't mean the idea of AI is terrible or that we shouldn't be doing it.

The dot-com bubble was a bubble because there were no users. The reasoning was along the lines of '20% of all shoe sales in 2010 will be online. Company X has started to sell shoes online, so in 10 years they will make billions.' They had no sales and were nowhere close to Amazon's supply chain.

ChatGPT has millions of daily users. This is more akin to the boom of companies created when the world got smartphones. Those companies made money.

But, notably, OpenAI isn't profitable.

The dot-com bubble was a bubble because there were no users. The reasoning was along the lines of '20% of all shoe sales in 2010 will be online.'

That really isn’t true. Plenty of dotcom companies like Yahoo had huge numbers of users; Yahoo had 400 million registered users at the peak in 2000 with 60 million monthly users (double the previous year’s figure). Many other dotcom companies had large user numbers too.

And if you look at the non-dotcom companies that still saw huge stock price crashes after the bust, many were businesses with big revenue, like Microsoft ($23bn in revenue in 2000, down 70%+ during the crash, didn’t recover until 2016) and Intel ($34bn in revenue in 2000, down 80%+ during the crash, didn’t recover until 2020). Both Intel and Microsoft were also extremely profitable during this period, contrary to boosters who say all tech stocks at this time lost money or whatever.

The bizarre myth that dotcom was all money into worthless internet businesses with 10 users and inflated traffic figures on zero revenue is peddled by exactly the same people trying to claim that “this time is different”.

ChatGPT has millions of daily users.

The analogy would be an online shoe seller who spends investor capital to sell shoes at half of what they cost, in the hope of cornering the market. Presumably, they would have impressive revenue (if only because every shoe store starts buying from them).

Per WP, OpenAI has revenue of 3.7G$ per year and net losses of 5G$ per year, but their plan is to start burning through money much faster than that: their goal for 2029 is 115G$.

In my mind, this level of investment can only be worth it if they reach a game-changer threshold which far dwarfs anything AI has done so far. If their 2029 investment only yields net profits of 10G$/a, that will be their doom, because then it takes them a decade just to recoup costs. And they will not have a decade before other models inevitably catch up to them (unless they burn through even more money).

Either they build an LLM which can do anything an IQ-160 human can do, or they go bust.
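
To make the payback arithmetic concrete, a minimal sketch: the 115G$ and 10G$/a figures are the ones quoted above; ignoring discounting, ongoing opex, and competition is my simplification.

```python
# Naive payback math for the figures quoted above; no discounting,
# no ongoing losses, no competitive margin compression.
planned_2029_burn = 115e9      # quoted 2029 spending goal, $
assumed_annual_profit = 10e9   # hypothetical net profit per year afterwards, $

payback_years = planned_2029_burn / assumed_annual_profit
print(f"Years to recoup one year of 2029-scale spending: {payback_years:.1f}")
# ~11.5 years -- and that only pays back a single year of spending,
# before rivals catch up and compress margins.
```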

It could well be similar to how the dot-com investments worked out: lots of duds and even scams, but on average I think the return on investing during that era was good

The overall market (the S&P 500) didn't reach its dot-com peak again until 2007, and then it promptly crashed again, not recovering until the 2010s. The Nasdaq didn't hit its 2000 peak again until 2015. Looking at total returns paints a slightly rosier picture, but it was still a long time.

Well, it's certainly a perspective, and I can't say it's definitively wrong. Honestly, they came to some of the same conclusions I have.

If you want to try and make some money on the dystopian future we're hurtling towards, any tech stock that is a target for stimulus or nationalization is going to be a safe bet. The government is practically manufacturing a Ponzi scheme, and the losers will be anyone who doesn't own stock. Google is the most likely target for nationalization, given all the antitrust leverage against them and their strong tech portfolio. Oracle and OpenAI are also likely targets; I expect no resistance from Oracle. OpenAI is a little harder to read, but they're boxed in, so I expect they'll probably end up cutting a deal eventually.

Well, they name different companies than I've chosen, and they fail to mention any exit point. Personally, if the bubble continues that long, I may start taking profits and hoarding cash if J.D. Vance's chances look shaky in 2028. I might miss out on gains if he wins, but I'm at a point in my life, and a point in my wealth, where it's more about preservation than growth. I'd rather miss out on 20-50% gains between Oct 2028 and Feb 2029 than take 50-80% losses over the same period.

I think the other thing they fail to take into consideration is that even if the bubble pops, that doesn't mean there won't be winners, and not merely through government bailouts, though there will always be those too. Lots and lots of money is going to be burned up chasing AI investments that don't pan out, but someone is going to win: either because they find the AI technology that returns on investment, or because they buy it for pennies on the dollar when the company that over-invested in an underwhelming application goes bankrupt.

What you really need is a plan to avoid being one of the five people AI keeps alive to torture for all eternity. I'm a handgun fan myself, but there are lots of perfectly valid options. Rope is a solid choice too. I hear AI will even help you with the knot.

Rope is a solid choice too. I hear AI will even help you with the knot.

Holy fuck lmao

People underestimate the size of the labour market. Replacing 1% of global labour is tens of millions of workers. The cost isn't just the salary; employees are expensive. The AI companies set the bar too high by promising AGI, replacing the majority of all coders, and other promises that won't materialize. Luckily, they don't even have to come close to those lofty goals for AI to have a massive impact.
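
For a sense of scale (the workforce figure is a rough public estimate; the per-worker cost is my assumption):

```python
# Rough scale of "1% of global labour"; the all-in cost per worker is assumed.
global_workforce = 3.5e9         # rough size of the global labor force
displaced = 0.01 * global_workforce
all_in_cost_per_worker = 25_000  # assumed salary + overhead, $/year

print(f"Workers displaced at 1%: {displaced / 1e6:.0f} million")
print(f"Annual labor cost at stake: ${displaced * all_in_cost_per_worker / 1e12:.1f}T")
# ~35 million workers and close to $1T/year, without any AGI required.
```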

But like... is that actually a good thing? Obviously from the point of view of a corporation, being able to reduce the number of employees they need is great. Huge cost savings, even if it's just 1%. And a lot of them could potentially cut a lot more, like the ones who rely on big call centers.

But for the global economy, what happens? I know the traditional answer is that those workers go find some other job and our overall productivity increases. But it's far from clear to me what a million laid-off call center employees are supposed to do instead. Work in the factories? Those were closed or offshored decades ago. Learn to code? Silicon Valley isn't exactly yearning to hire a million junior programmers with no experience these days. In fact, they're also hoping to lay off programmers and make it up with AI instead.

It seems like a lot of areas will just end up with significantly higher unemployment from this.

Unemployment is low and humans have been replaced by machines for 250 years. If anything, we would stagnate if we were unable to boost productivity.

There are other productivity improvements from AI, such as customer service outside office hours, speeding up bureaucratic processes, rapid prototyping, making it easier to start companies, and more. This should improve the overall economy.

Unemployment is low and humans have been replaced by machines for 250 years.

Until a few years ago, mechanization had only been competing with some human skills. Skills which were automation-proof, such as truck driving, were well within reach of the median human. Sure, you would need to employ a lot more people to transport 40 tons of goods using horse-drawn carriages, but this was compensated for by an explosion of shipping (as per-unit costs plummeted).

In the future Altman envisions, humans do not have a moat which prevents AI from eating their lunch. Perhaps there will be a minimum wage labor market for people being biodrones (just wear AR glasses for eight hours and follow the instructions) until robotics or neural interfacing catch up and provide better options. But in the long run, being a sex worker for an AI billionaire with a human fetish might be the only paid profession left, and that will obviously not scale to billions of people.

But in the long run, being a sex worker for an AI billionaire with a human fetish might be the only paid profession left, and that will obviously not scale to billions of people.

I feel like, if we're far along in the scifi AI future where the oldest profession becomes the final profession, then this is likely to scale not just to billions but trillions, and there would be plenty of incentives by these billionaires to create the technology that enables this.

Why limit yourself to a harem of mere hundreds or millions when that doesn't differentiate you from other billionaires who could do the same? Surely having a billion living, breathing, suffering humans who are willing to go through the experience of having sex with you is higher status than having mere millions. And certainly more than a harem of any number of unthinking, unfeeling, unsuffering android sex bots, no matter how "hyperpalatable" (a la modern fast food relative to pre-historic food, or modern porn relative to pre-historic sexual content) these sex bots might become.

Of course, having that harem not require money would be even higher status, so being one of billions of stay-at-home wives to a single Morbillionaire might actually be more accurate as the final profession, not prostitution.

And, also of course, if we invent consciousness and the ability to suffer in AI, then all bets are off.

The problem with AI and AGI (IMHO) is that without any ability to assess reality independently of what humans tell it, AI is just floundering in a sea of complete horseshit. For all I know, though I doubt it, LLMs "hallucinate" so much because they really are alive, and they just assume making up random bullshit to enslave others to your will is language at its most fundamental. And they might not even be wrong.

Also, massive reams of labor are probably kind of pointless in the present metagame but kept around for a combination of ego reasons and because people are sold fantasies about things like 'Marketing' and 'Human Resources', which are real phenomena but likely hugely overallocated to in man-hours (and I've worked marketing roles; the sort of insane digital marketing KPI hacking that goes on would make an engineer vomit).

I don't think that AI will, in the medium-to-long term, replace these roles so much as prompt a reshuffle, where a lot of the more extraneous roles get rebranded, or people just kind of slide around until they find new roles in wishy-washy nothingness that AI is less capable of servicing.

(and I've worked marketing roles; the sort of insane digital marketing KPI hacking that goes on would make an engineer vomit).

Ahaha dude thank you for saying this. I'm in my first digital marketing role now and WOW, it's so bad. People just acting as if churning out quantity of words is useful in any way whatsoever. With AI it has become even worse.

I can see why marketing has such a bad reputation.

"That's a no from me, dawg" as the great bard of our time, Randy Jackson, would say.

You actually did the article's author a big favor with your down-select of sections. Buried in a lot of emotionality are some interesting economic and geopolitical points for debate.

But so much of the article is full of things like this:

Disrupt the disruptors. Boycott companies that don't demonstrate integrity. The future isn't lost yet, we can still create the world we deserve.

How can a company "demonstrate integrity"? This is the same wishy-washy style of assertion as "be an ally" or "speak truth to power." It's just so sophomoric.

If I have to pick just one cognitive and logical failing of the article, it's that it falls directly into the fundamental attribution error trap multiple times:

These people think AI is the last thing humans will invent

and

The people in power aren't willing to risk that outcome, and they've been bewitched by the idea of being the only ones to have superintelligence, so they're willing to go all-in to win big and fast.

and

Remember that these people place incredible value on being the first to superintelligence

and

The dynamic in the valley is that the people at the top know the game already, and they intend to exploit it to its fullest

Then you also have these kind of whoppers:

I wouldn't be surprised if Larry Ellison already has a contract signed in blood for this stashed away somewhere to whip out once he knows he can get away with implementing it.

and, in the "conclusion":

We can fight back though, we already have the weapon of our liberation: the power of the purse. You're not powerless. Boycott campaigns forced Disney to walk back Jimmy Kimmel's suspension, that was our power in action. If you care about a just world, don't do business with unethical companies. Demand that the titans of tech change, and if they don't, stop feeding them your dollars.

Then there's the truly tinfoil-hat level of conspiracy thinking:

They've been gutting the IRS and talking about reforming the tax code for a long time, but the plan I see them positioning for is sinister. By raising the nominal tax rate at the same time that they reform the tax code, they can engineer in quasi-legal loopholes that the wealthy can take advantage of by design, probably involving digital coins. They get good talking points ("time to tighten our collective belts for the good of the nation," etc) while letting their friends dodge most real responsibility.

Team Trump (which is really being controlled by the Silicon Valley oligarchs) is going to revamp the IRS in order to support a crypto investment scheme? They're going to pull this off under the radar yet in plain sight? And the tens of thousands of bureaucrats at the IRS, FTC, SEC, etc. that would need to be "in" on this scheme are just going to be unaware of it happening? Or are they in on it? And what about when the Big Banks get wind of this? I thought they controlled Congress. No, wait, that's Silicon Valley. Or Big Oil. No, I meant Big Pharma.

While this is above the median level of "orange man bad / big tech bad," it isn't much above that level. I don't know what this author's politics are, and, unlike him, I will not presume to know his personal cognitive state or full internal belief and value structure.

On a content-only level, I look at this as another flavor of AI doomerism. This isn't paperclip-machine doomerism; this is economic-theory doomerism: "We've put so much money into AI that it has to work out!" But money doesn't just disappear if a business fails. If the business burnt through all its money, that's probably bad for that business's particular investors, but it also means the money went somewhere: other vendors, other businesses. The market moves the money the best it can. Of course, I'll admit that this isn't necessarily a great outcome. It's not as if bubbles and overinvestment are good things in the long run, right? Regardless, while growth may flatline (which is bad), the money is still moving. The reason 2008 was so frightening was that it looked like money might actually stop moving. A system-level credit crunch means that even really good and obvious investments, or simple spending, can't happen because of a lack of liquidity.

But back to the main economic point: are we so "all in" on AI that if it "fails to deliver" we're 100% giga-fucked? Sure, if we keep all of these definitions slippery and uncertain, why not. On the "failure to deliver" point, I don't see any real rubric or threshold from the author beyond "you better hope and pray that AI delivers a magical transformation." Okay, so we need the ROI on AI to be approximately one Abracadabra. Got it. If we don't get to this magical level of returns, what, exactly, happens? Do all the BigAI firms go insolvent overnight? Locked-out employees, broken keycards? Do the new datacenters and chip fabs just immediately fall into a state of disrepair and end up looking like the steel mills outside of Youngstown, Ohio? Again, I'll be charitable here and say that if the BigAI bubble bursts hard, it probably is recession time for a while. But the money doesn't evaporate, and all of the human capital doesn't commit suicide. There is a VERY direct line to be drawn from the dot-com bubble of the late '90s and early 2000s all the way to the Silicon Valley rocketship beginning in... 2009? Or earlier? Google IPO'd in 2004, IIRC.

Doomerism isn't better than irrational exuberance just because it's the inverse. This is the cowardice of cynicism, and of pessimism more generally. "I hope I'm wrong, but I'm probably not (unsaid: because I'm just so dang smart!)" isn't the flex people think it is. You're prognosticating a negative outcome, probably as a means of preemptive emotional self-satisfaction. I'm not against hearing about downsides to AI; in fact, I've posted about them myself at least twice. All I'm looking for is a cogent enough argument about the hows of Things Falling Apart.

An excellent takedown of a really dumb article. Good job. I hate it when pundits try to describe and predict our unbelievably complex national/global economy using a couple of pithy ideas ("We're investing too much into AI!" and "Silicon Valley is allied with Trump!"), exaggerated beyond the point of all usefulness. If we overspend on AI, well then darn, we've somewhat misallocated our abundant resources. Maybe we'll optimize better next decade.

Part of the point about the money in AI is that the current financial engineering means a lot of money is coming into existence on paper (especially in these circular 'Company A invests in Company B, but Company B then immediately uses that money to buy compute from Company A, valued in accordance with the new benchmark' deals) and not really escaping the loop. This is now somewhat obscuring a state of pretty clear stagnation and weakness in the broader economy. A toy illustration is below.
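
Here's that circularity with invented numbers; the point is what gets booked versus what actually moves:

```python
# Circular deal sketch: A invests in B, B immediately buys compute from A.
# All numbers are invented; no real companies' books are modeled here.
a_cash, b_cash = 100, 0

a_cash -= 50; b_cash += 50   # A invests $50 in B (A books an equity stake)
b_cash -= 50; a_cash += 50   # B spends the $50 on A's compute (A books revenue)

print(a_cash, b_cash)  # 100 0 -- net cash positions are unchanged,
# yet A now reports $50 of new revenue plus a stake in B "valued"
# at the price the round itself just set.
```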

This is a good point. Valuation voodoo can actually lead to meaningful damage when a bubble pops because it isn't an actual representation of cash flow.

The elites of Silicon Valley have cozied up to Donald Trump in a way that's unprecedented in the history of modern democracy.

Can we get a date range on that? Because it makes a big difference if you mean "modern as in the past fifteen years" or "modern as in the past one hundred and fifty".

They've lined the pockets of his presidential library foundation, supported his White House renovations, paid for his inauguration

Allow me a moment to be shocked, shocked! that never ever before has a politician, a political party, or political campaigns received bucketloads of cash from Silicon Valley and/or other business types.

bucketloads

$60M over two decades, according to this journalist. Care to guess how much Elon contributed last year?

I will also note that none of those donors were awarded an entire government department and a broad mandate to purge the others.

Look, I’m not impressed by the article either, but pearl-clutching and correctness aren’t mutually exclusive.

This is your regular reminder that the current administration is still working to undo the thousands of bans that YouTube issued at the Biden administration's behest. I assume the OP means "modern as in the past five minutes."