This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I've all of a sudden seen AI blackpilling break out into the normie space around me. Not so much about FOOM, paperclipping, or Terminator scenarios, but about the sudden disruptive nature of AI, and especially about economic upheaval. I'm not exactly sure why. Veo3 has been part of it.
For example, coworkers suddenly aware that AI is going to completely disrupt the job market and economy, and very soon. People are organically discovering the @2rafa wonderment at how precariously, even past-due, a great deal of industry and the surrounding B2B services industries stand to be dominoed over. If my observation generalizes, that middle-class normies are waking up to the doompill on AI economic disruption, what is going to happen?
Let's consider it from two points of view: (1) they're right, and (2) they're wrong. (1) is pretty predictable fodder here: massive, game-changing social and economic disruption, with a difficult-to-predict state on the other side.
But is (2) that much less worrisome? Even if everyone is 'wrong', and AI is somehow not going to take away 'careers', people worrying en masse that it will can still manifest serious disruption. People are already starting to hold their breath: stopping hiring, stopping spending, running Hail Marys, checking out.
Somehow, it's only senior management who doesn't realize the impact. (They keep framing it as 'If we can cut costs, we'll come out on top', instead of following the logical conclusion: if everyone stops spending, the B2B economy collapses.) I have a nontechnical coworker who recently recreated a complex business intelligence tool we purchased not long ago, using readily available AI and a little bit of coaching. He had an oh-shit moment when he realized how cannibalized the software industry is about to get. The film industry seems about to completely topple, not because Veo3 will replace it immediately, but because who's going to make a giant investment in that space right now?
I suspect the macro economic shock is going to hit faster than most are expecting, and faster than actual GDP gains will be made, but maybe I'm just an idiot.
Presumably, if one wanted to, one could just firewall the main API servers. The big players are well known and with the possible exception of full-size DeepSeek, local models are not powerful enough to be very useful.
I'm not in favour of it, but I don't think there's anything stopping a majority voting for this. The only reason AI hasn't been stomped on is the arms race and the fact that overwhelmed first-world countries like the UK see it as the key to getting back in the black. Neither of these is an immutable fact of the universe.
AI is the boot, it's going to be doing all the stomping. Microsoft, Amazon, Google, Facebook, Twitter are some of the most powerful companies in the world, they have a gravity well that pulls everyone else behind them. Shut down AI, what does that do to your stock portfolio? Your pension? What does that do for your reelection campaign, is the other guy going to get the algorithm on his side, millions in his warchest? How do you coordinate against AI when all major internet forums are looking to AI as a revenue source?
Or the 20 million people spending hours a day on character.ai, they're not going to let their wAIfus and husbandos go without a fight.
Not to mention that everyone else in business has some kind of interest in AI. The manufacturers want to automate their factories and improve their logistics, services companies want to boost productivity.
And there's the arms race, as you mention: DARPA, the Pentagon, and leading lights in the Chinese Communist Party. That alone is an insoluble problem for decels; what do you say to the paranoia of American strategists? Without a technological advantage, the US doesn't stand a chance against China. They're certainly not going to let China get ahead. And China is not going to stop: it's clearly identified AI as a key technology to advance in. The public in China love AI; they're very optimistic about it.
Governments couldn't care less about implementing unpopular policies, mass migration for one. Or ending the death penalty. Or invading foreign countries for dubious reasons. If they see it as a core priority, they'll make it happen regardless of what people think. AI is almost certainly far more seductive than any of these things, with far stronger institutions backing it. I'm very bearish on decels having any success whatsoever. Remember PauseAI? Basically nothing happened. It was a squib, hundreds of billions in capital was redeployed to rapidly advance AI in 2023, the exact opposite of what they wanted.
I'm thinking of the case where, say, 70% of people become unemployed or suffer a sharp reduction in status. I don't like mass migration either, or the repeal of the death penalty, but the opposition to those is ~50% of the population max, and most of those are pretty wishy-washy about it. Governments hate disruption more than anyone; if too much happens too fast, I can entirely see the government just bringing the hammer down, like China did with Ma. There's nothing technologically inevitable about cloud-based AI remaining available. And once it looks like one side of the China/America divide might start dialling this stuff down, I can well imagine their opposite number gratefully following suit.
In short, a government with unanimous popular backing is still the biggest beast out there. If it comes to the kind of unemployment figures above, I think AI companies will bend the knee or be broken. Obviously, if things remain as they are, the future is much murkier.
If unemployment rises to 70%, then AI can also be used for combat power and war economy work.
Imagine a swarm of AI-equipped drones, faster and more coordinated than anything in Ukraine today. Imagine the ground-based robots they're trying out but with a machinegun on top: https://x.com/XHNews/status/1921201829066797357
Automated trucks for logistics, all coming from automated factories. That's all eminently possible with 70% unemployment, plus more exotic stuff like satellite swarms spying on everybody in real time, decapitation strikes with novel nerve agents we can't even detect.
How is a human military going to fight that, especially when AI is going to be deeply embedded in their communications? Perhaps a government or sections from a government will merge with a leading AI company or nationalize them earlier in the game but I can't see how they'd successfully shut them down without rendering themselves globally irrelevant. If they wait until 70% of people are unemployed, they might just get crushed.
What do you do if 70% of people are made obsolete? Shut down AI and send them to do useless work? Put tariffs on AI-made products overseas? Seems like delaying the inevitable.
Neither superpower wants to slow down, Trump's America explicitly wants to win the AI race with Stargate while China has allocated considerable effort to developing AI. It's bipartisan in America, Biden was also keen to restrict GPUs leaving the US. I don't think there's any anti-AI faction in China at all, I'm not aware of a single evil AI in the entire Chinese cultural corpus. We haven't even stopped the 'randomly develop gain of function megadeath viruses for no good reason' arms race after a megadeath virus leaked, so what are the chances of stopping the 'immense power and wealth' race after it gives out immense power and wealth?
There's AI and there's AI. People detection is a simple matter which you can do on-chip. Anything like that in the next 5-10 years, like automated coding or automated logistics, is going to rely heavily on a handful of APIs (roughly four at present) provided by a handful of companies. China could shut down LLMs in China tomorrow if it wanted to: firewall OAI and Anthropic, close down DeepSeek. Boom, done. America would have a slightly harder time, but it's basically straightforward.
Neither wants to, yet. But if the societal disruption starts to become uncomfortable, they can and they may well. I'm not talking about evil AI, I'm talking about obvious and destabilising social disruption. More than immense wealth and power, governments like stability. China and America are quite capable of running private military AI research on things like YOLO whilst mutually deciding that giving public/corporate society access to AGI is too disruptive to tolerate.
70%, sure, maybe. But what happens if it's 'just' 2008 levels of sudden disruption? And then a small stagnant window before another dive? I am more worried about falling into a series of local minima, where the immediate 'solutions' get us into a worse scenario.
In some respects, 70%+ employment disruption or a Skynet scenario could be better, by creating a clear, wide consensus on the problem and the necessary reaction. I am more worried about a series of Wile E. Coyote moments: getting out over a cliff before he realizes it, falling, then repeating as he tries to get ahead of the next immediate shift.