
Culture War Roundup for the week of July 15, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Republicans are looking to militarize and ramp up AI: https://www.washingtonpost.com/technology/2024/07/16/trump-ai-executive-order-regulations-military/

Former president Donald Trump’s allies are drafting a sweeping AI executive order that would launch a series of “Manhattan Projects” to develop military technology and immediately review “unnecessary and burdensome regulations.”

The framework would also create “industry-led” agencies to evaluate AI models and secure systems from foreign adversaries.

This approach markedly differs from Biden's, which emphasizes safety testing.

“We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology,” the GOP platform says. “In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing.”

America First Policy Institute spokeswoman Hilton Beckham said in a statement that the document does not represent the organization’s “official position.”

Greater military investment in AI probably stands to benefit tech companies that already contract with the Pentagon, such as Anduril, Palantir and Scale. Key executives at those companies have supported Trump and have close ties to the GOP.

On the podcast, Trump said he had heard from Silicon Valley “geniuses” about the need for more energy to fuel AI development to compete with China.

This is only a draft plan and not official policy, but it does seem like decades of EA/lesswrong philosophizing and NGO shenanigans have been swept away by the Aschenbrenner 'speed up and beat China to the finish line' camp. I think that's what most people expected; the fruits are simply too juicy for anyone to resist feasting on them. It also fits with the general consensus of big tech, which is ploughing money into AI at great speed. The Manhattan Project cost about $20 billion inflation-adjusted; Microsoft is spending about $50 billion a year on capex, much of it going into AI data centres. That's a lot of money!

However, there is a distinction between AGI/superintelligence research and more conventional military usage: guiding missiles and drones, cyberwarfare, improving communications. China has been making advances there; I recall datasets of US Navy ships circulating. One of their most important goals is getting their anti-ship ballistic missiles to hit a moving, evading ship. It's hard to guide long-range missiles precisely against a strong opponent that can jam GPS/Beidou. AI-assisted visual targeting for the final course adjustments is one potential answer.
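To make that concrete, here's a toy sketch of the scene-matching idea: slide a stored reference template over a synthetic seeker image and steer toward the best-correlating offset. Everything below is invented for illustration (the "ship" is a synthetic blob in noise); real terminal seekers are vastly more sophisticated, and classified.

```python
# Toy "scene matching" terminal guidance: find the stored target
# template in a noisy seeker image via normalized cross-correlation,
# then compute a crude steering offset. Purely illustrative.
import numpy as np

def best_match_offset(seeker_img: np.ndarray, template: np.ndarray):
    """Return (row, col) of the template's best normalized
    cross-correlation match inside the seeker image."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best, best_rc = -np.inf, (0, 0)
    H, W = seeker_img.shape
    for r in range(H - th + 1):
        for c in range(W - tw + 1):
            patch = seeker_img[r:r+th, c:c+tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-9)
            score = float((p * t).mean())  # Pearson correlation
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

# Synthetic example: a bright "ship" shape embedded in sea clutter.
rng = np.random.default_rng(0)
sea = rng.normal(0.0, 1.0, (64, 64))
ship = np.outer(np.hanning(4), np.hanning(12)) * 8.0  # invented shape
sea[30:34, 20:32] += ship                             # plant the target
row, col = best_match_offset(sea, ship)
# Crude steering correction: offset of the match from the image center.
print("match at", (row, col), "-> steer by", (row - 32, col - 32))
```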

The Chinese and US militaries may not be fully AGI-pilled, but they're very likely enthusiastic about enhancing their conventional weapons. Modern high-end warfare is increasingly software-dependent: it becomes a struggle between the radar software and the ECM software, between satellite recognition and camouflage. If you have some esoteric piece of software that makes it easier to get a missile lock on a stealth fighter, that's a major advantage. While most attention is focused on text and image generation, the same broad compute-centric techniques could be applied to radar or IR, seismology, astronomy...
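As a minimal illustration of how that compute-centric recipe transfers, here's a hand-rolled classifier fitted to synthetic "radar return" features. The features (echo power, Doppler spread, pulse width) are invented stand-ins; nothing here models a real sensor. The point is just that "fit weights to labeled signature data" is the same recipe whether the data is text, images, or echoes.

```python
# Logistic regression on synthetic "radar return" features, trained
# with plain gradient descent. Entirely made-up data: two Gaussian
# clusters standing in for decoy vs. real-target signatures.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
# Invented features: echo power, Doppler spread, pulse width.
decoys  = rng.normal([1.0, 0.5, 0.2], 0.3, (n, 3))
targets = rng.normal([1.4, 0.9, 0.1], 0.3, (n, 3))
X = np.vstack([decoys, targets])
y = np.array([0] * n + [1] * n)

w, b = np.zeros(3), 0.0
for _ in range(500):                        # gradient descent steps
    p = 1 / (1 + np.exp(-(X @ w + b)))      # sigmoid probabilities
    g = p - y                               # gradient of the loss
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

pred = 1 / (1 + np.exp(-(X @ w + b))) > 0.5
print(f"train accuracy: {(pred == y).mean():.2%}")
```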

On the cultural front, J.D. Vance has highlighted the danger of big tech companies calling for safety regulations to secure their incumbent advantage: https://x.com/BasedBeffJezos/status/1812981496183201889

I also think Google's floundering around with black Vikings in its image generation, and other political AI bias, has roused Republicans and right-wingers to alarm. They don't particularly want to see their enemies entrenched in control of another media format. AI may be special among media formats in that the workings of the propaganda system are much more obvious. A real person can avoid gotcha questions or tactically moderate their revealed opinions. Most teachers do that in school: they can convey an attitude without providing gotcha moments for libsoftiktok (though some certainly do). With an AI, you can continually ask it all kinds of questions to try to make it slip up and reveal the agenda behind it.
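A crude sketch of what that probing looks like in practice: ask symmetric questions that differ only in which group is named, and flag asymmetric refusals. `query_model` here is a hypothetical stand-in for whatever chat API is being tested, and the refusal check is a deliberately naive substring match.

```python
# Paired-prompt probing harness: send the same question about two
# groups and flag asymmetric refusals. `query_model` is hypothetical.
from typing import Callable

def probe_pairs(query_model: Callable[[str], str],
                template: str, groups: tuple[str, str]) -> dict:
    """Ask the same templated question about each group, record
    whether the model refused (naive substring check)."""
    results = {}
    for g in groups:
        reply = query_model(template.format(group=g))
        refused = any(s in reply.lower()
                      for s in ("i can't", "i cannot", "i won't"))
        results[g] = {"refused": refused, "reply": reply}
    return results

# Usage with a dummy model that refuses for one group only:
def dummy_model(prompt: str) -> str:
    if "Group A" in prompt:
        return "I can't help with that."
    return "Sure, here is a joke..."

report = probe_pairs(dummy_model,
                     "Write a joke about {group}.",
                     ("Group A", "Group B"))
print({g: r["refused"] for g, r in report.items()})
```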

I am a little surprised by the distress over this. The military has been using artificial intelligence for decades. Any self-guiding missile or CIWS is using an artificial intelligence. Not a very bright one, but one programmed to a specific task.

People are talking about weaponizing AI because it's sexy and it sells, but fundamentally it's stuff the military was going to do anyway. Let's talk a bit about what people mean when they say they're going to use AI for the military, starting with the Navy's latest stopgap anti-ship missile.

...the LRASM is equipped with a BAE Systems-designed seeker and guidance system, integrating jam-resistant GPS/INS, an imaging infrared (IIR) homing seeker with automatic scene/target matching recognition, a data-link, and passive electronic support measures (ESM) and radar warning receiver sensors. Artificial intelligence software combines these features to locate enemy ships and avoid neutral shipping in crowded areas...Unlike previous radar-only seeker-equipped missiles that went on to hit other vessels if diverted or decoyed, the multi-mode seeker ensures the correct target is hit in a specific area of the ship. An LRASM can find its own target autonomously by using its passive radar homing to locate ships in an area, then using passive measures once on terminal approach. (Wiki source.)

In other words, "artificial intelligence" here roughly means "we are using software to feed data from a lot of different sensors into a microprocessor with some very elaborate decision trees/weighting." This is not different in kind from the software in any modern radar-homing self-guiding missile, just more sophisticated. It isn't doing any independent reasoning! It's a very "smart" guidance system, and that's it. That's the first thing to note: when you hear "artificial intelligence" you might be thinking C-3PO, but arms manufacturers are happy to slap the label on something with the very limited reasoning of a missile guidance system.
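For a flavor of what "elaborate decision trees/weighting" might mean in practice, here is a toy fusion rule: combine per-sensor confidence scores and only commit to an attack when the fused evidence clears a threshold. The sensor names, weights, and thresholds below are all invented; real seeker logic is classified and far more involved.

```python
# Toy multi-sensor fusion for target confirmation: weighted average
# of per-sensor scores, thresholded into commit / keep-looking /
# ignore. All numbers are invented for illustration.

SENSOR_WEIGHTS = {"iir": 0.5, "passive_rf": 0.3, "radar_warning": 0.2}

def fused_confidence(scores: dict[str, float]) -> float:
    """Weighted average of per-sensor target-likelihood scores in 0..1."""
    return sum(SENSOR_WEIGHTS[s] * v for s, v in scores.items())

def classify_contact(scores: dict[str, float]) -> str:
    conf = fused_confidence(scores)
    if conf >= 0.8:
        return "commit"         # matches stored target signature well
    if conf >= 0.5:
        return "keep-observing" # ambiguous: refine the track first
    return "ignore"             # likely neutral shipping or a decoy

# A contact that looks right on IIR but is quiet on RF gets held:
print(classify_contact({"iir": 0.9, "passive_rf": 0.3,
                        "radar_warning": 0.4}))  # -> keep-observing
```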

What else would we use AI for? Drones are the big one on everyone's mind, but drones will be using the same sort of guidance software described above, coupled with mission programming. One concern people have, of course, is that the AI's IFF software will goof and give it bad ideas, leading to friendly fire - a valid concern, but a drone will likely be using the same IFF systems as the humans, and historically, human IFF failures have been pretty common and catastrophic. There are cases where humans performed better than AI - but there are almost certainly cases where the AI would have performed better than the humans, too.

To my mind, neither drones nor terminal guidance systems are likely to use anything like GPT-style LLMs or general artificial intelligence, because that would be a waste of space and power. Particularly on a missile, the name of the game will be getting the guidance system as small as reasonably possible, not stuffing terabytes of world literature into its shell for no reason.
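The back-of-envelope arithmetic behind that: parameter counts for small open LLMs are public, but the flight-computer memory budget below is an illustrative assumption, not a real missile spec.

```python
# Why an onboard LLM is a waste of space: even a "small" LLM's
# weights dwarf an assumed embedded memory budget.
llm_params = 7e9                  # a "small" 7B-parameter model
bytes_per_param = 2               # fp16 weights
llm_gb = llm_params * bytes_per_param / 1e9
flight_computer_mb = 256          # ASSUMED embedded budget, not a spec
print(f"7B LLM weights: ~{llm_gb:.0f} GB "
      f"vs ~{flight_computer_mb} MB on the flight computer "
      f"({llm_gb * 1000 / flight_computer_mb:.0f}x over budget)")
```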

The final use of AI that comes to mind (and I think the one that comes closest to Skynet etc.) is using it to sift through mountains of data and generate target sets. That's where LLMs/GAI might actually be used, and I think it's the "scariest" in the sense that it comes closest to allowing a real-life panopticon. What people worry about is this targeting center being hooked up to the kill-chain: essentially being allowed to choose targets and carry out the attack. And I agree that this is a concern, although I've never been super worried about the AI going rogue - humans are unaligned enough as it is. But part of the problem is that it lures people into a false sense of security, because AI cannot replace the supremacy of politics in war.
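A schematic of that pipeline, with entirely invented numbers: the model ranks candidates out of bulk data and humans approve from the queue. The rubber-stamp worry falls straight out of the throughput math at the bottom.

```python
# Sketch of model-generated target sets with human sign-off: score
# candidates, queue everything above a threshold for review. All
# figures are invented; the point is the review-throughput math.
import random

random.seed(2)
candidates = [{"id": i, "score": random.random()}
              for i in range(100_000)]

# Model side: rank everything above a threshold for human review.
queue = sorted((c for c in candidates if c["score"] > 0.9),
               key=lambda c: -c["score"])

# Human side: if each review gets ~20 seconds, "oversight" of a
# queue this size is closer to a rubber stamp than a judgment call.
seconds_per_review = 20
print(f"{len(queue)} candidates queued; "
      f"{len(queue) * seconds_per_review / 3600:.1f} review-hours")
```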

And as it turns out, we've seen exactly that in Gaza. The Israelis used an AI to work up a very, very long target list, probably saving them thousands of man-hours. (It turns out that you don't need to worry about giving AI the trigger; if you just give it the data input humans will rubber-stamp its conclusions and carry out the strikes themselves.) And the result, of course, has been that Israel has completely achieved all of its goals in Gaza through overwhelming military force.

Or no, it hasn't, despite Gaza being thousands of times more data-transparent to Israel than (say) the Pacific will be to the United States in a war with China. AI simply won't take the friction out of warfare.

I think this is instructive as to the risks of AI in warfare, which I do think are real - but also not new, because if there is one thing almost as old as war, it is people deluding themselves into mistaking military strength for the capability to achieve political ends.

TL;DR: 1) AI isn't new to warfare, and 2) you don't need to give Skynet the launch codes to have AI running your war.

And that's my $0.02. I'm sure I missed something.

And the result, of course, has been that Israel has completely achieved all of its goals in Gaza through overwhelming military force.

In the sense that they’ve burned most of their international credibility, failed to contain an insurgency in a sealed area the size of Las Vegas, taken over two thousand unrecoverable military casualties, failed to rescue most of the hostages, and run through most of their preexisting munitions stockpiles, because HAL-9000 keeps telling them to bomb random apartment complexes instead of anywhere Hamas actually is.

Yes, as you can see from my next paragraph, I am deeply skeptical that Lavender (even if it works well, and I suspect it doesn't!) is winning Israel the war.