
Culture War Roundup for the week of September 16, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

A criminal suspect has lost a fight against a police robot. First he tried shooting it, then covering it with a sheet. The robot tear-gassed him, then ran him over (after he was shot by a sniper).

https://youtube.com/watch?v=ZI1j5GPuSvw

This happened in Texas where in 2016 the police used a suicide bot to blow up a BLM terrorist.

Humans are currently in the position where we can basically dominate everything else. There are some animals said to be intelligent, like dolphins, crows, octopuses, honey badgers, etc., but their evolution has basically hit a ceiling and they're never going to get to the point of carrying around tear gas and AR-15s. So their intelligence doesn't bother us. But soon enough there will be tens of millions of robots walking around, each of them smarter than the smartest human. You will lose basically every challenge against them. What then? Nobody voted for this and there's no opting out. Fun.

I don't think this has anything to do with AI. That robot is remote-controlled directly by a human who can see it. It's basically a fancy remote-control car. As was the one in the 2016 incident.

The sheet-covering technique would probably have been pretty effective against an actual AI robot. It wasn't because it's actually controlled by a human who can see it, so the robot's camera being unable to see anything is only a minor hindrance.

We can't currently build any AI that's nearly as smart as a human, even with datacenters full of computers. I don't think we're going to have an actual independent robot that's smarter than a potted plant anytime soon.

As the sibling comment says, cheap drones are indeed concerning. Though AFAIK they're all also remotely controlled, with very minimal to no autonomy and nothing resembling intelligence. Drones are also pretty sharply limited in power budget and payload. The air is indeed somewhat more friendly to current or plausible future AI, but I'd think it would first come to high-dollar, high-speed jet fighters. Air-to-air combat has far fewer variables and more benefit from being able to pull high Gs, and those aircraft have much better payload and power budgets.

Why do they need to walk?

The air is the natural domain of the robot, as we see in Ukraine. The Russians and Ukrainians have been toying with ground combat robots, but they're throwing industrial quantities of aerial drones at each other. Some explode, some drop bombs, some are wire-guided to bypass electronic warfare, some have jet engines for long endurance and long range. They have amazing camera zoom; they can pick out targets day and night.

Flying kamikaze drones are very hard to deal with. You can dodge one dropped grenade or club one away with your rifle. But three? Five? You're going to die. These things are cheap. Onboard AI guidance and swarming will make them even more dangerous.

It's only a matter of time before machines take over high-end airpower too. Humans are expensive to train, need all kinds of life support, and suffer under g-forces. We were not made to careen around in the upper atmosphere at 9G or above; that's not where our skills lie. We're ground creatures; I bet that walking around and close quarters will be the last domains to fall to AI.

I saw a video on Twitter where a Ukrainian or Russian guy emptied his clip at some tiny drone and then finally eliminated it by hitting it with his rifle.

In the not-so-distant future, the drones will be much smaller, much more intelligent, and much cheaper. Imagine trying to defend yourself against 10 bullet-size drones flying towards your face with a small but lethal explosive charge at the tip.

In any future war, China wins because they will be able to make 10x as many drones as the U.S. coalition. The western coalition might counter by setting up self-replicating drone factories, which would be a fun development.

Obviously assassination becomes trivial in this environment as well.

In other news, Microsoft is starting Three Mile Island nuclear plant to obtain energy for its AI systems...

In the not-so-distant future, the drones will be much smaller, much more intelligent, and much cheaper. Imagine trying to defend yourself against 10 bullet-size drones flying towards your face with a small but lethal explosive charge at the tip.

Setting aside that payload is sharply limited by battery capacity and that the current form factor is about as good as it's likely to get (and therefore you might as well worry about being attacked by nano-replicators or some other sci-fi thing), some kind of net gun with IR sights would probably do the job just fine?

It's not that drones are invincible so much as that humans are very, um, vincible.

You can waste hundreds of cheap, replaceable drones to take out one very expensive, difficult to replace human. At some point, sending humans into the combat zone will be suicidal.

Worryingly, drones could be used against civilian targets.

Humans are actually cheap and expendable -- the fact that the US military does not consider them so is an historical aberration, and there's no guarantee that it will extend to modern warfare -- as we indeed see if we look at the attitudes of both sides in the current Ukrainian conflict. (They are also pretty good at killing civilians, so long as you dehumanize them enough first; see both current major conflicts, particularly Israel. Not sure what that has to do with anything, though; humans still need to fly/deploy the drones.)

How cheap do you think these drones can be that 'hundreds' of them will be a good tradeoff for a single muddy conscript? I think you've been infected with FOOM/AI doomer rhetoric. What would you specifically do to get from the current state of the art to the point where sending humans into the combat zone is suicidal?

No, humans are not actually cheap and expendable, even if you approach this dispassionately. Consider the economic cost to society of raising a soldier in a modern society: food, education, child care, opportunity costs for the parents. It all adds up. How much? Online sources estimate around $300k USD in direct monetary expenditure, and the true societal cost is probably much higher due to the opportunity costs of the "village" raising the child, but let's go with the $300k figure for now. Even before accounting for government training, equipment, and logistics footprint, it's already impressive. Say a Russian, Ukrainian, or Chinese child takes less to raise, maybe only $30k USD in societal costs; that's still a lot. If you can achieve a kill while expending a hundred $300 drones, it's still worth it.
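To make the trade concrete, here is a quick back-of-the-envelope sketch. The $300k/$30k rearing estimates and the $300 drone unit price are the rough figures quoted above, not sourced data:

```python
# Break-even arithmetic for "expend cheap drones to take out one soldier".
# All figures are rough estimates from the discussion above.
COST_SOLDIER_WEST = 300_000  # societal cost to raise one Western recruit (USD)
COST_SOLDIER_LOW = 30_000    # lower-bound estimate for other countries (USD)
COST_DRONE = 300             # hypothetical cheap FPV drone unit price (USD)

def drones_per_soldier(soldier_cost: int, drone_cost: int = COST_DRONE) -> int:
    """How many drones could be expended per kill while still breaking even."""
    return soldier_cost // drone_cost

print(drones_per_soldier(COST_SOLDIER_WEST))  # 1000
print(drones_per_soldier(COST_SOLDIER_LOW))   # 100
```

Even at the pessimistic $30k figure, a hundred $300 drones per kill is exactly break-even, which is the point being made.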

Neither the Russians nor the Ukrainians see it that way -- I wouldn't say it's a great attitude, but it's extremely lindy. You also still need a hundred-ish guys to drive the drones, so it's not like it makes your army any smaller -- it just allows cowards to participate in the bloodshed, I suppose.

No, the main difference is that the costs for humans were mostly already paid (as they were for old Soviet stocks), but future drones need to be manufactured, and most of the production is outside of your country.

Well, from what I can read online, the Russians are paying upwards of 1.9m rubles (21k USD) to people signing up to become soldiers, and 5m rubles (55k USD) to their families if they die. That's certainly more than the 30k number I gave earlier.

The thing is, right now you need 100 guys to drive the drones, but once they become autonomous, or semi-autonomous (imagine a drone leader: you just pilot that one, and the others mimic it and stay in formation autonomously), it'll take a lot less manpower to achieve the same ends.
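A minimal sketch of the leader-follower scheme described above: one piloted leader, with followers holding fixed offsets from it. The 2D simplification and all names here are my own illustration, not any real drone API:

```python
# Leader-follower formation: only the leader is piloted; each follower
# continuously targets a fixed offset from the leader's current position.
from typing import List, Tuple

Vec = Tuple[float, float]  # (x, y) position in metres, 2D for simplicity

def follower_targets(leader_pos: Vec, offsets: List[Vec]) -> List[Vec]:
    """Where each follower should fly, given the leader's current position."""
    lx, ly = leader_pos
    return [(lx + ox, ly + oy) for ox, oy in offsets]

# A four-drone wedge trailing behind the leader:
wedge = [(-5.0, -5.0), (5.0, -5.0), (-10.0, -10.0), (10.0, -10.0)]
print(follower_targets((100.0, 50.0), wedge))
```

One operator steers one aircraft and the swarm size becomes nearly free, which is exactly the manpower argument being made.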


Imagine trying to defend yourself against 10 bullet-size drones flying towards your face with a small but lethal explosive charge at the tip.

I would use an electromagnetic pulse (EMP). And it looks like the Pentagon is on it.

Imagine trying to defend yourself against 10 bullet-size drones flying towards your face with a small but lethal explosive charge at the tip.

I'm not itching for the chance here, but I'd be very surprised if Western R&D isn't cooking up mostly-autonomous, short-range anti-drone weapons (lasers, small caliber guns) that they intend to strap to pretty much everything bigger than a jeep. I imagine that modern electronics manufacturing could build a miniature CIWS for not too much more than the drone it's targeting: the RF and compute electronics to do this are much more ubiquitous than they were when the original technology was deployed on ships.

As others have pointed out, the way to stop a bad guy with a drone is a good guy with a drone.

I think disposable anti-quadcopter devices could be manufactured fairly cheaply.

One idea would be to use a hobbyist rocket motor with a camera, some steering and electronics and a small payload to build a tracking missile.

Or you could have a light quadcopter with a 5-minute battery and a small payload, which could likely be faster than most quadcopters with bigger batteries that carry militarily relevant gear.

Then, if more expensive quadcopters want to stay competitive in conflicts where both sides have similar resources, they would have to find a way to become immune to such cheap methods of attacking drones. Or you could end up with a situation where each side deploys zillions of cheap disposable drones.

Also small drones used to ram other incoming drones. Like what Anduril is making.

They can fly so low though! At the treeline, below the treeline, at waist height... At sea you have clear lines of sight and an elevated position to shoot down at surface-skimming missiles.

Countering these things is hard, as we see in Ukraine. The solution may just be to have more drones of your own.

The noise is pretty distinctive, though. I bet you could cook up something if you had the time and money. A few sensors around the tank for triangulating the source of the buzzing, and four shotguns on a mounted swivel placed around the tank. When the defences are turned on, they auto-target and fire a wide blast of pellets towards the sound.

Obviously, making that safe and viable for actual day-to-day work would be harder but I think that there are things you can do.
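The triangulation step sketched above could look something like this toy version: time-difference-of-arrival (TDOA) from microphones at the vehicle's corners, solved by brute-force grid search. The sensor layout, the search method, and all the numbers are my assumptions for illustration, not a fielded design:

```python
# Toy acoustic triangulation: locate a buzzing source from the time
# differences of arrival (TDOA) at sensors mounted around a vehicle.
import math

SPEED_OF_SOUND = 343.0  # m/s at ~20 C

def arrival_times(source, sensors):
    """Sound propagation time from the source to each sensor."""
    return [math.dist(source, s) / SPEED_OF_SOUND for s in sensors]

def locate(sensors, times, span=50.0, step=0.5):
    """Brute-force grid search for the point whose TDOAs best match `times`."""
    best, best_err = None, float("inf")
    steps = int(span / step)
    for i in range(-steps, steps + 1):
        for j in range(-steps, steps + 1):
            p = (i * step, j * step)
            t = arrival_times(p, sensors)
            # Compare time differences relative to sensor 0 (absolute
            # emission time is unknown, only the differences are observable).
            err = sum((t[k] - t[0] - (times[k] - times[0])) ** 2
                      for k in range(1, len(sensors)))
            if err < best_err:
                best, best_err = p, err
    return best

sensors = [(-2, -2), (2, -2), (2, 2), (-2, 2)]   # corners of the vehicle
true_source = (30.0, 12.0)                        # the incoming drone
est = locate(sensors, arrival_times(true_source, sensors))
print(est)
```

A real system would use a proper least-squares or beamforming solver rather than a grid search, but the geometry (and the need for several spatially separated sensors) is the same.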

I appreciate the ideas, but want to pop in with a relevant point of information. The sound can be (strongly) mitigated. There are extant and 3D-printable rotor designs that do so, and they'd soon be adopted if sound-tracking caught on. Additionally, and notably more effectively for larger drones, you can play counter-frequency sound to directly cancel out the wave amplitude, as we see in modern stealth helicopters.

So, sound tracking is tricky. You could certainly make the argument that this just reduces reaction time (and you’d be right); but that reduced reaction time due to less overall sound is critical. Additionally, I think RandomRanger is suggesting that drone swarms are the future; most area of effect weapons (like your shotgun idea) are still going to be generally pointed in a direction. This means a semi-autonomous swarm moving from multiple directions would be very, very hard to bring down.

We’ll see. EMP on your position could work, at the cost of any electronics you’ve got on you. Potentially we will see electromagnetic field producing weapons that could neutralize a wide angle… But there we run into power density issues (at least for infantry—a mobile platform could afford to have large batteries, capacitors, and the power production to fill them).

But it won’t be easy, at least for a while. It may well be that the human warfighter is on its way out.

Fascinating, thank you for engaging! The last drones I saw sounded like a hive of bees at 200m out. I hadn't even considered cancelling out the noise - I imagine it's harder to do that at the source rather than at the end-point (the listener's ears) but it's very cool that they do it with stealth copters.

The field is moving quickly. Sound is just a stop-gap for this year.

Medium-term, we'll put at least one small-caliber point defense cannon on every single vehicle and aircraft. Small, mass-produced phased-array antennas for a millimeter-wave radar, a couple of current-gen cellphone cameras, maybe IR if you feel fancy, a network between locally adjacent guns, and the rest is software.

Those 70GHz+ radar modules exist today; autonomous vehicles need several of them. Automotive pricing policies ensure they cost single-digit dollar amounts. They go out to 1000' and have amazing resolution. Picking out small things going faster than 30 mph is trivial, even if they are hugging the treeline or the ground. Slewing the gun with two high-power servo motors is trivial; getting below 100ms to line up on target is not unrealistic. Standardize the ammo (probably some type of buckshot effective at 500') and prepare to produce it in absurd quantities.
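To gut-check those tracking numbers, here is the rough geometry. The crossing speed, range, and buckshot idea are from the comment above; the muzzle velocity is my assumption:

```python
# Rough point-defence geometry: how fast must the gun slew to track a drone
# crossing at 30 mph at ~500 ft, and how much lead does the shot need?
import math

MPH_TO_MS = 0.44704
FT_TO_M = 0.3048

drone_speed = 30 * MPH_TO_MS   # ~13.4 m/s crossing speed
slant_range = 500 * FT_TO_M    # ~152 m
shot_speed = 400.0             # assumed shot muzzle velocity, m/s

# Worst-case angular tracking rate for a perpendicular crossing: omega = v / r
omega = math.degrees(drone_speed / slant_range)
print(f"required tracking rate: {omega:.1f} deg/s")

# Lead angle: aim ahead of the target by roughly asin(v_target / v_shot)
lead = math.degrees(math.asin(drone_speed / shot_speed))
print(f"lead angle: {lead:.1f} deg")
```

The answer comes out to only a few degrees per second of slew and about two degrees of lead, both trivial for industrial servos, which supports the claim that the mechanics are the easy part and the sensing/software is where the work is.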

Potentially we will see electromagnetic field producing weapons that could neutralize a wide angle…

Unfortunately, wrapping your drone in aluminum foil mitigates most of that. You might lose your GPS antenna and the radio link to base (unless that one is highly directional, looking away from the E-field gun), but most autonomous targeting functionality is pretty easy to preserve.

Very interesting, thank you for the explanation!

Since it sounds like you’re more educated on this than me, where does this leave us with humans? I can accept that it isn’t inherently impossible to handle drones (though I note that a single PDC would still be overwhelmed by a swarm)… But humans don’t have PDCs, radar, or the electrical generation for any of that.

Regarding EM warfare vs drones, sure, you can be fully autonomous and protected. But it’s a trade off, isn’t it? Full autonomy seems to me to favor larger drones, rather than disposable and cheap swarms. It’s also more expensive, I’d think.

How do you see the future of drone warfare, or warfare in general?


Yeah I saw that exact video. It's crazy to see people literally dodging death in HD on the internet.

Mass production is key. Anduril and co. keep producing these shiny anime trailers and marketing gear, while China puts out big drone shows where 6,000 fly in sync like a next-level firework show. That's a real demonstration of ability.

At some point container ships are just going to vomit out tens of thousands of flying bombs and make Pearl Harbour look like a joke.

I just pray the cartels don’t get their hands on this tech


I love how this war is just the US and China testing their new drone tech against proxy meat puppets.

It brings to mind the Spanish Civil War and what came after it.

I'm about 99% certain this robot is just a very expensive and fancy remote-controlled car. I don't think this incident has any bearing at all on AI, since no AI was involved.

However, on that note, I doubt there will be tens of millions of robots walking around anytime soon, even if (especially if) they are smarter than people...because if they are smarter than humans it will be much, much cheaper and more profitable just to connect them to the internet and have them do email and managerial jobs.

I'm about 99% certain this robot is just a very expensive and fancy remote-controlled car. I don't think this incident has any bearing at all on AI, since no AI was involved.

You would be correct. As much as we talk up "autonomous systems" the overwhelming majority of systems are not "autonomous" at all, they're "remotely piloted". Anything resembling true autonomy is still deep in the realm of DARPA grants and strictly enforced NDAs.

DARPA grants, at least in the area of autonomy, still massively publish. And that's just using the most bland keywords that are mostly getting at summaries of their grand challenges. Those summaries will have gobs of references to the much more specific work that has been published with little reference to DARPA other than a funding acknowledgement. I don't yet know of a tool that allows you to search the literature specifically for funding acknowledgements from DARPA rather than being heavily biased toward papers where "DARPA" actually made it into the title, but there are tons of such papers.

What's the line? Our tools have been rebelling against us since the first farmer stepped on a rake?

The idea that this is some sort of escalation or new and novel threat is frankly just dumb. People have been working on ways to kill each other remotely since the days of Archimedes; it was a major part of his whole "brand". I'm also quite skeptical of the claims that they will be "smarter than the smartest human" and/or that we will "lose basically every challenge against them".

The blanket play in the video was actually quite smart if clumsily executed, and as you yourself observed, the thing that actually stopped him was getting shot by a human.

The idea that this is some sort of escalation or new and novel threat is frankly just dumb

There are few arguments that make me want to climb up a wall like this one. Intellectually, I know people sincerely believe it. But I still sometimes wonder if I'm the victim of a Ken M-esque troll (if so, bravo)

Nuclear weapons are simply an improvement on our ability to blow shit up. You can tell because they're literally measured by comparison to our previous set of explosives. Nobody is blase about them.

It is of absolutely no comfort to me that we went thousands of years trying to set woods on fire before we figured out how to roast cities.

It is of absolutely no comfort to me that we went thousands of years trying to set woods on fire before we figured out how to roast cities.

I'm not suggesting that it should be. I am merely pointing out that the difference is one of degree, not of kind. A man killed by a rock to the head is just as dead as a man killed by a bullet to the head.

I am merely pointing out that the difference is one of degree, not of kind.

Quantity has a quality all its own.

Killing 1 billion people is fundamentally different than killing 1 person, even if the method employed is the same.

I'm familiar with the whole "one death is a tragedy, a billion is a statistic" line of thinking, but is it really fundamentally different?

Let’s compare this to the police-abolitionist left’s latest martyr. On September 15th, the NYPD shot and wounded a man named Derrell Mickles. Two bystanders were also wounded by gunfire, as was one of the officers on scene. (Apparently by a ricocheting bullet.) In stark contrast to the standard complaint about trigger-happy American cops, this scenario is an example of a very common problem, which is police officers being too reluctant to shoot.

The narrative being circulated is that the NYPD “killed a man over a $2.90 subway ticket.” Well, leaving aside the fact that nobody actually died, this is also a lie, because Mickles was shot for repeatedly charging at police officers with a knife. Mickles had jumped the turnstile at a subway station twice in the span of ten minutes. The first time he did so, police followed him and asked him to leave, which he did. Nobody was shot during this encounter, but Mickles brandished a large knife before departing the station. When he then returned and jumped the turnstile a second time, police followed him onto the platform, where the shooting occurred.

As early as 2:28 into the video, Mickles says to an officer, “I’m gonna make you kill me.” He then repeatedly shouts, “Shoot me!” as the officers ask him probably twenty times to drop the knife. 3:25 is the first time that Mickles moves toward police aggressively; at this point in the encounter it would unquestionably be justified to deploy a taser, and probably ruled justified to use deadly force. Instead, they hesitate, and seconds later the train enters the station and Mickles boards an occupied subway car with a knife in his hand. The police’s reluctance to shoot Mickles has now created a situation that is far more dangerous to the public. At this point the officers deploy tasers multiple times, striking Mickles at least once; he was almost certainly on drugs during the encounter, because he shrugs off the taser. Only at this point, with Mickles now having departed the train and re-entered the platform, do the officers pull the trigger, with their backstop being the subway train, rather than the empty platform it would have been had they shot him when they first had the chance. Their indecision - their reluctance to shoot another person even when that person is armed, dangerous, and actively goading them into shooting him - endangered their own lives and the lives of others.

Can you understand why I might look toward the decisiveness, the cold competence of a robot cop who’s not afraid of libelous press coverage or administrative leave or criminal charges by an anti-cop DA, and think, “Hell yeah, let’s get some more of that.” I want men like Derrell Mickles to be dispatched quickly and without fanfare, rather than allowed to put the public at risk. Police officers are nervous, pumped up on adrenaline, and can easily forget their training under stress. A robot would have done what needed to be done, and all the people on that train could have been on their way.

I'm not really seeing the AI side. Human cops are perfectly capable of being competent and decisive and appropriately escalating to violence when needed too. If they aren't, it's mostly due to their orders, their training, and the other factors that play into their incentives. Those were all created by politicians and can be removed by them too.

If we ever have AI robot cops, why wouldn't they be programmed by the exact same people who gave the existing police those orders? Why wouldn't they behave the exact same way, only even harder? All the current companies involved in LLMs have already done this in all of their public models. AI robots (presuming they ever actually exist) would probably be capable of behaving exactly as they are ordered to an even greater extent than human cops. They might well be programmed to machine-gun a white professional with no criminal record for looking at them funny while completely ignoring a severely mentally ill black career criminal actively stabbing people.

I thought you were going somewhere else until the last paragraph. The tools of violence that the police have access to aren't actually very good at their jobs: guns cause deadly wounds and may unintentionally hit bystanders, tasers are much less reliable, and hand-to-hand fights (possibly with batons) inherently involve risk to the officers.

A remote-controlled (not autonomous) bot has the potential to be safer and more targeted than a gun, more incapacitating than a taser, and less risky than getting personally involved. If the police had an effective bot (which doesn't exist in 2024) in the subway, they could've simply arrested him after he started brandishing the knife. No muss, no fuss, and the only risk is some equipment damage if he gets a good stab in.

Right, I’m in favor of developing and utilizing remote-controlled arrest robots in the short term to see how well that goes; I agree that it would significantly reduce a number of risks and make the arrest process far more efficient and effective. If that goes well enough, we might not even need to progress to autonomous robocops! My general point is that policing right now is severely hampered by the fallibility and vulnerability of flesh-and-blood beat cops, and that a move toward more automation and robotics in policing strikes me as a highly promising development.

The narrative being circulated is that the NYPD “killed a man over a $2.90 subway ticket.”

This can be translated as "this is why America will never look like northern Europe in terms of transit"

Can you understand why I might look toward the decisiveness, the cold competence of a robot cop who’s not afraid of libelous press coverage or administrative leave or criminal charges by an anti-cop DA, and think, “Hell yeah, let’s get some more of that.”

We're getting a head start. This - and not some Brian Herbert Terminator bullshit - is why the Butlerian Jihad happened:

"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."

It would be nice to have a technological solution to social problems. But all this does is centralize power in the hands of people who made things like Google Gemini a mess. You think they can't see ahead to the "imported" racial bias? They called it way ahead of time, and took steps.

For all you know, your cold robot is going to be given an androgynous body shape, will only answer to Xir and, in a twisted inversion of I, Robot, have a built in random factor to save a marginalized body every so often instead of doing the "rational" thing, for equity. It only makes sense. Getting stabbed in the subway is awful. But what about the violence done against black bodies and other justice-impacted folx? It's not an easy equation.

Only at this point, with Mickles now having departed the train and re-entered the platform, do the officers pull the trigger, with their backstop being the subway train, rather than the empty platform it would have been had they shot him when they first had the chance.

The platform isn't empty though, there's people on it. You're probably right that the train has a higher density of people, although neither was particularly full. It seems pretty likely that the cops wouldn't want to open fire in a big open space like that. It seems plausible that a bullet might even fly out of the station depending on how it's aimed.

I also like the guy who refused to move from his seat while this is all happening.

What does any of that have to do with anything I said, and how did you type all that out in under 5 minutes?

Also are you under the mistaken impression that the robot shot the guy?

I didn’t reply to you, unless you are also @Blunicorn, so I’m not sure what you mean.

No, the robot in OP’s post did not shoot anyone. What I’m saying is that I am not necessarily afraid of the replacement of human cops by robot cops, if it means an improvement in the competence and decisionmaking of police.

Odd, for some reason your comment was showing up as a child of mine. But after refreshing the page it seems to have sorted itself.

Apologies.

(Note: I'm on mobile, FWIW)