
Culture War Roundup for the week of December 30, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Vegas VBIED and NJ Drone Crossover Event

https://youtube.com/watch?v=xglaXVtQcis?si=ysIxFOPjPZdtHVOG

Nothing is better than a good crossover episode, and it appears that the latest twist in the Las Vegas Trump Hotel bombing picks up from last year's cliffhanger NJ Drones episode.

The Shawn Ryan Show (a B-list independent media podcast) released an episode today: an interview with Sam Shoemate (a D-list Instagram account that highlights military corruption and malfeasance). Between Dec 29-30, Sam was contacted by someone claiming to be Matt Berg, with an urgent request to pass along his info to Shawn Ryan, Pete Hegseth, and Fox News. Sam was in contact with the alleged Mr. Berg on Signal and ultimately received an email claiming that Mr. Berg was on the run, escaping to Mexico, and that the USG was hot on his trail and potentially trying to kill him. Fortunately for Mr. Berg, he had a Vehicle-Borne Improvised Explosive Device (VBIED), which apparently had held them off. The email also made at least two explosive claims.

  • The drones spotted over NJ in December were Chinese kit equipped with gravitic propulsion systems. They were a "show of force" by China, tasked with SIGINT and ISR, similar to the spy balloons from a few years ago. Mr. Berg alleges that the US and China are the only countries that have this tech, and that it's an extremely dangerous situation for obvious reasons: both sides holding future-tech like this creates a MAD scenario.
  • That the USG committed war crimes in Afghanistan in 2019, bombing obvious civilian targets and then covering it up. There is some link to the CIA, DEA, and DOD, and the US covered this up and got away scot-free. There are lots of other details, but that is the gist of it.

It's unclear to me which of the two claims was the bigger deal to Mr. Berg. He's obviously distressed about the China drones, but he spends a lot of time on the war crimes as well.

Back to the story – Sam, the recipient of the email, writes this off as unverifiable crackpottery and sits on it, until the news breaks on Jan 2 that Mr. Berg blew himself up under very strange circumstances at a Trump Hotel in Las Vegas. It appears, from what I am seeing right now, that the mainstream media and the Las Vegas sheriff have confirmed that the email Shawn Ryan published is from Matt Livelsberger. Even if they hadn't, there are plenty of details that make this seem extremely likely.

I think we can table the war crimes issue for the moment and just focus on the China drones.

This email, in and of itself, certainly doesn't prove anything about the origins of the drones, but it is a curious claim. Mr. Berg's military MOS would potentially put him in a position to be read in on advanced US drone programs. A number of people in the independent media world have made the case that the USG has been running black physics programs and cracked gravity 70 years ago. People usually respond to this with ridicule and note that something like this could never be kept secret. Personally, I just add this to the growing list of leakers and dot-connectors indicating that the UAP phenomenon is, at least in part, terrestrial black projects with access to what we would otherwise call science-fiction technology.

If gravity has been cracked, it potentially means that other wild stuff like zero-point energy is also on the table. What other energy source could explain UAPs the size of small cars flying on anti-gravity drives? Tech like this would be extremely dangerous for obvious reasons – reasons that would explain the secrecy. It would also explain why the USG changed its stance on this topic in the last 5-10 years: China has caught up with whatever we've been doing for a long time.

What do you all think of this? Hoax? Crazy person? Legit whistleblower? There are a ton of threads to pull here. Is Mr. Berg even dead? Did he fake his death in order to get a big spotlight on this?

If the US has gravitic secret technology, why are they buying all these F-35s?

In the 1970s, Carter cancelled the B-1 bomber program because he was keen on the upcoming development of stealth bombers. Republicans hammered him for being weak on defence, and Reagan eventually reinstated it... Anyway, if the US had an incredible game-changing technology like this, they wouldn't be spending so much on conventional aircraft. There would be signals and portents.

If the US has unmanned fighter technology, why are they buying all these F-35s?

Pilot mafia. Top Gun: Maverick is a great film; that's what USAF officers want to be doing. They don't want nerds sitting at a desk stealing their glory.

Hahaha, pilot mafia throttling the antigravity program so they can still buzz the control tower is...not one of the craziest theories I've seen.

(The truth is though that I don't think unmanned aircraft are yet in a position to replace manned aircraft, and I'm not sure they will be able to fully barring, basically, AGI.)

I'm not singling you out here, because I hear this a lot and I wonder: what is it that pilots do that an AI can't, that compensates for the training expense and kinematic costs of having a pilot? The pilot can't do damage control on the plane mid-flight. They don't pick out targets; the sensors achieve lock-on. They're not tactically superior; that's been shown in dogfighting simulations, even between equally performing jets. It's all fly-by-wire these days, so their muscles aren't necessary.

I guess a human might be better at the ethics of 'do we bomb this truck or not, given how close it is to civilians?' But again humans have high variance and it's not clear that this is so.

what is it that pilots do that an AI can't, that compensates for the training expense and kinematic costs of having a pilot?

Here is what I think the answer is: recognizing jamming, and adapting immediately to new, uncatalogued threats or situations. You're correct that at this point modern fighters are basically fusing a human with "AI", so the question of what humans can do better anyway is a very valid one.

Modern jamming is extremely good. It's hard for human pilots to tell when they are being jammed. [Edit: or at least that's the point of modern jamming/EW/deception programs such as NEMESIS as I understand it, but, to be clear, I've never been in a position to personally test this, so take all of this with a grain of salt, bearing in mind that I just read about this stuff for fun.] In fact, I think there is a decent chance that it's over for the radar-only bros in the Next Big War, both because of jamming and because using your radar makes you a big ol' target. (F-22 hardest hit!) But I think a human has at least a chance to recognize and understand that he is being jammed even if his fancy computer and his radar do not. That's not really true if you just have the fancy computer and the radar: if 200 radar targets suddenly appear on your screen, having already slipped past your hardware and software's anti-jamming features, your computer is going to think they are real. A pilot will know that they aren't. Now, as AI gets better and better, this will be less of a problem - maybe you can do deep learning on jamming, maybe you can put Claude in the cockpit and he would realize the 200 targets are fake.

TL;DR: we already have AI in the cockpit now, and it can almost certainly get fooled by modern jamming/ECM, so it's good to have another set of eyeballs in the cockpit.
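
To make that concrete, here's a minimal sketch (in Python, with every name and threshold invented for illustration - this reflects no real EW system) of the plausibility check a human effectively applies on top of the sensor picture: if the track count jumps far beyond what was physically possible a moment ago, distrust the display.

```python
from collections import deque

class TrackPlausibilityMonitor:
    """Toy heuristic: flag scans whose track count grows implausibly fast."""

    def __init__(self, window: int = 10, max_growth: int = 5):
        self.history = deque(maxlen=window)  # recent per-scan track counts
        self.max_growth = max_growth         # new tracks/scan deemed physically plausible

    def update(self, track_count: int) -> bool:
        """Return True if this scan looks plausible, False if it smells like deception."""
        suspicious = bool(self.history) and (track_count - max(self.history) > self.max_growth)
        self.history.append(track_count)
        return not suspicious

monitor = TrackPlausibilityMonitor()
for count in [2, 3, 3, 2, 200]:
    verdict = "plausible" if monitor.update(count) else "possible jamming/deception"
    print(count, verdict)  # the 200-contact scan gets flagged
```

The point isn't the heuristic itself - it's that the "this picture doesn't make physical sense" judgment currently lives in the pilot rather than the avionics.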

Secondly, new threats or situations. You touched on this a bit, inasmuch as a human might be able to parse an ambiguous ethical situation better than an AI (although I agree, high variance), but consider a situation where the trained tactics fail. Let's take a hypothetical: air war with China; four planes leave the carrier, one comes back. It turns out that towed decoys aren't effective against the latest Chinese air-to-air missile, and the only reason one guy came back is that he goofed up the decoy deployment like an idiot and ended up maneuvering radically to survive the engagement. If you had had AIs, none of the planes would have come back. And, worse, you would have had no idea what happened to them, because you were operating on a mission, and in an environment, without any communication ability over the combat zone. (This is something else that is nice about pilots: they eject from the plane, and they float, so you can recover them a bit more easily than you can a black box at the bottom of the sea.)

Now you have to tell all of your pilots, "Don't deploy decoys; you're going to have to make some very specific maneuvers to defeat the Chinese A2A missile threats." Said and done. And if you have ~AGI computers, you can tell them the same thing too. But if you have anything a bit dumber, you're going to have to rewrite their software on the fly to defeat the new threat. And it's going to suck if your software engineers aren't on the boat. (It would also suck if your adversaries got hold of your codebase, or reverse-engineered it with their own AI, and used it to instruct their AI fighters on how to pwn your AI fighters every single time. Not exactly possible with high-variance humans!)

TL;DR: it's not good to have new functionality gated behind civilian software engineers stateside during a time of war. (To be fair, I think the Pentagon recognizes this and is working on in-housing more coders.)
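
One way to frame this: if counter-missile tactics are baked into the flight software, a new threat means a code rewrite on the boat; if they're data, the update looks more like the brief the pilots got. A hypothetical sketch (threat names and responses all invented):

```python
import json

# Shipped default doctrine: threat name -> prescribed response.
DEFAULT_DOCTRINE = {
    "legacy_a2a_missile": {"response": "deploy_decoy"},
    "new_chinese_a2a_missile": {"response": "deploy_decoy"},  # pre-war assumption
}

def load_doctrine(path=None):
    """Load the current doctrine table; fall back to the shipped default."""
    if path is None:
        return dict(DEFAULT_DOCTRINE)
    with open(path) as f:
        return json.load(f)

def respond(threat: str, doctrine: dict) -> str:
    """Look up the doctrinal response; default to evasion for unknown threats."""
    return doctrine.get(threat, {"response": "radical_maneuver"})["response"]

# After the disastrous first sortie: push an updated table, no code rewrite.
doctrine = load_doctrine()
doctrine["new_chinese_a2a_missile"] = {"response": "radical_maneuver"}
print(respond("new_chinese_a2a_missile", doctrine))  # radical_maneuver
```

Whether real autonomy stacks can be parameterized this cleanly is exactly the open question; anything resembling learned behavior is much harder to patch afloat than a lookup table.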

Now, imho, none of this means that AI is useless or that drone aircraft are useless in a peer war. But I suspect that this (and the fighter mafia) is why you're seeing things like the "unmanned wingman" approach being adopted (and not only in the States). The "unmanned wingman" approach basically lets you build aircraft with cutting edge AI and launch them right into combat, but because you aren't taking pilots out of the loop entirely, you'll still retain the flexibility and benefits of having an actual human person in the combat environment.

Maybe that won't be necessary - maybe everything will all go according to plan. But I don't think the AI is quite there yet.

Interesting points. In the back of my mind, I was thinking that maybe AI aircraft would be more tactically flexible, since you can change up their training in a quick update, though I can see how it would also be bad if you had software leaks. But the F-35 software has already been leaked to China half a dozen times, and Chinese-made parts have even gotten into the supply chain.

Also, one hopes that they'd put visual cameras on the plane. I think they already do; F-35 pilots have AR helmets that let them see through the plane, I believe.

Even then, I still expect that the unmanned aircraft's advantages in price, quality and scale would be enormous. It wouldn't be 4 fighters going out on that mission, you could have 12 or 16 because training fighter pilots is inherently costly and slow. You would have smaller, faster and more agile aircraft, without human limitations. Whatever crazy dodging a human could do, the machine would easily surpass in terms of g-forces. Each fighter would have the crushing reflexes of a machine and that ruthless, ultra-honed AlphaGo edge of having spent a trillion hours in simulation evolving superior kills.

You could afford to lose those jets on risky missions - even suicide missions if you decided the gains were worth it.
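
(The "AlphaGo edge" here is essentially self-play reinforcement learning. A purely schematic skeleton of that loop - every function below is an invented stand-in, and real efforts like DARPA's AlphaDogfight Trials used full flight simulators and actual RL algorithms rather than a scalar "skill":)

```python
import random

def simulate_engagement(challenger_skill: float, champion_skill: float) -> int:
    """Stand-in flight sim: +1 if the challenger wins, -1 otherwise."""
    p_win = 0.5 + 0.1 * (challenger_skill - champion_skill)
    return 1 if random.random() < p_win else -1

def update_policy(skill: float, outcome: int, lr: float = 0.01) -> float:
    """Stand-in learning step: nudge the policy by the engagement outcome."""
    return skill + lr * outcome

champion, challenger = 0.0, 0.0
for generation in range(100_000):       # "a trillion hours," drastically compressed
    outcome = simulate_engagement(challenger, champion)
    challenger = update_policy(challenger, outcome)
    if generation % 1_000 == 999:       # periodically promote the better policy
        champion = max(champion, challenger)

print(f"champion skill after self-play: {champion:.2f}")
```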

But the F-35 software has already been leaked to China half a dozen times, and Chinese-made parts have even gotten into the supply chain.

YEP! And an additional concern is that if you had any backdoor in an AI aircraft to enable it to be remotely controlled, it would be vulnerable to a cyberattack... one that could impact 100% of the airborne fleet at once. But I'd be surprised if (on the flip side) we put up a fleet of drone aircraft that couldn't be piloted remotely, in case the AI went wonky for whatever reason.
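
On the one-backdoor-owns-the-fleet worry: one standard mitigation is per-airframe command keys, so a single stolen key can't steer every aircraft. A toy illustration using Python's standard library - real military datalinks obviously work very differently and are classified:

```python
import hashlib
import hmac
import secrets

# Each airframe is provisioned with its own secret before launch: compromising
# one key (or one aircraft) yields control of one aircraft, not the fleet.
fleet_keys = {tail: secrets.token_bytes(32) for tail in ("AF-01", "AF-02", "AF-03")}

def sign_command(tail: str, command: bytes) -> bytes:
    """Ground station authenticates a command for one specific airframe."""
    return hmac.new(fleet_keys[tail], command, hashlib.sha256).digest()

def aircraft_accepts(tail: str, command: bytes, tag: bytes) -> bool:
    """Aircraft verifies the tag against its own key before obeying."""
    expected = hmac.new(fleet_keys[tail], command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

cmd = b"RTB"  # "return to base"
tag = sign_command("AF-01", cmd)
print(aircraft_accepts("AF-01", cmd, tag))  # True
print(aircraft_accepts("AF-02", cmd, tag))  # False: AF-01's tag is useless elsewhere
```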

Even then, I still expect that the unmanned aircraft's advantages in price, quality and scale would be enormous. It wouldn't be 4 fighters going out on that mission, you could have 12 or 16 because training fighter pilots is inherently costly and slow. You would have smaller, faster and more agile aircraft, without human limitations. Whatever crazy dodging a human could do, the machine would easily surpass in terms of g-forces. Each fighter would have the crushing reflexes of a machine and that ruthless, ultra-honed AlphaGo edge of having spent a trillion hours in simulation evolving superior kills.

Well, this sounds good, but it's worth considering a few things:

  1. I am open to correction on this, but airframes and associated costs, not pilots, are the constraining factor in aviation. (Easy sanity check: do squadrons have more pilots than airframes? Yes. Do airframes require more maintenance and downtime? Also yes... probably; Air Force guys gotta hit the golf course, I guess...) Removing pilots doesn't remove the logistics footprint, and it doesn't make aircraft dramatically cheaper, which is the pain point. Fighter aircraft are sexy, and lots of people want to fly them. I agree that in a high-attrition war, training pilots could be a bottleneck, but even then we're likely also hitting aircraft production bottlenecks. If all of our aircraft get shot down, we will still have spare pilots left over. Now, at the point where we start getting robotic logistics, I agree that those advantages start to scale.
  2. Humans don't actually weigh all that much relative to an aircraft. The F-35 is 29,300 pounds empty and nearly 66,000 at max takeoff. Figure 200 pounds for a human operator, 200 pounds for an ejection seat, and however much else you want for life support – even if you estimate that a human adds a whole ton to the equation, you're looking at, what, 6% of dry weight and 3% of fully loaded weight? (See the quick arithmetic check after this list.) Sure, every little bit counts. I'm just saying it's probably not a miracle.
  3. It's true that robots won't black out from high-G maneuvers, and this will give them an edge. But it's also true that pilots are capable of doing things that they aren't supposed to do, like "flying the aircraft in such a way as to warp the airframe." There are structural limitations to these things, and human pilots are very capable of surpassing them (much to the chagrin of everyone else in the logistical chain). Removing pilots from the aircraft won't magically make them capable of sick dogfighting moves; they still have to worry about things like "will this snap my wings off." This, incidentally, raises another point in favor of human pilots – robots presumably will not violate NATOPS to gain an important advantage in a dogfight.
  4. Missiles (which are basically AI-enabled suicide drones) already surpass manned fighter aircraft in terms of ability to pull Gs, but manned fighter aircraft are still capable of defeating them kinematically. Replacing pilots with computers won't likely change this, either.
  5. The expensive parts of an aircraft aren't things like the ejection seat; they're things like the radar, or extremely bespoke metallurgy research for high-performance engines, which presumably a purely unmanned force will still need to procure.
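
The arithmetic check promised in point 2 (the one-ton allowance for pilot, seat, and life support is the deliberately generous round number from above, not a sourced figure):

```python
# Checking the weight fractions claimed in point 2.
EMPTY_WEIGHT_LB = 29_300    # F-35 empty weight, per the comment above
MAX_TAKEOFF_LB = 66_000     # "nearly 66,000" at max takeoff
HUMAN_SYSTEMS_LB = 2_000    # generous one-ton allowance: pilot, seat, life support

print(f"share of empty weight:       {HUMAN_SYSTEMS_LB / EMPTY_WEIGHT_LB:.1%}")  # ~6.8%
print(f"share of max takeoff weight: {HUMAN_SYSTEMS_LB / MAX_TAKEOFF_LB:.1%}")   # ~3.0%
```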

Each fighter would have the crushing reflexes of a machine and that ruthless, ultra-honed AlphaGo edge of having spent a trillion hours in simulation evolving superior kills.

Sure. My concern about this is, in part, that I don't think it will produce high variance. If you make your machines really deterministic, then outcomes become more binary, which means you have fewer opportunities for feedback if those outcomes are binary in a way that you don't like. Machines are extremely predictable, and this is not necessarily a good thing. [And if you read the stuff about AI beating pilots in a dogfight, it was, IIRC, because the AI was willing to take head-on gun shots, which human pilots avoid because being nose-on to another fighter aircraft is risky for collision-avoidance purposes. That's interesting – and particularly relevant for what is expected to be a small portion of future air combat – but if they've tested them without a human in the loop in a complex, "realistic" air combat scenario, I haven't heard of it. Doesn't mean it hasn't happened, though!]

The other thing – and honestly, this might be more relevant than the technical capabilities – is that there will be political resistance to outsourcing decision-making entirely to a machine. At what point do you want a machine making decisions about whether to shoot an aircraft with a civilian transponder? Even if machines can make those decisions, people will feel more comfortable knowing the important decisions are being made by someone who can be held accountable (and that a software glitch won't result in every aircraft with a civilian transponder being targeted).

One of the concerns I have about any program predicated on being able to communicate with base is that doing so may be risky or prohibited in a future hostile air environment. This applies to loyal wingman programs and to any sort of drone that's supposed to be able to call back home. This is an entire tangent I could spin a lot of ill-informed speculation on. But the TL;DR is that if you think you might operate in an environment where you can't call home, and there are certain decisions your pilots might need to make that you don't want drones making, you'll be needing pilots.

(To be fair, in real life the pilots would typically get ground-control sign-off on these sorts of decisions when possible. But if your plan is to let ground control make the important decisions, imho you're looking at a fancy remotely-piloted aircraft. And I think that's the direction we're going, at least in part – humans make the important engagement decisions, and the loyal wingmen carry them out.)
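
To sketch that division of labor (human makes the engagement call, the drone executes and defaults to holding fire when it can't phone home), here's a toy authorization gate. Every name and category in it is invented; it's not any real rules-of-engagement implementation:

```python
from enum import Enum, auto

class Action(Enum):
    HOLD_FIRE = auto()
    DEFENSIVE_MANEUVER = auto()
    ENGAGE = auto()

def authorize(threatening: bool, civilian_transponder: bool,
              comms_available: bool, human_approval: bool) -> Action:
    """Toy engagement gate: the machine never decides the sensitive case alone."""
    if not threatening:
        return Action.HOLD_FIRE
    if civilian_transponder:
        # Irreversible and politically sensitive: requires an explicit human call.
        if comms_available and human_approval:
            return Action.ENGAGE
        return Action.DEFENSIVE_MANEUVER  # can't reach a human: evade, don't shoot
    return Action.ENGAGE  # unambiguous hostile: within delegated authority

# Jammed comms + civilian transponder -> the drone evades instead of shooting.
print(authorize(threatening=True, civilian_transponder=True,
                comms_available=False, human_approval=False))
```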

You could afford to lose those jets on risky missions - even suicide missions if you decided the gains were worth it.

Yep! That's the point of stuff like the loyal wingman program: the jets are "attritable." Same with optionally-manned aircraft, where you can remove the pilot if you assess the mission is very risky. And I think this is a good idea: it hedges against weaknesses in AI while opening the door to utilizing it to its fullest potential. I'm not anti-drone; I just don't think the AI is ready for the quantum leap that removing humans from the picture entirely would represent, and it might never be, barring AGI-like capabilities.

Previously I said that I didn't think unmanned aircraft were ready to replace manned aircraft, but let me add a bit more nuance – I do think that unmanned aircraft are ready to supplement manned aircraft. I think moving to a world where we have fewer manned fighters makes sense in the future, possibly now. I think loyal wingman programs are, at a minimum, worth experimenting with. Perhaps in a future generation we'll be able to take humans out of anything resembling fighter aircraft and move them back to manned control centers, perhaps flying or perhaps on the ground (or perhaps we'll replace aircraft with munitions entirely – there's a point where cheap enough cruise and ballistic missiles make a lot of aircraft pointless). My guess (again, as per loyal wingman) is that we'll see pilots moved back from the front lines of air combat where possible. I suspect part of the move to this will be precipitated or accelerated not by AI technology but by laser technology. New technology may end up making fighter aircraft as we know them obsolete as a class.

But unless we're able to incorporate a pretty intelligent AI into an aircraft, I think that replacing aircraft with AI will look a lot more like replacing all aircraft with missiles – which, again, may make sense at a certain point. But it probably means that the aircraft we employ – again, barring ~AGI – won't be a 1:1 replacement for the capabilities of modern fighters; they will be employed in a different way. Maybe we'll see manned fighter aircraft retained for politically sensitive things like air policing missions, but not for relatively straightforward (and risky!) tasks such as deep-penetration strikes on set targets.

If you're curious enough about this I can try to run down an actual pilot of my acquaintance and ask him for his thoughts on our exchange.
