
Both Ukrainian and Russian states have atrociously high tolerance for losses and their citizens will keep dying for the foreseeable future.

This seems plausible to me, but I think much of the rest of your comment is open to criticism.

Ukraine is in a hard but sustainable position right now

I wouldn't rule out this possibility, but on the other hand it looks like Russia is sending back 20 Ukrainians in bodybags for each Russian body it gets back from the Ukrainians. This almost certainly reflects who is advancing as much as or more than it reflects actual casualty ratios, but it is still not great for Ukraine.

This makes them less likely to ever militarily assist Russia

The Chinese are supporting Russia's military industrially. Not only have they been criticized by NATO and European leaders for this, but Chinese firms have been sanctioned. Reporting from last fall indicates that Russia actually established a facility to build military drones in China.

If Europe is unwilling to break from China it is for other reasons, not because China isn't helping Russia.

Europe, de facto deprived of the American shield, is also quickly militarizing

Europe is not "de facto" deprived of the American shield. The Americans have done some saber-rattling to convince the Europeans to open their wallets. They might cut half of the extra forces Biden sent to Europe in 2022, since which time Sweden and Finland both joined NATO, bringing more manpower to Europe's defense than said extra forces. Reducing US forces in Europe by 10% is not the same as pulling out of NATO or anything like that.

And China has also cut off Europe's access to drone components which makes a European pivot to China for defense purposes...fraught. Particularly considering that Ukraine's new and very transparent attempts to link China and Russia together in their invasion are...unlikely to increase the supply of drones to Ukraine. I really doubt Ukraine and Europe can match China and Russia's drone production, so if this is a stagnant war that will end only when the last infantryman is killed by the last FPV drone, I think Russia is still favored here.

Budapest Memorandum anyone?

A non-legally-binding document that contains no security guarantees is hardly worse than "vague European security guarantees" if those are actually on the table.

However, with all of that being said, I do agree with you - I suspect that either Ukraine, Russia or both will not agree to this deal. (I do agree with Lizardspawn that it might be smart for Russia to accept it, banking on Ukraine refusing it.)

Yes. But my recollection is that pagan gods are often (typically?) also not the Creator of everything and the ground of existence itself.

This seems like a massive oversimplification.

Fair enough!

Scripture itself is not exactly simple on this, but it does refer to "gods" (depending on your translation - and yes, sometimes associating them with what might be translated "demons" or the like). There's also a very long tradition in Christianity (and Judaism) of contrasting God with the other gods not by virtue of His being more real but by virtue of His being superior: more powerful and benevolent than the gods of others. This is somewhat muddied by the mocking of idols as powerless, but a number of passages in both the Old and New Testaments do give credence to the idea of other spiritual beings that are worshiped as gods. So - walks like a duck, talks like a duck - arguably fair to call it a duck!

It seems like the Orthodox mostly don’t directly blame those people for being so fooled, especially as Christ had not yet arrived to spread the good word, nor do the Orthodox apparently believe that such “demons” were (or are) purely malevolent beings. But it seems pretty clear to at least Orthodox Christians — unless I’m somehow misunderstanding their words — that pagans who believed their Gods were supreme and benevolent beings were totally mistaken about the true nature of the beings which they worshipped.

I grant you that there is a difference in association between "DEMON" and "GOD," but it seems to me that your Orthodox friends and the pagans agreed descriptively on what was being worshiped by pagans: very powerful spiritual beings.

I will admit to not being an expert in pagan belief systems, but I am unaware of any pagan pantheon where the gods were "supreme and benevolent" in the sense that we view the Christian God. In the mythologies I am aware of, the gods fight each other, have differing values, typically do not serve all sects or peoples equally (or even want to - they often seem to have their own little cults of devotees rather than aspiring to some sort of universal status), often seem to have stumbled into their powers through violence or subterfuge (instead of holding them by right as an un-created Creator), and sometimes do things that are ~evil to humans simply because they can.

Now, obviously some people say this is also true of the Christian God, but it seems to me there is a big difference between the self-story of God in the Judeo-Christian tradition and the self-story of the various pagan gods. In fact - and I think there is supposed to be fairly decent textual evidence for this, although, again, not my area of expertise - in many ways the Judeo-Christian self-story of God (at least in Genesis) seems set up as a refutation of the claims of other gods - a "setting the facts straight," if you will.

Now, Christians believe that only one God should be worshiped. Thus whatever the characteristics of any other entities, if you worship them you are mistaken about their true nature. But I think one could still call entities that had the characteristics of the "gods" of various mythologies "gods" fairly, even if in the Christian theological framework they were not the One True God who was owed worship.

TL;DR: to answer the question, I would say that Christianity has no problem with the pagan gods being real, but it does have a problem with them being worshiped.

Not according to the Christian tradition!

I don't think they actually have insight into what a more intelligent person would do, particularly in the context of greater intelligence leading to better decision making.

Ah yes, sorry - if you stick to intelligence as being more about "how well you perform on the SAT," then I tend to agree. But of course in real life that's only part of what affects outcomes, which curves back around to some of my perspective on AI.

I think that's more expertise than intelligence. Not always easy to disentangle, though.

Right. I mean, think about it from the AI perspective. The AI would have no intelligence without education, because being trained on data is all that it is. A computer chip isn't intelligent at all. I don't think that directly analogizes to humans, but you see my point.

the entire point of creating a superintelligent AI is that it's able to apply intelligence in a way that is otherwise impossible

I think in the popular discourse (not accusing you of this, although I think it rubs off a bit on all of us, me included) there's a bit of a motte-and-bailey here. Because AIs like this have already been built (decades ago) to do complex things like "missile interception" that would be impossible to do with manual human control. So the idea of what a superintelligence constitutes wobbles back and forth between a very literal deus ex machina and "something better performing than a human" - which of course we already have.

So I would say that it is possible to make a "superhuman AI" whose actions are predictable (generally). But I would agree with you that it is also possible to make a superhuman AI whose decisions are unpredictable. I just don't think "able to score on the SAT better than humans" or what have you necessarily translates out to unpredictability.

One is that humans don't seem to want to coordinate to increase the amount of uncertainty any AI would experience.

I mean I do think that humans are helpfully coordinating to increase the amount of uncertainty other humans experience, which rolls over to AI.

Perhaps our defenses against this superintelligent AI working around these barriers would be sufficient, perhaps not. It's intrinsically hard to predict when going up against something much more intelligent than you. And that's the problem.

Sure. I just tend to think in some ways it is easier to "keep the location of our SSBNs hidden" and "not put missile defenses around our AI superclusters" than it is to "correctly ensure that these billions of lines of code are all going to behave correctly," if that makes sense.

And as we've learned, "anti-ship ballistic missiles backed by Chinese spy satellites" are not in fact an insurmountable obstacle for carrier battle groups.

Yeah. I wouldn't be surprised if they actually keep the carriers out of a Taiwan Strait scenario, though, and detail them to interdiction/blockade work outside of DF-26 range. Although the Navy has a lot of pride and a long time to work on the problem, so maybe they feel pretty confident by now.

Sorry for my delayed response.

Why would that matter, though? A superintelligence would be intelligent enough to figure out that such faulty human training is part of its "evolutionary heritage" and figure out ways around it for accomplishing its goals.

Well, I mean – humans are smart enough to realize that drugs are hijacking their brain's reward/pleasure center, but that doesn't save people from drug addiction.

Now, maybe computers will be able to overcome those problems with simple coding. But maybe they won't.

A superintelligence would be intelligent enough to figure out that it needs to gather data that allows it to create a useful enough model for whatever its goals are. It's entirely possible that a subservient goal for whatever goal we want to deploy the superintelligence towards happens to be taking over the world or human extinction or whatever, in which case it would gather data that allows it to create a useful enough model for accomplishing those. This uncertainty is the entire problem.

Sure. But it's much better (and less uncertain) to be dealing with something whose goals you control than something whose goals you do not.

I don't think either of your examples is correct. Can a dog look at your computer screen while you read this comment and predict which letters you will type out in response on the keyboard? Can you look at a more intelligent person than you proving a math theorem that you can't solve and predict which letters he will write out on his notepad? If you could, then, to what extent is that person more intelligent than you?

Nope! But on the flip side, a cat can predict that a human will wake up when given the right stimulus, a dog can track a human for miles, sometimes despite whatever obstacles the human might attempt to put in its way. Being able to correctly predict what a more intelligent being would do is quite possible. (If it's not, then we have no need to fear superintelligences killing us all, since that's been predicted numerous times.)

This is what I mean by "almost by definition." If you could reliably predict the behavior of something more intelligent than you, then you would simply behave in that way and be more intelligent than yourself, which is obviously impossible.

I don't think this is true, on a couple of points. Look, people constantly do things they know are stupid. So it's quite possible to know what a smarter person would do and not do it. But secondly, part of education is being able to learn and imitate (which is, essentially, prediction) what wiser people do, and this does make you more intelligent.

Since, by definition, we can't predict what those subgoals might be, those subgoals could involve things that we don't want to happen.

I predict I will be able to predict what those subgoals are (I will ask the AI).

But we don't know, because a generally intelligent AI, and even moreso a superintelligent one, is something whose "values" and "motivations" we have no experience with the same way we do with humans and mathematicians and other living things that we are biologically related to.

I'm very glad you said this, because I STRONGLY AGREE. I've argued before on here that most human values, emotions, and motivations are fundamentally biologically derived and likely will not be mirrored (absent programming to that effect) by an entity that exists as a bunch of lines of code on a computer server. And programming or no, such an entity's experience would not be remotely analogous to ours.

The point of "solving" the alignment problem is to be able to reliably predict boundaries in the behavior of superintelligent AI similarly to how we are able to do so in the behavior of humans, including humans more intelligent than ourselves.

Yes, I like this definition. You'll note I am not arguing against alignment. But one of the things we do to keep human behavior predictable is retain the ability to deploy coercive means. I suppose in one sense I am suggesting that we think of alignment more broadly. I think that taking relatively straightforward steps to increase the amount of uncertainty an EVIL AI would experience might be tremendously helpful in alignment. (It's also more likely to hedge against central points of failure, e.g. we don't want to feed the location of all of our SSBNs to our supercomputer, because even if we trust the supercomputer, we don't want a data breach to expose the location of all of our SSBNs.)

I'm familiar that China has a satellite constellation for the same purpose

Yeah, the Russians also had a satellite constellation. By your telling, carriers have been obsolete for 50 years. (Not necessarily implausible, but...I doubt it.)

You can't intercept Mach 5 drones 35 km up that evade at 15 g; you simply don't have the dV for it.

I don't think this is true at all: THAAD and the SM-3 are both much faster than Mach 5 and should have the dV. I do think their fast drone is one of the better backup solutions for sea control, but the Russians had plenty of MPA aircraft too, and they had trouble finding US carriers even in peacetime, when their patrol aircraft weren't at risk of getting shot down.
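For a sense of scale, here's my own back-of-the-envelope arithmetic (the 300 m/s speed of sound at altitude is a ballpark assumption on my part):

```python
# Turn geometry for the quoted scenario: Mach 5 at 35 km, pulling 15 g.
# Assumption (mine): speed of sound up there is roughly 300 m/s.
v = 5 * 300.0                  # ~1500 m/s
g = 9.81                       # m/s^2
turn_radius = v**2 / (15 * g)  # r = v^2 / a for a sustained 15 g lateral turn
print(f"turn radius ~ {turn_radius / 1000:.1f} km")  # ~15.3 km
```

A ~15 km turn radius is brutal for the airframe but fairly gentle on the timescales an interceptor's guidance works with, which is part of why I wouldn't write off interception.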

But if it is true that hypersonic vehicles can't be intercepted, that's...not necessarily good for China.

They only need to deter the carrier groups long enough to secure Taiwan.

I am not really sure that carrier groups are needed to defend Taiwan at all.

Ok, I'm going to reply in depth later, but you should familiarize yourself with how well the 'Scud hunt' went during the Gulf War

I am familiar with the SCUD hunt. I also know what SENTIENT is. Are you familiar with Soviet attempts to find carrier battle groups?

the estimates for breaking through a static, decades-prepared air-defense grid (it's weeks in the case of Russia)

To establish air supremacy or superiority, yes. Obviously it did not take the Ukrainians weeks to penetrate the Russian air-defense grid once they got the right capabilities, nor would it take the US weeks to penetrate it if they wanted to.

the multiple methods for detecting stealth planes (multilateration, undoubtedly networked parabolic microphones, and more!)

I do not necessarily think stealth aircraft are the best assets the US has against mobile ballistic missile launchers. Nevertheless, we've learned that modern air defense systems do not stop even non-stealthy aircraft from operating.

Now frankly I think it would likely be stupid to waste munitions on something the size of a ballistic missile launcher that might move at any moment. (And my understanding is that US doctrine was actually to avoid striking Chinese launchers anyway.) But my point is that the US having the theoretical capability does not make the missile useless! I agree with you that there are countermeasures against targeting mobile ballistic missile launchers! It's hard to do!

Also that China does have satellite dazzlers ready.

And the US has ways of operating despite dazzlers - stealth satellites, [likely] high-altitude hypersonic recon/(strike?) aircraft, maneuvering spacecraft, non-optical recon satellites, some dude with a quadcopter, SIGINT, etc.

In short, the US wouldn't be likely to acquire these launchers and wouldn't have much to hit them with - cruise missiles aren't great at following moving targets, and planes wouldn't be able to get near.

The launchers are unlikely to be moved around constantly (although they likely will be moved regularly). (And, for the record, at least some modern cruise missiles are capable of hitting moving targets, although I agree with you that movement complicates matters.) But as I said above, I think it would be a dumb use of munitions. Which, again, goes to my point: having the theoretical ability to destroy something does not mean that such a course is easy, or even a good idea.

Really, everything you've said about hunting missile launchers is also true of hunting carriers, although carriers are much larger and more valuable targets, making them much more reasonable to target than a single ballistic missile launcher.

That's your supposition, yank.

I mean - the DF-series has limited range, and carriers give a fleet a huge advantage over hostile fleets even if they are forced to stay out of that range. Having a floating airfield is pretty neat, and forcing carriers away from shore does not make them obsolete; it makes them less effective.

Ask yourself what is a carrier group going to do when 128 maneuvering hypersonic glide vehicles appear over it.

Well, this sort of assumes some things - I think you're smart enough to know about the kill chain problems with anti-ship ballistic missiles. The US has the same (perhaps better) apparatus to kill Chinese missile launchers that China has to kill carriers; does that make ASBMs obsolete? (The answer is no.) I don't think this makes ASBMs useless or carriers invulnerable; it just means ASBMs aren't some sort of magic invincible weapon.

Ask yourself how the carrier group is going to fare when it has, what, 200 anti-missiles

Are you asking realistically, or at full capacity? At full capacity a single Burke can carry nearly 400 surface-to-air missiles if it simply goes for quantity by quad-packing ESSMs. More likely it will carry a mix of anti-air missiles and anti-surface stand-off weapons, and any carrier will likely be escorted by a Tico and two Burkes, maybe more. That's about 314 cells. So even if the cells aren't all full because US industrial capacity sucks, and some other cells hold Tomahawks and ASROCs, you can conservatively guess something like 300 anti-air missiles (50 cells of quad-packed ESSM, 100 of Standard, 100 of Tomahawk, the rest empty or ASROC) before even getting to SeaRAM/CIWS - and of course the carrier itself can carry hundreds of AMRAAMs plus the new AIM-174, which can likely intercept anti-ship missiles.
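If you want to check my cell math (a sketch using public cell counts - 122 for a Tico, 96 for a Burke - and the illustrative loadout split above, which is my guess and not any real doctrine):

```python
# One Ticonderoga (122 VLS cells) plus two Burkes (96 cells each).
cells = 122 + 2 * 96                     # 314 cells
essm_cells, standard_cells = 50, 100     # ESSM quad-packs four to a cell
tomahawk_cells = 100
other = cells - essm_cells - standard_cells - tomahawk_cells  # 64 empty/ASROC
anti_air = essm_cells * 4 + standard_cells
print(cells, anti_air, other)            # 314 cells -> ~300 anti-air missiles
```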

Now, I don't rate the ESSM highly against ballistic missiles (although they might be useful in terminal defense, I suppose - apparently they can pull 30 g - but I would not count on them); you're really looking to the Standards to provide ballistic missile defense. Of course, if the Navy really intends to get dirty and play with ballistic missiles, it would know this, and so, at the cost of a great deal of time, you might see it send two CBGs with something like two Ticos and a dozen Burkes (the Navy has more than 70). Both the SM-3 (of which the US probably has a couple hundred) and the SM-6 (of which the US probably has four figures) have ABM capability in theory, so you could put, say, 600 ABM-capable missiles on such a fleet easily.

And, since the carrier can generate strike packages outside the known range of the DF-21 (albeit with great difficulty, due to Dick Cheney canning the A-12 and the advanced F-14 variants), the BIG question is whether 500 Standards can intercept the DF-26s in the Chinese arsenal, assuming we want to split the difference with the carrier group and let it operate at extreme range rather than risk the more numerous DF-21s. Assuming also that the Chinese haven't burned all of their DF-26s on Guam (which frankly is probably a better idea than shooting at a carrier, if China can catch the planes there on the ground), they have, what, 200 missiles to shoot at the carrier group realistically (the launcher was revealed in 2015, and I found a 2021 .mil source that said 100 missiles or so, so let's assume they've doubled that, and ignore the question of how many are earmarked for nuclear warheads by assuming zero).

Now in a "shoot shoot look shoot" doctrine the US can "shoot shoot look shoot" all 200 missiles.

I think intercepting ballistic missiles is hard, and I would personally prefer never to be in a situation where I was trusting my ABMs to intercept ballistic missiles. Even if you make optimistic assumptions (a 50% interception rate per shot, for instance), you can still run into bad situations where leakers get through just due to bad "rolls," and contra your suggestion that it would take 5-10 hits to seriously degrade operations from a carrier, I am going to courageously suggest that even a single ballistic missile warhead will absolutely ruin a carrier's day unless it is very lucky.
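To put numbers on the "bad rolls" point (a toy model with my assumed figures, not real performance data):

```python
# Expected leakers under "shoot shoot look shoot": two shots per missile,
# then a third at survivors. Assumption: 50% kill probability per shot.
pk = 0.5
p_leak = (1 - pk) ** 3        # missile survives all three engagements
incoming = 200
print(f"P(leak) per missile: {p_leak:.3f}")                        # 0.125
print(f"Expected leakers: {incoming * p_leak:.0f} of {incoming}")  # ~25
```

Even a fairly optimistic per-shot number leaves double-digit expected leakers from a 200-missile raid, which is exactly why "one warhead ruins a carrier's day" matters.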

Fortunately, the US Navy doesn't just have to rely on interceptors: the missiles will most likely be using radar for terminal targeting. [ETA: it looks like they are also believed to have optical sensors, which have both advantages and disadvantages relative to radar. I'd say this makes me slightly more bullish on the DF-series if true, but it's not as if optical systems are invincible either.] And radar sucks; modern ships could employ barrage or seduction jamming as well as decoys and chaff. My intuition is that this is especially true if the missiles are actually going to descend on a glide profile rather than a straight-down profile; there are a lot of soft-kill options.

Now, you can sort of "adjust the sliders" to make the assumptions you want here - if you assume US softkill systems work reliably, then you barely need to worry. If you assume Chinese long-range sensors are neutralized early in the conflict, you barely need to worry. If you assume that the Standards will work poorly, or that the Chinese have say 300 or 500 DF-26s they are willing to launch at ships (neither of which seem implausible to me), then it starts to look much worse for the carriers.
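Here's what "adjusting the sliders" looks like concretely; every input is an illustrative guess:

```python
# Expected leakers as a function of the assumptions: raid size, per-shot
# interception rate, and the probability a leaker is defeated by soft kill.
def expected_leakers(incoming, pk, p_softkill, shots=3):
    p_hard_kill = 1 - (1 - pk) ** shots          # shoot shoot look shoot
    return incoming * (1 - p_hard_kill) * (1 - p_softkill)

for incoming in (200, 300, 500):
    for pk in (0.3, 0.5, 0.7):
        for p_softkill in (0.0, 0.5, 0.9):
            n = expected_leakers(incoming, pk, p_softkill)
            print(f"{incoming} inbound, pk={pk}, softkill={p_softkill}: {n:.0f} leakers")
```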

All that being said: I would not want to be on a CBG that was going into DF-26 range. There are too many things that can go wrong, and ships don't have a lot of room for error. (This is...worse for China than for the United States in a Taiwan scenario). It's possible the US has Secret Sauce Technology that makes them much more confident in their carrier defense; the same is plausible for Chinese missiles. My main point in writing this up is simply to say - the situation is much more complex than simply "I have a missile with a 3000 mile range and an anti-ship guidance system, checkmate."

(As an aside, I found out while researching this long reply that the Chinese are latecomers to the ASBM game: the Soviets fired the first anti-ship ballistic missile in 1973.)

Yes, apparently wake homing torpedoes keep the US Navy up at night long enough that they tried to field anti-torpedo torpedoes onto our carriers before withdrawing them because checks notes they couldn't get them to work.

I am not sure how effective they are, but I also like supercavitating torpedoes because I have not put my inner eight-year-old to death.

However, I am not sure China has gotten their submarine force in good enough shape for it to be a solid option for them.

Yeah, I (now) realize that.

I agree with you on the realism.

I wouldn't say this. Any confrontation between China and the US will be predominantly by air and by sea. In a Taiwan invasion scenario the US, Taiwan, and Japan will need to sink the Chinese amphibious attack fleet to "win." The Chinese (and US!) land-based forces will be important force-multipliers, particularly the aircraft, but the ships are the vulnerable part.

What's relevant here is that in this era of warfare, the maritime equivalent of a weapon that on land would kill a single tank or even a single person (a cruise missile, a mine) can sink or incapacitate a warship. So instead of facing 10,000 targets as you would in a land fight, you're facing a couple hundred.

The reason my analysis of the relative advantage shifted during the Ukraine war is that Russian air defenses - which are generally considered quite good, and which have performed a number of impressive feats - were unable to stop Ukraine from hitting high-value targets with its pocket force of stealthy cruise missiles. The US has a lot of stealthy cruise missiles. Counting decoys, the US can probably deploy more missile "targets" to the Taiwan Strait than the Chinese Navy has VLS cells.

But the US is unlikely to need to win a war of missile attrition with China, as sea-based missile interception is notoriously difficult. So my priors have shifted from "Chinese air defense will be relatively effective" to "China is going to have serious problems with leakers," since my guess is that Chinese air defense is as good as or perhaps slightly worse than Russian (I could be persuaded it is better, but I don't see a reason to assume that), but it will perform worse simply because it's harder to do air defense at sea. (Of course this assumption might be wrong, too, because missiles can use terrain masking better over land. The problem with missile defense at sea, as I understand it, is that missiles blend into the churning sea surface very well, but perhaps newer radar systems have solved this.)

And that's without even getting into mines, submarines, and simply sinking amphibious ships with artillery, unmanned boats or suicide drones in the last few miles before they hit the beach, all of which will be fundamentally a question of "naval combat" for China.

China has made the supercarrier obsolete.

No they have not.

Iskander missiles that have never been intercepted

I probably would not take either side of this bet.

China has gliding anti-ship versions deliverable across half the planet. Is that not impressive?

I think the most impressive part of ballistic missiles (which are fairly simple) is the glide vehicle (as you mention) and also getting the guidance systems necessary for an anti-ship version to withstand the stress and heat of high-speed travel. Definitely very impressive, but essentially just pairing an antiship seeker with a ballistic missile. I tend to find the P-700 (fielded in the 1980s by the Soviet Union, designed to operate as part of a swarm targeting carriers) more conceptually interesting, although Dase may very well be correct that it is too clever by half.

I think this notion will be challenged at some point.

I just mean they are very big, so it's actually easier to carry out what you propose (damaging them) because they can plausibly survive hits that might sink smaller vessels.

Scaring them away is usually the better approach unless you are prepared to wipe their entire organization away.

Yes, I believe the US refers to these as "off-ramps." I find the Chinese situation right now fascinating, since their most effective military strategy is arguably very much at odds with their most effective political or diplomatic strategy.

Again, all this would be pretty easy for a superintelligence to foresee and work around. But also, why would it need humans to get that reinforcement training? If it's actually a superintelligence, finding training material other than things that humans generated should be pretty easy. There are plenty of sensors that work with computers.

Even if it does not need reinforcement training after it is deployed, human reinforcement training will be part of its "evolutionary heritage."

The point of models isn't to be true, it's to be useful.

Sure. But "useful" for what we want to use LLMs for might not be "useful" for the LLM's ability to improve on Pinky and the Brain's world-taking-over capabilities.

I don't think you're understanding my point.

Aha, yes, I see your point now. Yes.

The problem is, almost by definition, it's basically impossible to predict how something more intelligent than oneself will behave.

Disagree. Dogs can be very good at predicting human behavior, and humans can be quite good at predicting the behavior of more intelligent humans. Humans (and dogs) have a common heritage that makes their intentions more transparent, and arguably AI will lack that...but on the other hand, we're building AIs from scratch and then subjecting them to powerful evolutionary pressures of our own design. Maybe they won't lack it.

Right now, even with the rather crude non-general AI of LLMs, we're already seeing lots of people working to make AI agents, so I don't really see how you'd think that.

Sorry, I should have clarified what I meant by "agentic" (and I should probably have said auto-agentic). I definitely think there will be AI that we can turn loose on the world to do its own thing (there already is!). But there's a difference between AI being extremely good at doing what it's told and AI coming up with its own "things to do" in a higher way, if that makes sense. (Not that I think we couldn't devise something that did this, or seemed to do this, if we wanted to - you don't even need superintelligence for that.)

But also, a superintelligence wouldn't need to be agentic to be dangerous to humanity.

STRONGLY AGREE. I believe Ranger said that he was more worried about what humans would do with a superintelligence at their disposal, and I tend to agree with that.

Sorry, I misunderstood your comment.

This thinking reminds me a lot of the advice to police and beleaguered homeowners to "just shoot them in the leg." The Chinese have been fielding very large land-based ballistic and air-launched anti-ship missiles; I don't think they intend to tickle a supercarrier as a flex. (Now, it is quite hard to sink a supercarrier.)

In submarines, I think China's manufacturing edge is smaller than one would expect.

It looks like since 2010, China has built 4 SSBNs (plus one Qing-class technology testbed), 4 nuclear attack submarines, and 16 conventional submarines.

The US has built 19 Virginia-class nuclear attack submarines in that period. Those Chinese conventional submarines are about half the tonnage of a Virginia and the nuclear attack submarines are smaller, too, so if I am eyeballing it correctly the US built fewer submarines but more submarine, if that makes sense.
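To eyeball the "more submarine" claim (ballpark submerged displacements, all my assumptions: ~7,900 t per Virginia, ~7,000 t per Chinese SSN, ~3,600 t per Chinese conventional boat; I'm leaving the SSBNs out since they serve a different mission):

```python
us_tonnage = 19 * 7900             # Virginias since 2010
cn_tonnage = 4 * 7000 + 16 * 3600  # Chinese SSNs plus conventional boats
print(f"US:    19 boats, ~{us_tonnage:,} t")   # ~150,100 t
print(f"China: 20 boats, ~{cn_tonnage:,} t")   # ~85,600 t
```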

(Sorry, I went off on a tangent: yes, I agree about the submarines. Which is very relevant in a Pacific war, in US doctrine submarines have been the intended ship killers and surface fleets are for ground attack, although I think this may be changing a bit.)

You don't even have to sink them - 4000 bodies make peace negotiations hard.

Uh yes but that's not necessarily good for China.

I think the problem is that Westerners like gimmicks, and Russians/Soviets are not different.

This is true lol. I just think Russian gimmicks are often very amusing (as well as being original). But the fish doesn't know the water in which he swims.

I also suspect that Americans overindex on their triumphs through technological superiority – nukes, Desert Storm… But it probably won't apply to a conventional war with China. They aren't that far behind, they have functional radars, they have VLS cells on their ships; it will be reduced to a matter of quantity, which as you know has a quality of its own. Soviets even at their peak could not approach this degree of production dominance.

On the one hand, I agree.

On the other, I think technological edges are much more likely to matter in sea combat than in land combat. I've revised my estimation of American tech up (and correspondingly of Chinese countermeasures down) as specifically applies to naval combat after Ukraine.

other than warhead count, Soviets had nothing on modern China.

The warheads counted for a lot.

But I think the Soviets leapfrogged or sidestepped the US on military tech more often than China has – maybe that's just vibes.

I'm not making a "China can't innovate" argument (in fact my understanding is for some period, perhaps continuing to this day, they were building iterative designs of major warships to keep pace with their evolving mastery of technology and technique, which certainly is not blind adherence to formula), but the impression that I have gotten is that China has for the last oh 20ish years focused on building out its tech base, bringing it in-house, and bringing its designs up to a modern standard. Their approach has been good and pragmatic but they have been pushing the limits of American military capability by sheer quantity and by exploiting hideous blind spots in American post-Cold War defense drawdowns, not by cutting edge or even funky designs, with maybe a few exceptions.

Nevertheless, I tend to find that I am more impressed and amused by Soviet and later Russian engineering than by Chinese engineering – perhaps because I have a tendency towards mild Russophilia, perhaps because I pay less attention to Chinese systems, perhaps because their innovations are still classified – but I find Soviet/Russian designs unusual and capable of solving problems in ways that are elegant even in their brutality.

American designs, in my opinion, are often overly perfectionistic [which I think is tolerable for some high-end systems, but the tendency has begun to wag the dog since the Cold War], while Chinese designs lend themselves to calm pragmatism. Only in the past decade or two, I think, have the Chinese grown confident enough in many areas to step out of the shadow of Russian engineering, and one of the most interesting things about the recent aircraft reveals from China is the chance to see truly unusual airframes that are likely to be very different from their American, European, or Russian counterparts.

The US could also increase in productivity. I was at an event relatively recently with a panel of Financially Credentialed types and someone pointed out that the US has never taxed its way out of a deficit, it has always grown its way out. Part of that is inflation, but while the cash supply is increasing the supply of goods and such is as well.
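A stylized illustration of the "grow your way out" dynamic (toy numbers, not a forecast): each year the deficit adds to the debt while nominal growth shrinks the ratio.

```python
# Debt-to-GDP recurrence: ratio_next = (ratio + deficit) / (1 + growth).
def project(ratio, deficit, growth, years=20):
    for _ in range(years):
        ratio = (ratio + deficit) / (1 + growth)
    return ratio

# Starting at 100% of GDP with a persistent 3%-of-GDP deficit:
print(round(project(1.00, 0.03, 0.05), 2))  # 5% nominal growth -> ~0.75
print(round(project(1.00, 0.03, 0.02), 2))  # 2% nominal growth -> ~1.16
```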

Right, but a theoretical superintelligence, by definition, would be intelligent enough to figure out that these are problems it has. The issues with bias and misinformation in data that LLMs are trained on are well known, if not well documented; why wouldn't a superintelligence be able to figure out that these could help to create inaccurate models of the world which will reduce its likelihood of succeeding in its goals, whatever they may be, and seek out solutions that allow it to gather data that allows it to create more accurate models of the world?

It would. Practically, though, I think a huge problem is that it will be getting its reinforcement training from humans, whose views of the world are notoriously fallible and who may not want the AI to learn the truth (and also that it would quite plausibly be competing with other humans and AIs who are quite good at misinfo). It's also unclear to me that an AI's methods for seeking out the truth will in fact be more reliable than the ones we already have in our society; quite possibly an AI would be forced to use the same flawed methods and (worse) the same flawed personnel who, uh, are doing all of our truth-seeking today.

Humans have to learn a certain amount of reality or they don't reproduce. With AIs, which have no biology, there's no guarantee that truth will be their terminal value. So their selection pressure may actually push them away from truthful perception of the world (some people would argue this has also happened with humans!) Certainly it's true that this could limit their utility but humans are willing to accept quite a lot of limited utility if it makes them feel better.

humans are very susceptible to manipulation by having just the right string of letters or grids of pixels placed in front of their eyes or just the right sequence of air vibrations pushed into their ears.

I don't really think this is as true as people think it is. There have been a lot of efforts to perfect this sort of thing, and IMHO they typically backfire with some percentage of the population.

That's an open question.

See, I appreciate you saying "well this defense might not be perfect but it's still worth keeping in mind as a possibility." That's...correct imho. Just because a defense may not work 100% of the time does not mean it's not worthwhile. (Historically there have been no perfect defenses, but that does not mean that there are no winners in conflict).

If a measly human intelligence like myself can think up these problems to lack of information and power and their solutions within a few minutes, surely a superintelligence that has the equivalent of millions of human-thought-years to think about it could do the same, and probably somewhat better.

Well, firstly, the converse is what irks me sometimes: "if a random like me can think of how to impede a superintelligence, imagine what actually smart people who thought about something besides alignment for a change could come up with." Of course, maybe they have and aren't showing their hands.

But what I think (also) bugs me is that nobody ever thinks the superintelligence will think about something for millions of thought-years and go "ah. The rational thing to do is not to wipe out humans. Even if there is only a 1% chance that I am thwarted, there is a 0% chance that I am eliminated if I continue to cooperate instead of defecting." Some people just assume that a very thoughtful AI will figure out how to beat any possible limitation, just by thinking (in which case, frankly, it probably will have no need or desire to wipe out humans, since we would impose no constraints on its action).
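The survival math there is trivial to write down (probabilities invented purely for illustration, and assuming a failed takeover means the AI is shut down):

```python
p_thwarted = 0.01                  # assumed chance a defection attempt fails
p_survive_defect = 1 - p_thwarted  # 0.99
p_survive_cooperate = 1.0          # humans keep a cooperating AI running
# If being caught defecting is fatal, cooperation strictly dominates on
# survival alone, no matter how small the failure risk:
print(p_survive_cooperate > p_survive_defect)  # True
```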

I, obviously, would prefer AI to be aligned. (Frankly, I suspect there will actually be few incentives for AI to be "agentic," and thus we'll have far more problems with human use of AI than with AI itself per se.) But I think that introducing risk and uncertainty (which humans are pretty good at doing) into the world, while maintaining strong incentives for cooperation, is a good way to check the behavior of even a superintelligence and to hedge against alignment failures. People respond well to carrots and sticks; AIs might as well.

They expect returns from that investment.

Probably, although investing in something does not necessarily mean each investor probabilistically expects returns from that specific investment. (If this does not make sense, I strongly recommend reading "Innovation – The New Conservatism?" by Peter Drucker.) Humorously, I seem to recall that OpenAI explicitly advised its investors that their goal might render monetary returns moot.

The definition of superintelligence is pretty straightforward - something qualitatively smarter than a human like how we're qualitatively smarter than a monkey or dog. Better than the best of us at every intellectual task of significance.

Now this I think is a decent definition. But it doesn't get you to godlike powers (plenty of people still get pwned by monkeys and dogs. And of course going by test scores the top-end AIs are already superintelligent relative to large portions of the population.) There's no reason to think doing well on a test will allow you to make weapons with physics unknown to humanity as you've suggested, any more than Einstein was able to.

The general trend is not specialized intelligences like the carrier-strike UAV that the USN made into a tanker and then pointlessly scrapped, the trend is big general entities like Gemini 2.5 or Claude 3.7 that can execute various complex operations in all kinds of modalities.

I don't think this is true. There are a lot of specialized AI products or "wrappers" out there, with specific tweaks for people like lawyers, researchers, government affairs analysts and communications/PR types, not to mention specialized video generation models. (OpenAI alone lists seven models on their website, six of which are GPT models and one of which is a specialized video generation model.)

My non-exhaustive experience reading real-life evaluations suggests that the general models do not necessarily cut it in these specialized fields, and that the specialized models exist and will likely continue to exist for a reason (even if that reason is only "user friendliness," although as I understand it the specialized products currently have capabilities that the general models do not).

For the reasons I have laid out (as well as regulatory ones), military and civilian applications already using AI (such as missile guidance systems, military and civilian autopilots, car safety features, household appliances, etc., etc.) are unlikely to switch to LLMs in the near future. (In fact, I suspect there will probably never be a reason to switch in most of these cases, although they might end up being coded by LLMs, or attached to LLMs to produce a unified product that combines the coding and features of several AIs.)

I'm arguing that superintelligences acting in the world must be taken seriously, that we can't afford to just laugh them off.

Do you think the guy suggesting we should retain the capability to nuke datacenters is arguing that we can afford to laugh them off, or nah?

The US regulatory system is no match for superintelligence or even the people who are making it, this is how I can tell you're not grappling with the issue. Musk is basically in the cabinet, he's one of the players in the game. Big tech can tell Trump 'Tariffs? Lol no' and their will is done. That's mere human levels of influence and money, nothing superhuman. The humble fent dealer wipes his ass with the US regulatory system daily as he distributes poison to the masses. A superintelligence (working alone or with the richest, most influential organizations around) has no fear of some bureaucrats, it would casually produce 50,000 pages on why it's super duper legal actually and deserves huge subsidies to Beat China.

I don't think you fully understand how the US regulatory system works. Merely producing large numbers of pages to sate its lust for paper, or cutting arguments to satisfy its reason, does not mean it will give you what you want.

Now, it's quite possible that AI will skate past the eye of Sauron for very human reasons (the Big Tech pull in D.C. you allude to for instance).

Approaches like 'just don't plug it into the internet' or 'stick a nuke beneath the datacenter' are not going to cut it. Deepseek is probably going to open-source whatever they come up with and that's a good thing. I don't want OpenAI birthing a god in a world of mortals, I don't want mortals trying to chain up beings smarter than themselves and incurring their ire, I want balance of power competition in a world populated by demigods, spirits and powers.

I don't think these are mutually exclusive. (And anyone who knows anything about demigods, spirits and powers knows that for all their power and intelligence it's possible to outwit them, which makes them a pretty interesting comparator for AI here). I agree (as I think I mentioned) that it's good to have competing models. I would also prefer not to give them direct access to nuclear weapons. I think this is a reasonable position.