Thank you! I will admit to not being very familiar with all of that.
What I do find interesting about Ireland was how relatively little violence separatists had to engage in to succeed. There were relatively few deaths - it's been a while, but I seem to recall looking up the per capita homicide rate and finding that it was lower than in a major US metro area at the same time (although the IRA favored bombs, which tend to maim many more than they kill, so one could argue that looking at deaths understates the violence). If the Quebecois were able to get something meaningful, that seems like another data point in that direction.
I agree with most of this, but I think one of the primary failure modes of the "those guys are evil and want to kill me" line of reasoning is that, within a single modern-day society, those other guys can't kill you.
I mean - I agree that people overfit the "those guys are evil and want to kill me" model, but I absolutely don't agree that violence is a thing of the past.
I don't know what a modern resolution to this type of schism looks like.
Well, it can look like full-scale socio-cultural domination by one side through measures short of disorganized violence, until the other side is either decisively beaten or practically driven extinct, or at least underground, by social pressure and/or state action ("legitimate violence"). The Civil Rights Movement achieved this sort of victory (at least for a time - perhaps not for all time, however).
But I think modern western societies are fully capable of flipping to violence to resolve the problem instead. Ireland's independence was a successful use of this; less successful attempts, such as by Puerto Rican or Quebecois separatists, didn't manage to garner enough support or reach critical mass, and are thus not really remembered as anything besides some low-level terrorist violence.
The inability to properly simulate the world as individual actors with different views and values even when they're allied together into groups is one major cause of this.
To extend this a little bit, though, if we're going to care about proper modeling, I think it's important to understand that there are forces pushing individuals towards tribalism besides idiocy.
I think there's this idea that tribalism is, basically, ancient grug-style lizard-brained thinking, and that in more recent and temperate times we've learned that trade and cooperation are better through the use of our ascended reasoning faculties, but due to the lizard hindbrain, or something, tribalism keeps rearing its ugly head.
However, this story is mostly nonsense. About as far back as you can go in the historical record, you find that people were well aware that trade and cooperation were extremely lucrative. Archeologists are continually surprised at the length and depth of ancient trade networks. The ability to model minds that have different values and priorities in order to search out modes of cooperation isn't a new innovation; it's a very old pattern of human behavior.
Tribalistic thinking exists because sometimes the other tribe actually does want to kill you.
And when this happens, switching over to a simplified model of "those guys are evil and want to kill me" is useful because modeling other minds requires a lot of cognitive bandwidth and your primary cognitive concern right now is to get and not get got.
Furthermore, switching back out from that model to a cooperate-trade-understand mentality as soon as you start winning and the other guys show up and say "we're not that different, you and I, don't all our mothers love us? This is all a misunderstanding, we just want to trade and live peaceably" is a profound failure to model other minds, because that's the oldest trick in the book and if you don't model the possibility that the other guy is actually evil and still wants to kill you and part of his evil murderous intent extends to "lying to you" you're a sucker.
Sometimes, tribalistic idiot types overfit the "evil and wants to kill me" explanation, dismiss the possibility that it was all a big misunderstanding, and perpetuate conflict unnecessarily. Sometimes, ascended galaxy-brained intelligent types underfit the "evil and wants to kill me" explanation, eagerly agree that it was all a big misunderstanding, and are promptly killed by their evil enemies.
The highest path of wisdom is to understand others so thoroughly that you can understand when the enemy actually wants to kill you and when they genuinely want to find the path to peace. But this means that the wise person sometimes sounds no different than the naive cooperators...and that sometimes he sounds no different than the idiotic tribalists.
Finally, I think this model suggests why sometimes really smart people seem to become tribal idiots (or naive cooperators). If you put a ton of mental work into understanding the other side, it's easy to just rest on your laurels and turn your mind towards winning the war/seizing the peace. As you point out, "the other side" is almost always composed of different groups, and it is almost always in flux. This means that an accurate understanding of the other side, or your own, if not updated, can quickly become woefully insufficient. But building and maintaining an accurate model is time-consuming work, and it's no surprise that many smart people do not have the time or inclination to do so.
Only if USA goes to war over Taiwan, for which the case is weak.
If the United States doesn't, it likely kicks off a regional nuclear arms race. The US is relatively keen to avoid this for numerous reasons. I am not predicting that the US will go to war, but it has reasons to do so.
And 60 k casualties is optimistic in a war near the shores of China.
Setting aside for a moment the fact that the US could plausibly fight such a war a surprisingly long way from Chinese shores, the weird thing about sea wars is that they can be very low casualty compared to land wars. The Chinese could sink every destroyer in the US arsenal with 100% casualties and they'd only kill about 24k Americans. Sinking ten aircraft carriers instead would get them to about 33k; they might achieve similar numbers by killing every single servicemember on Guam during a conflict. If we look at a case where the US takes severe, possibly war-losing losses (say 50% of crew are killed aboard 2 carriers, 8 destroyers, 6 submarines, plus four-digit losses on the ground and 300 aircraft in the air) the final tally could still end up with fewer than 10,000 American deaths. I'm not making a predictive argument here, and I could certainly see the numbers going much higher, just pointing out how very low the personnel density is in an air-sea war compared to ground conflict. (If you look at World War Two as a comparison, on a quick Google it looks like around 60K Naval personnel were killed, about 20% of the losses in the Army/Army Air Force.)
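The "fewer than 10,000" figure above can be sanity-checked with a rough tally. The crew sizes and loss assumptions below are my own approximations (carrier ship's company without air wing, typical destroyer and attack-submarine complements, an assumed average aircrew per aircraft), not figures from the original comment:

```python
# Back-of-envelope tally for the "severe, possibly war-losing losses" scenario.
# Crew sizes are approximate public figures; ground and aircrew losses are
# assumptions chosen to match the "four-digit" and "300 aircraft" stipulations.
losses = {
    "2 carriers, 50% of ~3,200 ship's company": 2 * 3200 * 0.5,
    "8 destroyers, 50% of ~300 crew": 8 * 300 * 0.5,
    "6 submarines, 50% of ~135 crew": 6 * 135 * 0.5,
    "ground losses (four-digit assumption)": 3000,
    "300 aircraft at ~1.5 aircrew each (assumed)": 300 * 1.5,
}

total = sum(losses.values())
for cause, n in losses.items():
    print(f"{cause}: {n:,.0f}")
print(f"total estimated deaths: {total:,.0f}")  # under 10,000 with these assumptions
```

With these (debatable) inputs the tally comes out in the mid-four digits, which is the point: even catastrophic ship losses involve far fewer people than equivalent defeats in a land war.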
Now, losing even a single carrier with all hands would be extremely high casualty relative to the War on Terror, but I would not be surprised at all if the US could fight and win a war against China and take fewer casualties than in Vietnam.
I don't know why everybody has drunk the Kool aid that somehow lack of Taiwan semiconductors is a death blow.
If I had to guess, it's because "let the Taiwanese have the best semiconductors" was a deliberate maneuver by the US to contain the CCP, and this narrative is part of that maneuver and will be used to get buy-in for actions taken to prevent China retaking Taiwan.
There is no invasion that will cause problems.
If things get really out of hand, we could be looking at the destruction of vast parts of the world industrial system, not just in Taiwan, but also in China, plus the global disruption of sea trade. They're called World Wars for a reason!
Not a China hand at all, but, bouncing off of your point, a genuine question: does Xi really need to trump up corruption charges if he just wants to bring in fresh talent? Seems like it would be much easier and less embarrassing to just say "why don't you retire." I realize that might not work on people who are trying to cling to power, but you'd think you'd only need to purge one or two of them successfully to make the point. Purging people after that suggests (at least to me) concerns about either their power or trustworthiness.
What's interesting to me is that de-arresting someone is a crime (obviously) that can be made to stick if you catch the people doing the de-arresting, but conspiring to de-arrest someone is also presumably a crime, and given that the laws being enforced in this context are federal laws, I imagine there's a federal conspiracy statute that can be leveraged, possibly against the coordinators even if they aren't actually participating in the "de-arrests."
I wonder if part of the goal of running these ICE operations publicly is precisely to invite this sort of behavior and then roll up as many people as possible.
Interesting link, thank you for dropping it.
From what I understand there is basically an entire playbook or script on nonviolent civil disobedience – people have figured out how to get their point across (and get the "cops arrest peaceful mom" pictures) while at the same time minimizing the odds that things turn actually violent by peacefully surrendering to the cops, not resisting arrest, possibly even notifying the police of their intentions ahead of time, etc.
I have not followed the protests in Minnesota closely, but from what I have seen, I do not think that playbook is being followed. Whether that's due to untrained "normies" turning out or to the instructions of protest coordinators, I do not know. But if these are being coordinated (which does seem to be true to a significant degree) and the coordinators are deliberately choosing more escalatory tactics, that's very telling in my mind.
One could also check and see if Congress signed off on something like this. And as it turns out, they signed off on a huge expansion of the ICE budget! Perhaps it was used in ways they didn't anticipate, but it's true that not only did Trump run on an immigration enforcement platform, but Congress also supported immigration enforcement.
Yes; Obama was President in 2013.
And he did so while managing not to kill any American citizens.
Technically untrue in a darkly funny way (an ICE agent shot his supervisor and was subsequently shot and killed by another ICE agent).
The Obama administration did detain and possibly wrongfully deport American citizens, as well.
That seems likely, but I could see the rationale being that you want to make a lasting impression.
The political pressure on jurisdictions that aren't cooperating with ICE might be much more relevant, though.
what about enforcing immigration laws necessitates a mass of poorly trained officers
I freely confess that I don't know what the view looks like from the inside, but I sort of suspect a lot of the way ICE is being used is to create political pressure/optics.
I wonder if it would be much more efficient (to say nothing of much less optically problematic) to just send a few guys in plainclothes to pick up each dude ID'd as illegal. I sort of suspect that "running around in camo and plate carriers" is either the idea of people in ICE who think it is cool, or the idea of admin higher-ups who think that creating a scene like that is necessary to intimidate would-be illegals and deter illegal immigration. But part of me suspects that quietly and efficiently deporting massive numbers of illegals is in its own way scarier and more deterring than these highly visible scenes, if run at high volume for a sustained period of time.
Really interested to know if there is anyone here who can speak to that though.
We have individual rights in the United States; you don't respond to society-wide problems by violating the rights of individuals for no reason in isolated incidents.
From a purely strategic perspective, say you believe that Trump's people are a few years away from turning the US into a dictatorship. In that case, wouldn't you want to be violent before they consolidate all power, and not after?
The optics on who shoots first matter a lot, and I think people understand this intuitively. The South probably did more harm to its cause than good by shooting first during the Civil War.
Nuclear is the only real weapon system that has truly had a strongly bounded development ceiling. We're still making better missiles, airplanes, drones, boats, etc.
The military does not actually just pursue boundless development. Budgets have to be justified, and if a threat is not assessed to be present, then development will not be pursued. For this reason the government often forgoes or even loses capabilities simply for budgetary concerns. The F-22, for instance, faced budgetary scrutiny after the end of the Cold War because the Soviet threat no longer existed, and when it was procured, it was without an IR sensor (which was arguably short-sighted: they are now integrating IR pods onto the Raptor). And even that procurement decision was made because the Raptor was assessed to be a less risky, more mature design (the F-23 was superior in many respects). Most of the ships the US has now were essentially the cheaper options: the Virginia class was a budget Seawolf, the Tico cruisers were meant to be the low end of a high-low mix that never came about, and the Burke class has, as I understand it, been dramatically expanded in capability beyond what was originally intended rather than spending more money on a ground-up design. The retirement of the F-14 left the Navy without a fleet interceptor, and it's only been recently that the Rhino has been able to pick up the slack with improved AMRAAMs and the air-launched Standard; even now the Rhino is likely dramatically less capable than an upgraded F-14 would have been (at least in payload, range, and sensor power, although the F/A-18 has a lower RCS).
Basically, the general rule in the military is not to procure stuff simply because it is good. You procure stuff to defeat a very specific threat. You won't be able to go to the military and say "intelligence is good, spend trillions" - you need to be able to justify the cost.
You get better airplanes, drones, boats and missiles faster because one of the most bottlenecked inputs to improvements is intelligence.
If the US government were willing to throw cash at the wall to improve intelligence across the board, they could raise service members' salaries dramatically. They don't. The smart kids all go to Wall Street or Silicon Valley, although the Navy's nuclear submarine program and the NSA are able to scrape a few off to fill a few niche roles.
That's not to say that I disagree with you (and obviously the military is very interested in and will use artificial intelligence), but it's to hammer home my point: military procurement is not about procuring the best systems, it is about finding a compromise between cost and effectiveness. If the US military were given a blank check every time the opportunity to completely dominate the world arose, we would have had no-kidding battleships in orbit for about half a century now (and unlike the question of AGI/superintelligence/etc., the question of putting a nuclear-powered, nuclear-propelled battleship in orbit is primarily an engineering one; the math is all worked out and has been "solved" for decades).
Again, because I think this bears repeating: if you go to the US government and you say "I can protect and ensure US hegemony for the foreseeable future practically guaranteed, it will just cost half a trillion dollars" there's every reason based on historical performance to think they will instead spend half a billion on "good enough" systems that can kick the can down the road. While intelligence might be different from past weapons systems, it seems unlikely that that difference will change the approach of the government procurement process (or the approach of the free market, for that matter).
I've described why other weapons systems do not have unbounded development, thus it is not special pleading.
Sure, and perhaps we can agree to disagree on the extent to which your arguments about the distinctions were persuasive. I am open to believing that intelligence is qualitatively different than other weapons systems, but I think the idea that the selection pressures (if you will) that apply to other systems won't apply to it is silly. Hopefully that distinction makes sense.
You keep doing this thing where I talk about how more powerful AI is obviously more useful for existential inter-government rivalries
I've also pointed out that traditional weapons systems that are useful for inter-government rivalries aren't subject to unbounded growth either. You've replied with special pleading for "intelligence." Now, you define intelligence as the ability to manipulate the environment to your will. (We can set aside for the moment the fact that this is an atypical definition of intelligence - it suggests that a 200 IQ paraplegic is much, much less intelligent than a newborn.)
Very well - various world governments have passed on massive intelligence advantages in the past for reasons as trivial as budgetary decisions or as silly as unfounded environmental concerns pushed by fringe ecological groups. Why will AI intelligence be different from, to pick just one obvious example, the intelligence [ability to manipulate the environment to your will] provided by nuclear power?
It is trivially better for smiting your enemies
If we go with your definition of intelligence, that is probably true. But by that definition, AI is very far from being meaningfully intelligent. On the one hand, I like your definition because it highlights one of the things I have been banging on about AI here at the Motte; on the other hand, I dislike it because it doesn't let me dissect the divergence between "good at tests" and "good at power" - two things which are often conflated and, in my opinion, should not be.
Bombs aren't developed unboundedly because increased explosion yield has a relatively low upper bound of being useful at all.
And there's no reason to think that the government wouldn't also believe that after a certain point, intelligence would either be actively harmful or not worth the extra effort required to get it.
More intelligence is simply better
A position which is asserted commonly as if needing no defense despite the fact that this does not seem to be true of our own species, at least in the evolutionary sense.
Correct.
Now, I am not saying that unbounded development is impossible, and I (would argue that I) take alignment fairly "seriously," but if you look at other weapons systems, we don't see unbounded development there. So our expectation should be that future weapons development will continue along similar lines. Not that that is the only course, but that it is the most likely course.
You could counter-argue that over a long-enough timeline improbable events become likely, which is fair enough. But of course that is true of essentially all existential risks and does not imply that existential risk from AI is especially likely relative to other existential risks.
Does that make sense?
My theory? This is your theory!
I agree that an edge in AI does not translate over to dominating your rivals.
MAD works because of second strike capabilities
You don't necessarily need second strike if you have launch on warning.
AI capabilities continuously enable dominance of your rivals.
And this is why we will never have an AI that becomes more powerful than the US government wants it to be. You see, the US has had a continuous advantage in AI technology over its enemies since the 1950s, with an escalating advantage in the 1980s. Over the past four decades of continuous advantage, it has continually dominated its rivals, and will use its continuing edge to retard hostile AI development to exactly the level of development that it deems acceptable. This also means that the US will be able to safely stop developing AI well before reaching the area where AI is dangerous, since it can simply decide to retard the progress of hostile AIs using its considerable AI capability advantage in such a way as to leave its own AI capabilities considerably more powerful. You're already living in the world with the Maxim Gun, it's just not polite to advertise it.
Or...perhaps an advantage in AI isn't everything.
Mutually assured destruction doesn't hold for AI
Sure it does. AI, as currently constituted, is more vulnerable to MAD than governmental bodies, not less.
states would like to have dominance in the area.
There's a difference between dominance in nuclear weapons and more powerful nuclear weapons.
While there absolutely were cases in the past that involved wholesale slaughter, my understanding is that a lot of "old" prehistoric-type violence was also "a PR move" - often very ritualized and fairly safe, with a focus on glory and not necessarily lethality. For instance, the point of "counting coup" was specifically not to kill the enemy. I don't really think it's correct to suggest that the "old" way of doing things was "high lethality" and we've slumped into a newer "low lethality" culture; rather, I think the type of violence that occurs depends a lot on the specifics of a culture, situation, technology level, etc.
This I think is absolutely correct (and insightful).
See, I'm just not sure this hasn't happened. (I'm also not sure that's really correct of new religions, but that's a different question.) The violence might involve more ritual and more PR and be less lethal, but I don't think it's correct to say that either the BLM-era protests or the recent anti-ICE protests were entirely nonviolent. And you can trace the violent strain in contemporary American left-wing thinking back further, at least to the extremely violent 1970s "Days of Rage" if not before.