Shrike

0 followers · follows 0 users
joined 2023 December 20 23:39:44 UTC

User ID: 2807


What's interesting to me is that de-arresting someone is a crime (obviously) that can be made to stick if you catch the people doing the de-arresting. But conspiring to de-arrest someone is presumably also a crime, and given that the laws being enforced in this context are federal laws, I imagine there's a federal conspiracy statute that can be leveraged, possibly against the coordinators even if they aren't actually participating in the "de-arrests."

I wonder if part of the goal of running these ICE operations publicly is precisely to invite this sort of behavior and then roll up as many people as possible.

Interesting link, thank you for dropping it.

From what I understand there is basically an entire playbook or script for nonviolent civil disobedience: people have figured out how to get their point across (and get the "cops arrest peaceful mom" pictures) while minimizing the odds that things actually turn violent, by peacefully surrendering to the cops, not resisting arrest, possibly even notifying the police of their intentions ahead of time, etc.

I have not followed the protests in Minnesota closely, but from what I have seen I do not think that playbook is being followed. Whether that's due to untrained "normies" turning out or to the instructions of protest coordinators, I do not know. But if these protests are being coordinated (which does seem to be true to a significant degree) and the coordinators are deliberately choosing more escalatory tactics, that's very telling in my mind.

One could also check and see if Congress signed off on something like this. And as it turns out, they signed off on a huge expansion of the ICE budget! Perhaps it was used in ways they didn't anticipate, but it's true that not only did Trump run on an immigration enforcement platform, but Congress also supported immigration enforcement.

Yes; Obama was President in 2013.

And he did so while managing not to kill any American citizens.

Technically untrue in a darkly funny way (an ICE agent shot his supervisor and was subsequently shot and killed by another ICE agent).

The Obama administration did detain and possibly wrongfully deport American citizens, as well.

That seems likely, but I could see the rationale being that you want to make a lasting impression.

The political pressure on jurisdictions that aren't cooperating with ICE might be much more relevant, though.

what about enforcing immigration laws necessitates a mass of poorly trained officers

I freely confess that I don't know what the view looks like from the inside, but I sort of suspect a lot of the way ICE is being used is to create political pressure/optics.

I wonder if it would be much more efficient (to say nothing of much less optically problematic) to just send a few guys in plainclothes to pick up each dude ID'd as illegal. I sort of suspect that "running around in camo and plate carriers" is either the idea of people in ICE who think it is cool, or the idea of admin higher-ups who think that creating a scene like that is necessary to intimidate would-be illegals and deter illegal immigration. But part of me suspects that quietly and efficiently deporting massive numbers of illegals is in its own way scarier and more deterring than these highly visible scenes, if run at high volume for a sustained period of time.

Really interested to know if there is anyone here who can speak to that though.

We have individual rights in the United States, you don't respond to societal-wide problems by violating the rights of individuals for no reason in isolated incidents.

From a purely strategic perspective, say you believe that Trump's people are a few years away from turning the US into a dictatorship. In that case, wouldn't you want to be violent before they consolidate all power, and not after?

The optics on who shoots first matter a lot, and I think people understand this intuitively. The South probably did more harm to its cause than good by shooting first during the Civil War.

Nuclear is the only real weapon system that has truly had a strongly bounded development ceiling. We're still making better missiles, airplanes, drones, boats, etc.

The military does not actually just pursue boundless development. Budgets have to be justified, and if a threat is not assessed to be present, then development will not be pursued. For this reason the government often foregoes or even loses capabilities simply over budgetary concerns. The F-22, for instance, faced budgetary scrutiny after the end of the Cold War because the Soviet threat no longer existed, and when it was procured, it was without an IR sensor (which was arguably short-sighted: they are now integrating IR pods onto the Raptor). And even that procurement decision was made because the Raptor was assessed to be a less risky, more mature design (the F-23 was superior in many respects).

Most of the ships the US has now were essentially the cheaper options: the Virginia class was a budget Seawolf, the Tico cruisers were meant to be the low end of a high-low mix that never came about, and the Burke class has been dramatically expanded in capability beyond what was originally intended (as I understand it) rather than spending more money to build a ground-up capability. The retirement of the F-14 left the Navy without a fleet interceptor, and it's only recently that the Rhino has been able to pick up the slack with improved AMRAAMs and the air-launched Standard. Even now the Rhino is likely dramatically less capable than an upgraded F-14 would have been (at least in payload, range, and sensor power, although the F/A-18 has a lower RCS).

Basically, the general rule in the military is not to procure stuff simply because it is good. You procure stuff to defeat a very specific threat. You won't be able to go to the military and say "intelligence is good, spend trillions" - you need to be able to justify the cost.

You get better airplanes, drones, boats and missiles faster because one of the most bottlenecked inputs to improvements is intelligence.

If the US government were willing to throw cash at the wall to improve intelligence across the board, they could raise service members' salaries dramatically. They don't. The smart kids all go to Wall Street or Silicon Valley, although the Navy's nuclear submarine program and the NSA are able to scrape a few off to fill a few niche roles.

That's not to say that I disagree with you (and obviously the military is very interested in and will use artificial intelligence), but it's to hammer home my point: military procurement is not about procuring the best systems, it is about finding a compromise between cost and effectiveness. If the US military were given a blank check every time the opportunity to completely dominate the world arose, we would have had no-kidding battleships in orbit for about half a century now (and unlike the question of AGI/superintelligence/etc., the question of putting a nuclear-propelled battleship in orbit is primarily an engineering one; the math is all worked out and has been "solved" for decades).

Again, because I think this bears repeating: if you go to the US government and you say "I can protect and ensure US hegemony for the foreseeable future practically guaranteed, it will just cost half a trillion dollars" there's every reason based on historical performance to think they will instead spend half a billion on "good enough" systems that can kick the can down the road. While intelligence might be different from past weapons systems, it seems unlikely that that difference will change the approach of the government procurement process (or the approach of the free market, for that matter).

I've described why other weapons systems do not have unbounded development, thus it is not special pleading.

Sure, and perhaps we can agree to disagree on the extent to which your arguments about the distinctions were persuasive. I am open to believing that intelligence is qualitatively different than other weapons systems, but I think the idea that the selection pressures (if you will) that apply to other systems won't apply to it is silly. Hopefully that distinction makes sense.

You keep doing this thing where I talk about how more powerful AI is obviously more useful for existential inter-government rivalries

I've also pointed out that traditional weapons systems that are useful for inter-government rivalries aren't subject to unbounded growth either. You've replied by special pleading for "intelligence." Now, you define intelligence as the ability to manipulate the environment to your will. (We can set aside for the moment the fact that this is an atypical definition of intelligence; it suggests that a 200 IQ paraplegic is much, much less intelligent than a newborn.)

Very well. Various world governments have passed on massive intelligence advantages in the past, for reasons as trivial as budgetary decisions or as silly as unfounded environmental concerns pushed by fringe ecological groups. Why will AI intelligence be different from, to pick just one obvious example, the intelligence [ability to manipulate the environment to your will] provided by nuclear power?

It is trivially better for smiting your enemies

If we go with your definition of intelligence, that is probably true. But by that definition, AI is very far from being meaningfully intelligent. On the one hand, I like your definition because it highlights one of the things I have been banging on about AI here at the Motte; on the other hand, I dislike it because it doesn't let me dissect the divergence between "good at tests" and "good at power," two things which are often conflated and, in my opinion, should not be.

Bombs aren't developed unboundedly because increased explosion yield has a relatively low upper bound of being useful at all.

And there's no reason to think that the government wouldn't also believe that after a certain point, intelligence would either be actively harmful or not worth the extra effort required to get it.

More intelligence is simply better

A position which is commonly asserted as if it needed no defense, despite the fact that it does not seem to be true of our own species, at least in the evolutionary sense.

Correct.

Now, I am not saying that unbounded development is impossible, and I (would argue that I) take alignment fairly "seriously." But if you look at other weapons systems, we don't see unbounded development there. So our expectation should be that future weapons development will continue along similar lines. Not that that is the only course, but that it is the most likely course.

You could counter-argue that over a long-enough timeline improbable events become likely, which is fair enough. But of course that is true of essentially all existential risks and does not imply that existential risk from AI is especially likely relative to other existential risks.

Does that make sense?

My theory? This is your theory!

I agree that an edge in AI does not translate over to dominating your rivals.

MAD works because of second strike capabilities

You don't necessarily need second strike if you have launch on warning.

AI capabilities continuously enable dominance of your rivals.

And this is why we will never have an AI that becomes more powerful than the US government wants it to be. You see, the US has had a continuous advantage in AI technology over its enemies since the 1950s, with an escalating advantage in the 1980s. Over the past four decades of continuous advantage, it has continually dominated its rivals, and will use its continuing edge to retard hostile AI development to exactly the level of development that it deems acceptable. This also means that the US will be able to safely stop developing AI well before reaching the area where AI is dangerous, since it can simply decide to retard the progress of hostile AIs using its considerable AI capability advantage in such a way as to leave its own AI capabilities considerably more powerful. You're already living in the world with the Maxim Gun, it's just not polite to advertise it.

Or...perhaps an advantage in AI isn't everything.

Mutually assured destruction doesn't hold for AI

Sure it does. AI, as currently constituted, is more vulnerable to MAD than governmental bodies, not less.

states would like to have dominance in the area.

There's a difference between dominance in nuclear weapons and more powerful nuclear weapons.

Nuclear weapons are useful for intergovernmental conflict, but the trendline in government development, deployment, and research has cut away from more powerful nuclear weapons.

I think most governments have similar incentives for, well, aligning AI to be powerful enough to succeed at its tasks, but not so powerful as to be uncontrollable.

I'll let you argue the point with other people who have a stronger opinion on what textbook narcissism is.

Personally, I don't really think it matters (except possibly for his personal well-being) - his behavior is either good or bad, helpful or unhelpful, honorable or dishonorable, etc. Whether or not he meets some diagnostic criteria is of secondary importance. I don't, for the record, tend to agree with a lot of the way he's handled the Greenland affair.

But I also think most people forget Ellsberg's warning to Kissinger:

you will feel like a fool for having studied, written, talked about these subjects, criticized and analyzed decisions made by presidents for years without having known of the existence of all this information, which presidents and others had and you didn’t, and which must have influenced their decisions in ways you couldn’t even guess.

But of course there's a caution there not only for the outsider (us, or most of us I reckon), but also for the insider:

The danger is, you’ll become something like a moron. You’ll become incapable of learning from most people in the world, no matter how much experience they may have in their particular areas that may be much greater than yours.

"Treaty on Russian–Ukrainian Mutual Peace"

I got a chuckle out of this (it took me a second).

My overarching concern has been that his 'movement' is so tied up in his ego it isn't clear if it CAN move on to anyone else once he's out of office, and that will be a major problem if there's no clear successor.

I think this is a huge potential issue, and I will note that it's an issue regardless of whether or not Trump is a good person or a bad person or an evil and vile person or a sort of mediocre person. Strong personalities are not a substitute for strong institutions.

I think it is dangerously tempting for the right to overestimate their victories given that a few strong personalities have swung to their side. Don't get me wrong, it is always good to have great men on your side. But even the best kings pass away.

I've got to say, sometimes it is pretty funny being on a board where two of the abiding topics of concern are, distilled down a bit, "high IQ people being wiped out by lower-IQ people" and "high IQ AI wiping out lower-IQ people."

Anyway, there's obviously not a direct correlation between intelligence and existential risk. Creatures with an intelligence of 0 on a scale of 0 to 100 are in a far less precarious position, existentially speaking, than creatures at 100 (us). Intelligence is only an imperfect proxy measurement for power, and power is what generates existential risk.

Middle ground plateaus aren't particularly likely

I don't think "the government decides to pump the brakes on AI development after it becomes powerful enough to control the populace but before it becomes too powerful to be controlled" is a particularly unlikely outcome (accepting for the sake of argument that such an uber-powerful AI is possible).

The man just wrote a public angry letter to the PM of Norway because he's mad about the Nobel prize committee

It was not a public letter, unless there was another letter I was unfamiliar with.