FCfromSSC

Nuclear levels of sour

34 followers   follows 3 users   joined 2022 September 05 18:38:19 UTC

No bio...

User ID: 675

Here you go. Or perhaps that's not a "serious manner"? Do you disagree that the BBC and the social consensus it represents are deeply hostile to America's Red Tribe?

Have you ever considered that you are a little bitch?

This is not acceptable. The rest of your post is fine, but you are being deliberately inflammatory.

You have no notes either way on your moderation log. I get that you are using the insult for dramatic effect, and so I am giving you a warning. Do not post like this in the future, or you will receive a ban.

You are welcome to reject the inevitability of extinction. You are not welcome to use your rejection of extinction to claim a divine right to getting everything you want the way you want it. If you need things from other people, resources, cooperation, whatever, you have to actually negotiate for them, not declare that they must do what you want or else they're damning all humanity.

I am more worried about current power allocation than I am about hypothetical hostile super intelligent AGI. Maybe I'm wrong to think that, but given that the current AI safety alliance does not see a place in the future for me and mine anyway, it doesn't seem like I've got much of a choice.

You cannot “live within the lie” of mutual benefit through integration when integration becomes the source of your subordination.

This is straightforwardly true. The problem is that it runs the other way also. The political problem facing Red Tribe has been obvious for some time:

  • We have to win a conflict against Blue Tribe, or we will be ruled for the foreseeable future by people who hate us.
  • We have to fund our side of this conflict out of our own pockets.
  • Blue Tribe funds itself out of our tax money.
  • Blue Tribe is allied with the Blue-Tribe analogues in pretty much every Euro country, most of which are also funded to a considerable degree out of our tax money.
  • Those allied Blue-Tribe analogues have already won their tribal fight in their home countries, so completely that their operations are effectively uncontested.
  • Those Blue-Tribe analogues interfere directly in our domestic politics in ways that give our Blue Tribe considerable additional advantages.
  • Those Blue-Tribe analogues have repeatedly and obviously broken some of the rules we care about the most, and have been openly and quite effectively coordinating to help Blue Tribe break those same rules.

As the man says, integration became the source of our subordination. European governments have been actively cooperating with Blue Tribe to close the door on us and our values for at least the last decade. We have already been fighting them for at least the last decade. There is very little hope that this will change, and there is very little observable value in maintaining a situationship that will never, ever break to our advantage.

The multilateral institutions on which middle powers relied (the WTO, the UN, the COP), the architecture of collective problem solving, are greatly diminished.

Yeah, that's sort of the point, isn't it? Why do I want this "architecture of collective problem solving" stronger, when in fact a lot of the "collective problems" it "solves" appear to involve my tribe's continued existence?

I am not sure who's going to be America's ally in WWIII now.

How about we sit WWIII out? I for one am not particularly interested in seeing the sons of my friends and family fed into a droneswarm Armageddon.

Five years ago, even two years ago, it was taken as obvious that we (meaning primarily the US) were going to fight a major war with China and/or Russia. How does the above shift the probabilities, in your estimation? Do you think the crackup of the previous Rules-Based order makes an imminent fight with China less likely or more?

The obvious problem is economics. Does this end the Dollar as reserve currency? Does this crash the global economy? Are we Americans going to get super poor forever? ...I've been thinking about writing a post, collecting some of the economic predictions made here in the runup to the 2024 election, and comparing them to what's happened since, with comparison and contrast to the economic predictions about Brexit. To boil it down, I note that the economic predictions and even current assessments seem fundamentally unreliable, that the previous order seemed obviously unsustainable, and that the risk is worth it given the current trajectory.

I do not want America to rule the world, especially not if the version of "America" that rules is a Blue Tribe that has secured itself permanent unaccountable power. Even if it were my tribe ascendant, the value seems quite limited. I do not want to be subjugated by the Chinese, but I do not want to fight a major war with them either, and my assessment is that as of a year ago, pretty much everyone in this forum considered such a war to be an obvious inevitability. And for what? I do not want my country to be poorer, but I note that our previous economic model seemed to have very obvious problems that only ever got worse, and the only solution anyone could even begin to imagine was to keep doing the same things even harder, as pressure built toward an inevitable blowout.

I wanted change. This is change. It is scary and somewhat horrifying change... but it's not obvious what the alternative was supposed to be, and what seem to me to be plausible guesses seem worse.

All available evidence indicates that you and all your descendants will someday die no matter what anyone does. All available evidence indicates that humanity will go extinct, and that near-term extinction is a distinct possibility, again no matter what anyone does.

I am not building AI. I am pointing out that Yudkowsky's proposed solution seems both very unlikely to work and also very likely unnecessary for a whole host of reasons, and that there appears to me to be approximately zero reason to play along with his schemes. I am not gambling with your life, or that of your descendants. You do not get to stack theories a hundred layers high and then declare that therefore, everyone has to do what you say or be branded a villain.

I say Yudkowsky demands unaccountable power, because it is obvious that this is, in fact, exactly what he's demanding. Neither he nor you get to beg out of the entire concept of politics because you've invented a very, very scary ghost story.

Neither Yudkowsky nor you are the first humans to discover that "living" requires amassing unaccountable power. Time is not used well under a rock.

In any case, I hear Pascal also has a pretty good wager.

My determination to close off the effect zone would depend on my assessment of, first, the probability that such a lockdown could be effected, and second, the probability of apocalyptic destruction from other sources. If lockdown seems unlikely to work, and there are numerous other, similar threats besides, then it seems to me that I would do better to spend the time I have well.

Groups of humans such as the United States are able to blow up a target from so high up in the air that you can't see where the bomb was launched from. A medieval king couldn't even fathom defending against this sort of attack.

And yet, humans have figured out how to defend against this sort of attack, to the point that we decisively lost the war in Afghanistan.

If you'll allow me to quote myself:

Coin-op payphones granted, there's something to Gibsonian cyberpunk, something between an insight and a thesis, that sets his work apart from the stolid technothrillers of Clancy and company. Something along the lines of "technology is useful, not merely because they have a rock and you have a gun, but because it inherently and intractably complicates the arithmetic of power." His stories are built on a recognition that people are not in control, that our systems reliably fail, that our plans are dismayed, and that far from ameliorating these conditions, technology only accelerates them.

"AI Safety" operates off a fundamentally Enlightened axiom that chaos and entropy can, with sufficient intelligence, be controlled. I think they are wrong, for roughly the same reasons that all previous attempts to create perfect order have been wrong: reality is too complicated.

I am not arguing that AI can't kill us all. I'm pretty sure we can kill us all, and I think the likelihood of us doing so is considerable.

Yudkowsky does not want to rule you; he just wants to keep you, or anyone including himself, from amassing billions of dollars' worth of compute and using it to end humanity.

He wants to invent a new category of crime with global jurisdiction and ironclad, merciless enforcement. I am 100% on board, provided that it is me and mine given exclusive control of the surveillance and strike capabilities needed to enforce this regime. Don't worry, we'll be extremely diligent in ensuring that dangerous AI is suppressed.

It seems to me that there is a long tradition of smart people coming together and inventing new weapons and technologies that were not foreseen even in the recent past.

There's also a long tradition of smart people "foreseeing" weapons that aren't physically possible.

There's also a long tradition of smart people failing to recognize that weapons or other tech can stagnate due to basic physical laws.

"Maybe the AI will figure out how to hack the simulation" or "maybe the AI will kill us all in the same second with hypertech nanobots" are not scenarios that we can plan for in any meaningful way, but much AI safety messaging uses them as examples. They do this because they are worried about out-of-context problems, and want to handle such problems rationally. But the core problem is that out-of-context problems cannot in fact be handled rationally, because our resources are finite and the out-of-context possibility space is infinite.

They argue that Superintelligence will give the AI an unbridgeable strategic advantage, that intelligence allows unlimited Xanatos Gambits, but this doesn't in fact appear to be true. Planning involves handling variables, and it seems obvious to me that variables scale much, much faster than intelligence's capacity to solve for their combinations. And again, we can see this in the real world now, because we have superintelligent agents at home: governments, corporations, markets, large-scale organizations that exist to amplify human capabilities into the superhuman, to gather, digest and coordinate on masses of data far, far beyond what any human can process. And what we see is that complexity swamps these superintelligences on a regular basis.
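
To put toy numbers on that scaling claim (a deliberately crude sketch; real planners obviously don't search exhaustively): suppose "planning" meant evaluating the joint configurations of n interacting binary variables. That space is 2^n, so even enormous multipliers on raw evaluation capacity buy only a handful of additional variables.

```python
import math

# Toy model: a plan over n interacting binary variables has 2**n joint
# configurations. A k-fold increase in how many configurations an agent
# can evaluate extends exhaustive coverage by only log2(k) extra variables.
def extra_variables_covered(speedup: float) -> float:
    return math.log2(speedup)

print(extra_variables_covered(100))   # hundred-fold capacity: ~6.6 more variables
print(extra_variables_covered(1e9))   # billion-fold capacity: ~29.9 more variables
```

Exponential combination counts swallow multiplicative capacity gains almost immediately, which is the sense in which complexity can swamp even a very capable planner.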

And there is of course just the more mundane issue that a sufficiently advanced AI which is merely willing to give cranks the already-known ability to manufacture super weapons could be existential.

You frame this as though we are in some sort of stable environment, and AI might move us to an environment of severe risk. But it appears to me that we are already in an environment of severe risk, and AI simply makes things a bit worse. We are already living in the vulnerable world; the vulnerabilities just aren't perfectly evenly distributed yet.

Meanwhile, "AI Safety" necessarily involves amassing absolute power, and as every human knows, I myself am the only human that can be truly trusted with absolute power, though my tribal champions might be barely acceptable in the final extremity. I am flatly unwilling to allow Yudkowksy to rule me, no matter how much he tries to explain that it's for my own good. I do not believe Coherent Extrapolated Volition is a thing that can possibly exist, and I would rather kill and die than allow him to calculate mine for me.

Where do these diminishing returns kick in?

Within the human scale, at the point where Von Neumann was a functionary, where neither New Soviet Man nor the Thousand Year Reich arrived, where Technocracy is a bad joke, and where Sherlock Holmes has never existed, even in the aggregate.

Or maybe you mean the application of intelligence, in which case I'd say that, just within our current constraints, it has given us the nuclear bomb, it can manufacture pandemics, and it can penetrate and shut down important technical infrastructure.

We can do all those things. Can it generate airborne nano factories whose product causes all humans to drop dead within the same second? I'm skeptical.

It seems to me that it does, yes. If your intelligence scales a hundred-fold, but the complexity of the thing you want to do scales a billion-fold, you have lost ground, not gained it. The AI risk model is that intelligence scales faster than complexity and that hard limits don't exist; it's not actually clear that this is true, and the general stagnation of scientific progress gives some evidence of the opposite. It seems entirely possible to me that even a superintelligent AI runs into hard limits before it begins devouring the stars.

Now on the one hand, this doesn't seem like something I'd want to gamble on. On the other hand, it's obviously not my choice whether we gamble on it or not; AI safety has pretty clearly failed by its own standards, there is no particular reason to believe that "safe" AI is a thing that can even potentially exist, and we are going to shoot for AGI anyway. What will happen will happen. The question is, how should AI doomsday worries affect my own decisions? And the answer, it seems to me, is that I should proceed from the assumption that AI doomsday won't happen, because that's the branch where my decisions matter to any significant degree. I can solve neither AI doomsday nor metastable vacuum decay. Better to worry about the problems I can solve.

With an arbitrarily large amount of intelligence deployed to this end, unless there is something spooky going on in the human brain, we should expect rapid and recursive improvement.

...Or unless intelligence suffers from diminishing returns, which actually seems fairly likely.

you can make your own black powder, and your own cannons to shoot it out of.

Do you oppose the use of public resources to subsidize their lifestyle? Can you actually prevent public resources from being used to subsidize their lifestyle? Or is this just policy arbitrage, where we appeal to atomic individualism or social unity, whichever is convenient at the moment?

But in the same way that prediction markets help to reveal true beliefs, free economic markets reveal true preferences.

Would you agree that most poor people have a revealed true preference to invest most of the money they receive into credit card payments and similar fees, and that the people who receive those fees are benevolent actors working tirelessly to help such poor people live their very best life?

If not, I'm curious as to why you view the market as "revealing true preferences" in the one case and not the other.