FCfromSSC
Neither Yudkowsky nor you are the first humans to discover that "living" requires amassing unaccountable power. Time is not used well under a rock.
In any case, I hear Pascal also has a pretty good wager.
My determination to close off the effect zone would depend on my assessment of the probabilities: first, that such a lockdown could be effected, and second, that apocalyptic destruction arrives from other sources. If lockdown seems unlikely to work, and there are numerous other, similar threats, then it seems to me I might better spend what time I have well.
Groups of humans such as the United States are able to blow up a target from so high up in the air that you can't see where the bomb was launched from. A medieval king couldn't even fathom defending against this sort of attack.
And yet, humans have figured out how to defend against this sort of attack, to the point that we decisively lost the war in Afghanistan.
If you'll allow me to quote myself:
Coin-op payphones granted, there's something to Gibsonian cyberpunk, something between an insight and a thesis, that sets his work apart from the stolid technothrillers of Clancy and company. Something along the lines of "technology is useful, not merely because they have a rock and you have a gun, but because it inherently and intractably complicates the arithmetic of power." His stories are built on a recognition that people are not in control, that our systems reliably fail, that our plans go awry, and that far from ameliorating these conditions, technology only accelerates them.
"AI Safety" operates off a fundamentally Enlightened axiom that chaos and entropy can, with sufficient intelligence, be controlled. I think they are wrong, for roughly the same reasons that all previous attempts to create perfect order have been wrong: reality is too complicated.
I am not arguing that AI can't kill us all. I'm pretty sure we can kill us all, and I think the likelihood of us doing so is considerable.
Yudkowsky does not want to rule you; he just wants to keep you, or anyone including himself, from amassing billions of dollars' worth of compute and using it to end humanity.
He wants to invent a new category of crime with global jurisdiction and ironclad, merciless enforcement. I am 100% on board, provided that it is me and mine who are given exclusive control of the surveillance and strike capabilities needed to enforce this regime. Don't worry, we'll be extremely diligent in ensuring that dangerous AI is suppressed.
It seems to me that there is a long tradition of smart people coming together and inventing new weapons and technologies that were not foreseen even in the recent past.
There's also a long tradition of smart people "forseeing" weapons that aren't physically possible.
There's also a long tradition of smart people failing to recognize that weapons or other tech can stagnate due to basic physical laws.
"Maybe the AI will figure out how to hack the simulation" or "maybe the AI will kill us all in the same second with hypertech nanobots" are not scenarios that we can plan for in any meaningful way, but much AI safety messaging uses them as examples. They do this because they are worried about out-of-context problems, and want to handle such problems rationally. But the core problem is that out-of-context problems cannot in fact be handled rationally, because our resources are finite and the out-of-context possibility space is infinite.
They argue that Superintelligence will give the AI an unbridgeable strategic advantage, that intelligence allows unlimited Xanatos Gambits, but this doesn't in fact appear to be true. Planning involves handling variables, and it seems obvious to me that variables scale much, much faster than intelligence's capacity to solve for their combinations. And again, we can see this in the real world now, because we have superintelligent agents at home: governments, corporations, markets, large-scale organizations that exist to amplify human capabilities into the superhuman, to gather, digest and coordinate on masses of data far, far beyond what any human can process. And what we see is that complexity swamps these superintelligences on a regular basis.
And there is of course the more mundane issue that a sufficiently advanced AI which is merely willing to hand cranks the already-known means of manufacturing superweapons could be existential.
You frame this as though we are in some sort of stable environment, and AI might move us to an environment of severe risk. But it appears to me that we are already in an environment of severe risk, and AI simply makes things a bit worse. We are already living in the vulnerable world; the vulnerabilities just aren't perfectly-evenly distributed yet.
Meanwhile, "AI Safety" necessarily involves amassing absolute power, and as every human knows, I myself am the only human that can be truly trusted with absolute power, though my tribal champions might be barely acceptable in the final extremity. I am flatly unwilling to allow Yudkowksy to rule me, no matter how much he tries to explain that it's for my own good. I do not believe Coherent Extrapolated Volition is a thing that can possibly exist, and I would rather kill and die than allow him to calculate mine for me.
Where do these diminishing returns kick in?
Within the human scale, at the point where Von Neumann was a functionary, where neither New Soviet Man nor the Thousand Year Reich arrived, where Technocracy is a bad joke, and where Sherlock Holmes has never existed, even in the aggregate.
Or maybe you mean the application of intelligence, in which case I'd say that just within our current constraints it has given us the nuclear bomb, it can manufacture pandemics, and it can penetrate and shut down important technical infrastructure.
We can do all those things. Can it generate airborne nano factories whose product causes all humans to drop dead within the same second? I'm skeptical.
It seems to me that it does, yes. If your intelligence scales a hundred-fold, but the complexity of the thing you want to do scales a billion-fold, you have lost progress, not gained it. The AI risk model is that intelligence scales faster than complexity and that hard limits don't exist; it's not actually clear that this is the case, and the general stagnation of scientific progress gives some evidence that the opposite is the case. It seems entirely possible to me that even a superintelligent AI runs into hard limits before it begins devouring the stars.
Now on the one hand, this doesn't seem like something I'd want to gamble on. On the other hand, it's obviously not my choice whether we gamble on it or not; AI safety has pretty clearly failed by its own standards, there is no particular reason to believe that "safe" AI is a thing that can even potentially exist, and we are going to shoot for AGI anyway. What will happen will happen. The question is, how should AI doomsday worries affect my own decisions? And the answer, it seems to me, is that I should proceed from the assumption that AI doomsday won't happen, because that's the branch where my decisions matter to any significant degree. I can solve neither AI doomsday nor metastable vacuum decay. Better to worry about the problems I can solve.
With an arbitrarily large amount of intelligence deployed to this end, then unless there is something spooky going on in the human brain, we should expect rapid and recursive improvement.
...Or unless intelligence suffers from diminishing returns, which actually seems fairly likely.
You can make your own black powder, and your own cannons to shoot it out of.
Do you oppose the use of public resources to subsidize their lifestyle? Can you actually prevent public resources from being used to subsidize their lifestyle? Or is this just policy arbitrage, where we appeal to atomic individualism or social unity, whichever is convenient at the moment?
But in the same way that prediction markets help to reveal true beliefs, free economic markets reveal true preferences.
Would you agree that most poor people have a revealed true preference to invest most of the money they receive into credit card payments and similar fees, and that the people who receive those fees are benevolent actors working tirelessly to help such poor people live their very best life?
If not, I'm curious as to why you view the market as "revealing true preferences" in the one case and not the other.
That seems like an extremely bad question to ask. Do you interrogate all your moral intuitions off a similar framing, starting with what you wish was true and working from there? And note that you are treating "poor" and "unfortunate" as philosophical primitives, states that simply exist ex nihilo.
Suppose I assert that all humans deserve justice. How does this interact with your "how much would I want the poor and unfortunate to get, in a vacuum where it's no skin off mine or anyone else's nose"? Because my understanding is that what some humans deserve from justice is swift, merciless death.
The specific speech that brought the question to mind was Alexander's purported speech to his mutinous army at Opis. A neat parallel to your own choice, it seems.
I feel both these examples are quite distant, and that I have seen and heard many examples of leaders or prominent men being noted for addressing hostile audiences in circumstances of significant danger, and nonetheless persuading the audience by their appeal. Unfortunately, I can't recall them; as with our two examples here, it would be interesting to see what elements of shared culture people appeal to under duress, and assess whether those elements are meaningfully shared under current conditions.
The point is that happiness does not derive from material circumstances, which undercuts the premise of the argument that all people "deserve to be happy", as contrasted with "every person deserves to be as happy and safe as they can accomplish themselves". I'm not sure the latter is the precise wording I'd nail my flag to, but the former seems profoundly untrustworthy and dangerous.
My concern is that WhiningCoil does not recognize that all else being equal it is always good, rather than neutral, for sentient beings to have nice things.
It seems to me more likely that they recognize that all else is, in fact, never equal, never has been, and likely never will be.
Solzhenitsyn figured out how to be happy in a death camp. Some Ukrainians in the Holodomor figured out how to be happy while they and their families were intentionally starved to death. These apparent historical facts appear to me to support @WhiningCoil's model of happiness, and undermine the one you are presenting.
This world. 14th Amendment, baby. You don’t get to pick one line from the Constitution and ignore the rest.
Why not? Everyone else does, and whatever objections you and I might muster have clearly failed.
To be clear, I do not endorse the assessment described above. I do not believe that "American" is a boundary that can be effectively drawn on racial or ethnic lines. Unfortunately, that agreement is downstream from my assessment that "American" is not a boundary that can be effectively drawn at all.
I think this is a pretty good effort at defining "American culture", and do not believe that I could do better.
Suppose you are confronted by an angry and possibly violent mob of Americans. Which of the features you have listed would you appeal to in attempting to talk them down and convince them to disperse? That is to say, which of these features provide serious, reliable traction on an interpersonal level?
Talking down angry mobs is something notable leaders have needed to do many times throughout history, and generally "culture" is what has allowed them to do it. Do you believe you are describing that sort of culture above?
Good lord no it didn't.
I watched it happen. I lived through it happening. The GWOT drove me into the Blue Tribe for a decade, and I only returned when the existing Red establishment was driven out in turn. Republican leaders from the 2000s now mostly vote Democrat.
As for the destruction of America...
If anything, since it became a bipartisan thing to criticize it ought to be a unifying factor, right?
We don't have to appeal to theory when we can observe what actually happened. The GWOT burned the Reagan coalition to the ground and supercharged progressivism. Progressive overreach has, in turn, destroyed the nation. The Constitution is dead. Our system of government is pretty clearly dead. Tribal values are now mutually-incoherent and -intolerable, and the stress of tribal conflict is blowing out what institutions remain to us one after another. Reds and blues hate each other, wish to harm each other, and are gleefully seeking escalation to subjugate each other. This process takes time, but the arc is not ambiguous, and neither is where it leads. At some point in the next few years, it will be Blue Tribe's turn to wield federal power, and Red Tribe's turn to resist it, and at that point, if not sooner, things will get significantly worse. It is insanity at this point to think either that the tribes are going to coordinate a halt to the escalations, or that our society can survive another decade of accumulated escalations. The peace is not going to last.
But also, intervening in Iran doesn't have to involve an invasion and occupation. That is learning.
As we have previously discussed, Libya also did not involve an invasion and occupation.
You appear to be assuming that the general population of Iran is some sort of generic huddled mass, yearning to breathe free, that the problem is just the mullahs, and that if we sweep the mullahs out of the way Iran magically transforms into Michigan. But Iran is not Michigan; at this point, even Michigan is not Michigan. Iran's current government are not alien space invaders, but rather Iranians who emerged from the population of Iran, and are thus at least somewhat representative of the sort of leadership that population produces. The Shah was an Iranian leader who operated torture dungeons. He was overthrown by Iranian Muslim communists(?), who... then also operated torture dungeons. Why do you believe that radical change in the government will produce a totally new sort of government, when it did not do so previously?
Your confidence that an intervention likely leads to a better situation for all involved is contradicted by recent experience, which you are dismissing out of hand. I have no reason to believe that "this time, it will be different", because it has not in fact been different any of the previous times. I do not care that the mobs are crying out for our aid; mobs cry out for lots of things when such appeals are obviously in their immediate interest, but that does not mean what they are crying out for today is a reliable indicator of their future preferences, and intervention has a grim track record.
I am not questioning whether we can bomb a second-tier power. I am questioning whether bombing will do any good, with the full knowledge that if I and people like me consent to bombing, and things go sideways, next we will be arguing over whether we should bomb them more, or maybe send just a few troops, and then just a few more. I note that the US and Israel "dominated a second-tier power" less than a year ago, and yet here you are, demanding we bomb them again. Did we not dominate them hard enough last time? If so, why are you claiming that this current domination will succeed where the previous domination failed?
I think any objective observer who isn't suffering from Iraq Syndrome or a committed isolationist can see this is a good case for it.
Any observer who does not suffer from "Iraq Syndrome" is not thinking objectively. The GWOT destroyed the Republican party as an institution, and arguably destroyed America as a nation. It was ruinously expensive by every possible measure, for little to no perceivable benefit. Those responsible have taken no accountability and have suffered no consequences, and there is not even the slightest reason to be confident that Lessons have been Learned. And that was before we entered a fundamental revolution in military affairs, wherein it is questionable whether our comically expensive military is actually capable of surviving, much less dominating.
You should not need to stick your dick in a blender three times (four? Five?) to learn not to do that, but apparently some people need to go all the way down to the angriest inch.
What does the Alternate history look like if America stays entirely out of World War I? It's hard for me to imagine things working out worse than they did in our timeline. Is it enough change that WWII doesn't happen, or ends up as the West vs the Commies?
Again, I think there's a strong case to be made that our current position is pretty similar to 1910 or so, for a whole variety of reasons. I think we should try to lean hard into isolationism this time around, not least from observing how WWI and WWII went for the sclerotic, unwieldy empires that rolled into them. Modelling our current choices off WWII history is like a 55-year-old morbidly-obese former athlete with a bad back and a bum knee thinking he can throw down like he did when he was an 18-year-old in peak condition. We should be considering our future more from the perspective of Tsarist Russia or the Austro-Hungarian empire, not from that of a vital, highly cohesive, highly motivated state gifted with secure borders and unlimited, untapped natural resources.
I suppose you believe we should have stayed out of WWII as well.
The standard narrative on WWII is vulnerable to the complication that in defeating one set of horrifyingly evil tyrannies, we made alliance with and gave away half the planet to another set of at least equally-horrifyingly-evil tyrannies, who we then could not prevent from decimating, immiserating and enslaving the half of the planet so ceded, which we then had to spend the next two generations containing at ruinous cost, and whose ideology fatally poisoned our own nations. I think it's pretty easy to be happy that we crushed Hitler and the Imperial Japanese, while still noting that the outcome was not something we should see as the way we want to do business going forward.
Further, World War II appears to be a pretty contiguous outgrowth of World War I, where we "made the world safe for democracy" in a way that appears to have laid the groundwork for incalculable ruin over the subsequent century.
I think we are, at this moment, enjoying the twilight of our own Belle Epoque. The lesson I draw from the last century is that fighting to impose control over the whole world is utterly unworkable, hubristic and ruinous. We have more than enough problems at home; we cannot afford to fix all the problems of the wider world.
All available evidence indicates that you and all your descendants will someday die no matter what anyone does. All available evidence indicates that humanity will go extinct, and that extinction being soon is a distinct possibility, again no matter what anyone does.
I am not building AI. I am pointing out that Yudkowsky's proposed solution seems both very unlikely to work and also very likely unnecessary for a whole host of reasons, and that there appears to me to be approximately zero reason to play along with his schemes. I am not gambling with your life, or that of your descendants. You do not get to stack theories a hundred layers high and then declare that therefore, everyone has to do what you say or be branded a villain.
I say Yudkowsky demands unaccountable power, because it is obvious that this is, in fact, exactly what he's demanding. Neither he nor you get to beg out of the entire concept of politics because you've invented a very, very scary ghost story.