To_Mandalay

1 follower · follows 0 users · joined 2022 September 06 04:16:49 UTC
Verified Email
User ID: 811


Because if so, I'd bet there's a good chance he can come up with a sequence of words that he can utter that would either cause you not to want to throw him in the piranha tank

I highly doubt it.

My problem is not so much the "AI jedi mind tricks its way out of the box" idea as the "AI bootstraps itself to Godhood in a few hours" idea.

There are a lot of cruxes in this scenario. Do the humans have no ability to vet these DNA sequences for themselves? Do they bear no suspicious similarities to any currently existing viruses? How are the AI waifus brainwashing these labworkers into opening a vial of deadly virus? Is everyone in the lab taking orders directly from the AI?

and each generation built off the last.

Not really, at least until very recently. The life experience of the average human being was just about static for thousands of years before the industrial revolution. Humans may have become a little more intelligent over the centuries but they didn't suddenly become several hundred times more intelligent c. 1750.

Do you think it matters if it takes the malign AGI a day or a century to dismantle the Solar System?

Yes I think it matters if I die tomorrow or in 100 years.

Infect 20 people and send them on impromptu business trips to every major populated landmass.

How does the AI infect these twenty people and how does it send them on these trips?

progress there is, to some extent, made by 'smart people thinking, scribbling, and talking'.

A lot rides on "to what extent?" It's not clear to me that if you just 'sped up' the brains of top mathematicians or physicists by 10x or 50x or whatever it would actually cause a commensurate explosion in scientific breakthroughs.

It's not the idea of a supervirus I have a problem with so much as the idea that once AI reaches human level it will be able to conceive, manufacture, and efficiently distribute such a supervirus in short order.

To an extent, it's just 'do what we're doing now, but better and faster'.

If you want to do something 'better' or 'faster' you have to do it differently in some way from how it was done before. If you just do the same thing the same old way, it won't be any better or faster. So an intelligence would have to make war, do politics, run economies, etc. differently than humans do, and it's not clear that "just be smarter bro" instantly unlocks those 'different' and scarily efficient ways of doing so.

This question is difficult to answer empirically, but the only real way to try would be to look at historical conflicts, where it's far from clear that the 'smarter' side always wins. Unless you define 'smarter' tautologically as 'the side that was able to win.'

To vastly outclass humans in 'technological development, politics, economic productivity, war, and general capability' I think an AI would actually need to have an advantage in every one of those domains.

Also, humans have been a little bit smarter than monkeys for a couple hundred thousand years at least, and yet we didn't go to the moon until my dad was twenty years old. It's clear that just being a little smarter than monkeys doesn't mean you're going to the moon next Tuesday, there's something more to it than that. Likewise, being a little bit smarter than humans doesn't necessarily mean you're going to be disassembling the solar system for atoms tomorrow.

However, if instead of the 200 IQ genius, you get something like a full civilization made of Von Neumann geniuses, thinking at 1000x human-speed (like GPT does) trapped in the box, would you be so sure in that case?

Well, I don't know. Maybe? I must admit I have no idea what such a thing would look like. My problem isn't necessarily the ease or difficulty of boxing an AI in particular, but more generally the assumption in these discussions that any given problem yields to raw intelligence at some point or another, and that we should therefore expect a slightly superhuman AI to easily boost itself to god-like heights within a couple of seconds/months/years.

Like here, you say, paraphrasing, "a 200 IQ intelligence probably couldn't break out of the box, but what about a 10,000 IQ AI?" It seems possible or even likely to me that there are some problems for which just "piling on" intelligence doesn't really do much past a certain point. If you take Shakespeare as a baby, and have him raised in a hunter-gatherer tribe rather than 16th-century England, he's not going to write Hamlet, and in fact will die not even knowing there is such a thing as written language, same as everybody else in his tribe. Shoulders of giants and all that. Replace "Shakespeare" with "Newton" and "Hamlet" with "the laws of motion" if you like.

I'm not convinced there is a level of intelligence at which an intelligent agent can easily upgrade itself to further and arbitrary levels of intelligence.

(As a caveat, I have no actual technical experience with AI or programming, and can only discuss these things on a very abstract level like this. So you may not find it worthwhile engaging with me further, if my ignorance becomes too obvious.)

Left-wing anti-patriotism is a pretty old phenomenon, though I'm not generally very patriotic myself so I don't view it as a really bad thing.

During the Spanish Civil War "¡Viva España!" was a strictly fascist battle-cry and might have gotten you shot on the left-wing side. The Bolsheviks very early on were openly contemptuous of Russian national identity, and the USSR was meant to be the nucleus of a world socialist state, with its localization in Russia purely incidental (notice that the very term 'USSR' contains no geographical identifiers). This changed in later years with the USSR being hollowed out and worn as a skin suit by Russian Empire II.

I am someone with little to no technical know-how, but my intuitive sense, having played around a little with all these models, is that the leap from 3 to 4 didn't seem nearly as massive as some of last winter's hype would have suggested.

Every discussion I've ever had with an AI x-risk proponent basically goes like

"AI will kill everyone."

"How?"

"[sci-fi scenario about nanobots or superviruses]"

"[holes in scenario]"

"well that's just an example, the ASI will be so smart it will figure something out that we can't even imagine."

Which kind of nips discussion in the bud.

I'm still skeptical about the power of raw intelligence in a vacuum. If you took a 200 IQ big-brain genius, cut off his arms and legs, blinded him, and then tossed him in a piranha tank I don't think he would MacGyver his way out.

This comment has made me think a bit harder about my assumptions. Perhaps such a spiteful disposition is more common than I had previously believed.

Okay, but my only point is that I don't think it makes a real difference for better or worse whether China creates AGI or whether the US does, whereas a lot of people think it does make a real difference.

he also won't create a paradise with an AGI button.

You sure? It's not like he'd have anything to lose. Creating a paradise for everyone wouldn't detract from the slice of paradise available to Putin and his buddies. Would he really decline out of sheer spite?

«About 1.2% of U.S. adult men and 0.3% to 0.7% of U.S. adult women are considered to have clinically significant levels of psychopathic traits».

But it has to be a special kind of psychopath. Psychopathy only implies a lack of empathy and an antisocial personality. Such a person might deny paradise for others for personal gain, but they would have no reason to do so out of sheer spite, unless they were also particularly sadistic psychopaths.

I think the problems of such a future age would be so divorced from those of the present day that it would be difficult to predict, from their present-day positions and motives and histories, whether Sam Altman or Xi Jinping or anyone else would be particularly likely to abuse the keys to the kingdom.

If the AI is so smart couldn't it easily disempower humanity without even going through the effort of killing us all?

Lesser AI systems, yes. But superintelligence, if both the doomers and the optimists are to be believed, will be so powerful that it will be capable of creating a paradise, in which case it doesn't matter who gets it, because that's what every human on earth would ask it to do, with the exception of a few lunatics who are vastly unlikely to be the ones in a position to make the decision. Or it will kill everyone because we couldn't figure out how to get it to follow instructions, in which case it doesn't matter who gets it either.

I also find the "WHAT IF CHINA/RUSSIA/ETC. GET THERE FIRST?" arguments extremely silly.

With the exception of a tiny number of particularly unhinged sadistic psychopaths (the number is probably roughly epsilon), the vast, vast majority of people are going to press the "create paradise" button, not something so parochial as the "make my country hegemonic forever" button or the "kill the ethnic group I don't like" button. Even people who are, right now, merciless hardnosed Machiavellians would press the paradise button, since they could do so without any trade-off for themselves or their in-group.

what is an anti-AI US going to do about it?

Start a nuclear war, according to Yud (in case the OP was vague I absolutely don't agree that "create totalitarian world government to stop AI" is in any way a good idea). Otherwise the US would have to convince Chinese leadership that AI research is tantamount to pressing a suicide button.

Was a bit surprised to see this hadn't been posted yet, but yesterday Yudkowsky wrote an op-ed in TIME magazine where he describes the kind of regime that he believes would be necessary to throttle AI progress:

https://archive.is/A1u57

Some choice excerpts:

Many researchers working on these systems think that we’re plunging toward a catastrophe, with more of them daring to say it in private than in public; but they think that they can’t unilaterally stop the forward plunge, that others will go on even if they personally quit their jobs. And so they all think they might as well keep going. This is a stupid state of affairs, and an undignified way for Earth to die, and the rest of humanity ought to step in at this point and help the industry solve its collective action problem.

The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth. If I had infinite freedom to write laws, I might carve out a single exception for AIs being trained solely to solve problems in biology and biotechnology, not trained on text from the internet, and not to the level where they start talking or planning; but if that was remotely complicating the issue I would immediately jettison that proposal and say to just shut it all down.

Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for anyone, including governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

If its presence in the CW thread needs justifying: well, it was published in a major magazine, and the kinds of policy proposals set forth would certainly ignite heated political debate were they ever to be seriously considered.

"Yudkowsky airstrike threshold" has already become a minor meme on rat and AI twitter.

So we have one argument from the past about one group of people that you say was wrong

I didn't say it was wrong, assman did, more or less:

What makes his thesis “Bio”leninism is that in the 19th-20th century, society was less egalitarian and there were people being oppressed who would have otherwise been successful without these barriers which made socialism/communism attractive to a wider group of people than In current year, when all de jure discrimination is gone.

It would be surprising if all the previous claims of the biological inferiority of underclass groups turned out to be false, but this latest one turned out to be correct.

If you want to say, "the underclass was inferior back then, and they still are" that is a more consistent position.

My position is that it's silly to say "the underclass back then wasn't really biologically inferior, but now they are."

What makes his thesis “Bio”leninism is that in the 19th-20th century, society was less egalitarian and there were people being oppressed who would have otherwise been successful without these barriers which made socialism/communism attractive to a wider group of people than In current year, when all de jure discrimination is gone.

If you read something like Lothrop Stoddard's "Menace of the Underman" you find the exact same argument: that revolutionary socialist movements are drawn from the resentful, biologically inferior underclass (what Stoddard calls the revolt of the "hand against the head"). It was a fairly common far-right line of thought that industrial workers, the primary base of support for Bolshevism and other such movements, were in fact hereditary 'undermen.' I find it a bit silly to say, "okay, but now the underclass really is biologically inferior."