This is where we get into the bad. In diagnosing what was going wrong with the attempted fix, it got allllll into a mess of causes that were actually pretty low probability. It suggested permissions issues; it suggested problems with registry entries. A couple of them were low risk and, at the time, seemed like they could be plausibly related, and I did mess with a couple of things. Others were the ugly. No, Mr. Bot, I am not going to just delete that registry value (especially after I did a little non-LLM side research on what that registry value actually does).

In the end, when I told it that I was balking on doing what it wanted me to do, it suggested that I could, in the meantime, do one of the standard procedures in a different way. Of course, it thought that doing this would just be a step toward me ultimately having to delete that registry value. But I figured trying this alternate procedure at the very least couldn't hurt, and indeed, it helped by giving me an actual error code!

The LLM thankfully helped me decode it (likely faster than a Google search), which allowed me to adjust my fix. This was actually the key step, after which I was able to understand what I think was going on and manage later hiccups. Unfortunately, the LLM didn't grasp this. It was still set on, "Great! Now you're ready to delete registry values!" Sigh.
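For anyone who ends up in the same spot: before letting any bot talk you into deleting a registry value, at least read it and export a backup so the change is reversible. A minimal sketch of that, assuming Python on Windows; the key path and value name below are placeholders, not the actual ones from this story:

```python
# Minimal sketch: read and back up a registry value before even considering
# deleting it. KEY_PATH and VALUE_NAME are placeholders, not the real ones.
import subprocess
import winreg  # Windows-only standard library module

KEY_PATH = r"SOFTWARE\ExampleVendor\ExampleApp"  # placeholder key
VALUE_NAME = "ExampleSetting"                    # placeholder value name

# Read the current value so you know exactly what you'd be destroying.
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    value, value_type = winreg.QueryValueEx(key, VALUE_NAME)
    print(f"{VALUE_NAME} = {value!r} (registry type {value_type})")

# Export the whole key to a .reg file so any change can be rolled back.
subprocess.run(
    ["reg", "export", rf"HKLM\{KEY_PATH}", "registry-backup.reg", "/y"],
    check=True,
)
```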

There are some LLM fundamentals that aren't taught but maybe should be. One of them is that if you even sniff that the LLM might have strayed an inch in the wrong direction, then you need to start a fresh-context chat. In fact, even if things go well, once you've moved through a few steps of the process you ought to start a fresh chat. Always be starting a fresh chat.
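If you're driving an LLM through an API rather than the web UI, the same habit just means not reusing the message list between sub-tasks. A minimal sketch of the idea, assuming the OpenAI Python client (openai>=1.0); the model name, prompts, and helper function are placeholders:

```python
# Minimal sketch: give each sub-task a fresh context instead of one long chat.
# Assumes the OpenAI Python client (openai>=1.0); the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def fresh_chat(task: str, context: str = "") -> str:
    """Run one sub-task in a brand-new conversation with no accumulated history."""
    messages = [
        {"role": "system", "content": "You are a careful troubleshooting assistant."},
        {"role": "user", "content": f"{context}\n\nTask: {task}".strip()},
    ]
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

# Each step gets only the distilled facts it needs, not the whole meandering thread.
summary = fresh_chat("Summarize these symptoms and the one confirmed error code.")
plan = fresh_chat("Suggest the three most likely causes, least risky first.", context=summary)
```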

I guess I don't really understand the general rule you're trying to make. Nuclear is the only real weapon system that has truly had a strongly bounded development ceiling. We're still making better missiles, airplanes, drones, boats, etc. The bounding on those is mostly just hard physical rules. Intelligence scales along a new vector. Not only is intelligence directly usable in warfare through things like cyber, where the offense/defense equilibrium seems to favor the attacker, but it also acts as a multiplier for all the other scaling. You get better airplanes, drones, boats, and missiles faster because one of the most bottlenecked inputs to improvement is intelligence. And then there are all the recursive elements: scaling intelligence scales how well we can scale intelligence, and it also lets us efficiently search the design space for other weapons systems to scale. Intelligence scaling is the trick that let humans conquer the planet; increasing our access to it is a whole new game.

You are welcome to reject the inevitability of extinction. You are not welcome to use your rejection of extinction to claim a divine right to getting everything you want the way you want it. If you need things from other people, resources, cooperation, whatever, you have to actually negotiate for them, not declare that they must do what you want or else they're damning all humanity.

I want what everyone without deranged priorities wants: to not have all humans die preventably for no reason. This isn't hyperbole; it's difficult to believe there is any large contingent of people who disagree with this ultimate goal. Intelligent minds can disagree on the assessment of the risk or how best to mitigate it, but the actual goal here is very basic and universal.

I am more worried about current power allocation than I am about hypothetical hostile super intelligent AGI.

If you're worried about the current power allocation, I think you should at least be skeptical of the people building the most powerful tools/weapons that man has ever forged. Weapons they demonstrably cannot reliably control even if their aims were wholesome.

given that the current AI safety alliance does not see a place in the future for me and mine anyway, it doesn't seem like I've got much of a choice.

What on earth are you even talking about man?

Your viewpoint consigns us to extinction by whatever means it might come. I reject the inevitability of extinction.

Everyone so far has died; Pascal's wager might as well apply to them all, and it doesn't really matter to the living. But you gamble with the still-in-play lives of me and my descendants.

edit: and for what? What the fuck do you need $500 billion data centers for exactly? This isn't like, "why do you need that handgun" territory. It's "why do you need 90% of all the plutonium in the world" territory.

Then crawl under a rock and die; let the rest of us who want to live discuss the live matters.

If there were a certain kind of rock that washed up on the shores of the Ganges from time to time and granted whoever first rubbed it a wish that, in the monkey's paw tradition, always caused calamity for the person who used it, and we knew that calamity could include the destruction of the earth and the death of everyone on it, would you or would you not want the affected zone closed off? Or would you just trust your fate to whoever in Kashmir (realistically the billionaires that each just bought a third of the shoreline) next finds a rock?

I've also pointed out that traditional weapons systems that are useful for inter-government rivalries aren't subject to unbounded growth either. You've replied by special pleading for "intelligence."

Like how a slippery slope is not a fallacy if you describe why the slope is slippery (otherwise informing someone of a slippery step they ought to watch out for as they leave a building would be a fallacy), "special pleading" is not a fallacy when one describes why a specific case is different, which I have. I've described why other weapons systems do not have unbounded development; thus it is not special pleading.

(We can set aside for the moment the fact that this is an atypical definition of intelligence - it suggests that a 200 IQ paraplegic is much much less intelligent than a newborn.)

If the 200 IQ paraplegic is able to communicate in any way, I do expect they would be able to have a larger impact on the general environment. If you want a more explicit definition, Yud comes up with cross-domain optimization, although applying it to our current conversation would take a couple of back-and-forths.

Occasionally I hear someone say something along the lines of, "No matter how smart you are, a tiger can still eat you." Sure, if you get stripped naked and thrown into a pit with no chance to prepare and no prior training, you may be in trouble. And by similar token, a human can be killed by a large rock dropping on their head. It doesn't mean a big rock is more powerful than a human.

A large asteroid, falling on Earth, would make an impressive bang. But if we spot the asteroid, we can try to deflect it through any number of methods. With enough lead time, a can of black paint will do as well as a nuclear weapon. And the asteroid itself won't oppose us on our own level - won't try to think of a counterplan. It won't send out interceptors to block the nuclear weapon. It won't try to paint the opposite side of itself with more black paint, to keep its current trajectory. And if we stop that asteroid, the asteroid belt won't send another planet-killer in its place.

We might have to do some work to steer the future out of the unpleasant region it will go to if we do nothing, but the asteroid itself isn't steering the future in any meaningful sense. It's as simple as water flowing downhill, and if we nudge the asteroid off the path, it won't nudge itself back.

The tiger isn't quite like this. If you try to run, it will follow you. If you dodge, it will follow you. If you try to hide, it will spot you. If you climb a tree, it will wait beneath.

But if you come back with an armored tank - or maybe just a hunk of poisoned meat - the tiger is out of luck. You threw something at it that wasn't in the domain it was designed to learn about. The tiger can't do cross-domain optimization, so all you need to do is give it a little cross-domain nudge and it will spin off its course like a painted asteroid.

Steering the future, not energy or mass, not food or bullets, is the raw currency of conflict and cooperation among agents. Kasparov competed against Deep Blue to steer the chessboard into a region where he won - knights and bishops were only his pawns. And if Kasparov had been allowed to use any means to win against Deep Blue, rather than being artificially restricted, it would have been a trivial matter to kick the computer off the table - a rather light optimization pressure by comparison with Deep Blue's examining hundreds of millions of moves per second, or by comparison with Kasparov's pattern-recognition of the board; but it would have crossed domains into a causal chain that Deep Blue couldn't model and couldn't optimize and couldn't resist. One bit of optimization pressure is enough to flip a switch that a narrower opponent can't switch back.

Genuinely, if you have not read the sequences yet, I suggest you do. At least then you'll be arguing with the Yud contingent in their own terms and not your misunderstanding of them.

And there's no reason to think that the government wouldn't also believe that after a certain point, intelligence would either be actively harmful or not worth the extra effort required to get it.

You keep doing this thing where I talk about how more powerful AI is obviously more useful for existential inter-government rivalries, and then you note that it might not be useful past a point for pacifying a population, as if it's a counterargument. It's not; these things can obviously both be true at once.

A position which is asserted commonly as if needing no defense despite the fact that this does not seem to be true of our own species, at least in the evolutionary sense.

It is trivially better for smiting your enemies, and evolutionarily, brains have grown right up to the size where they barely fit through the mother's birth canal, and evolution has seen fit to leave the offspring helpless for a long time so that they can develop even further. There are some lower kinds of intelligence in very low-pressure environments where the extra calories aren't worth the extra compute, but that isn't the environment we find ourselves in. Intelligence is the ability to manipulate the environment to your will; the environment is contested, and more intelligence wins the competition.

They argue that Superintelligence will give the AI an unbridgeable strategic advantage, that intelligence allows unlimited Xanatos Gambits, but this doesn't in fact appear to be true. Planning involves handling variables, and it seems obvious to me that variables scale much, much faster than intelligence's capacity to solve for their combinations. And again, we can see this in the real world now, because we have superintelligent agents at home: governments, corporations, markets, large-scale organizations that exist to amplify human capabilities into the superhuman, to gather, digest and coordinate on masses of data far, far beyond what any human can process. And what we see is that complexity swamps these superintelligences on a regular basis.

What are you talking about? Groups of humans such as the United States are able to blow up a target from so high up in the air that you can't see where the bomb was launched from. A medieval king couldn't even fathom defending against this sort of attack. Of course intelligence, attention, and knowledge scale to create unforeseeable threats. And the medieval king is a generous case; what hope do bonobos have? The only balance we find is that other organizations of humans scale themselves and counterbalance. And before that counterbalance was formed you had scenarios like an island off the European coast conquering half the world because they got industrialization, and the ability to combine intelligences in the form of a joint-stock company, first.

Meanwhile, "AI Safety" necessarily involves amassing absolute power, and as every human knows, I myself am the only human that can be truly trusted with absolute power, though my tribal champions might be barely acceptable in the final extremity. I am flatly unwilling to allow Yudkowksy to rule me, no matter how much he tries to explain that it's for my own good. I do not believe Coherent Extrapolated Volition is a thing that can possibly exist, and I would rather kill and die than allow him to calculate mine for me.

Yudkowsky does not want to rule you; he just wants to keep you, or anyone including himself, from amassing billions of dollars' worth of compute and using it to end humanity.

If you want to make the argument that AI development will hit a wall, then that's certainly a position where intelligent people can disagree; it's just a different argument from the one where the wall is imposed by government interest in not going too far past what is needed for population pacification.

Comparing AI to other weapons systems in a maximally generic way seems silly. Bombs aren't developed unboundedly because increased explosive yield has a relatively low ceiling of usefulness. Obliterating more than a city center just isn't strategically useful, so development continues, but in other areas like platforms and delivery, which are themselves bounded by MAD doctrine. AI doesn't have the same diminishing return on yield scaling. More intelligence is simply better, and will remain simply better, scaled up to the point where alignment becomes existential.

You said earlier

I think most governments have similar incentives for, well, aligning AI to be powerful enough to succeed at its tasks, but not so powerful as to be uncontrollable.

This, to me, contradicts unbounded development to maintain an edge over rival nations.

You prevent an edge from turning into an overwhelming power imbalance by developing your own capabilities. Which means your theory that they might develop AI just up to the point where it can control the population and no further cannot occur in any multi-state system.

This also means that the US will be able to safely stop developing AI well before reaching the area where AI is dangerous, since it can simply decide to retard the progress of hostile AIs using its considerable AI capability advantage in such a way as to leave its own AI capabilities considerably more powerful.

This doesn't make any sense. America has stayed ahead in AI development by just developing it faster; this does not in any way imply the ability to flick an off switch. They'd need to be at the point of being able to overthrow the CCP to do this. It's just another form of one-world government.

Sure it does. AI, as currently constituted, is more vulnerable to MAD than governmental bodies, not less.

This isn't true. MAD works because of second-strike capabilities; there is no AI second strike.

There's a difference between dominance in nuclear weapons and more powerful nuclear weapons.

Dominance in nuclear doesn't scale the way dominance in AI scales. You don't get better at world dominance by developing much stronger or more numerous nuclear weapons than it takes to obliterate your rivals. AI capabilities continuously enable dominance over your rivals. If your AI is smarter, it can defeat your rivals' cybersecurity, build more efficient weapons, and design better contingencies. A sufficient power gap in AI capabilities could make a conflict look as one-sided as Britain with the Maxim gun versus natives armed with wooden spears.

Mutually assured destruction doesn't hold for AI, and recently in nuclear doctrine there have been escalations in defense tech that indeed indicate states would like to have dominance in the area.

AI is useful for intergovernmental conflict. More powerful AI is more useful for intergovernmental conflict.

Is this a one world government? Because the race scenario is super likely for pushing AGI forward.

Did a notably finite number of very smart people produce nuclear bombs, yes or no? Can a notably finite number of very smart people almost certainly produce a super-pandemic, yes or no? And these are the absolutely mundane applications of intelligence.

It seems to me that there is a long tradition of smart people coming together and inventing new weapons and technologies that were not foreseen until shortly before they arrived. The very nature of these advancements not being seen far before they came about makes conjuring up specific predictions impossible. You can always call anything specific science fiction, but nuclear was science fiction at one point. And there is, of course, the more mundane issue that a sufficiently advanced AI merely willing to hand cranks the already-known means of manufacturing superweapons could be existential.

The "AI-Safety" people as you call them have a particular interest in alignment as AI hits super intelligence. They don't need to be wearing their "AI-safety" hats to oppose a surveillance state. You don't need any kind of special MIRI knowledge to oppose surveillance states and people have opposed them for a long time. This is the kind of scope creep criticism that leftists do when the accuse climate focused causes of not focusing enough on police injustice against BIPOCs.

Your complaint appears to be that this group of people, concerned specifically with a singularity event, needs to instead focus their efforts on something you don't even seem to think AI is needed to make happen. And as an aside, all the thinkers I've read that you would consider AI-safety aligned have in fact voiced concerns about things like turning drones over to AI. Their most famous proponent, Big Yud, wants to nuke the AI datacenters.

Calling it non-existential is cope. As a threat it's far more likely, and we have zero countermeasures for it. Focusing on scenarios that we don't even know are possible over ones we know are possible, and that we are visibly heading towards, is exactly my criticism.

You're just describing a subset of unaligned AI where the AI is aligned with a despot rather than totally unaligned. Or, if the general intelligence isn't necessary for this, then it's a bog standard anti-surveillance stance that isn't related to AI-safety. The AI-Safety contingent would absolutely say that this is an unaligned use of AI and would further go on to say that if the AI was sufficiently strong it would be unaligned to its master and turn against their interests too. The goal of AI safety is the impossibly difficult task of either preventing a strong AI future at all or engineering an AI aligned with human interests that would not go along with the whole 1984 plan.

Where do these diminishing returns kick in? Just within the human form factor we support intelligences ranging between your average fool and real geniuses. It seems awfully unlikely that the returns diminish sharply right at the top end of a curve built by natural selection under many constraints. Or maybe you mean the application of intelligence, in which case I'd say that just within our current constraints it has given us the nuclear bomb, it can manufacture pandemics, and it can penetrate and shut down important technical infrastructure. If there are some diminishing returns to its application, how confident are you that the wonders between where we are now and where the returns diminish are lesser than the normal distributional inequality we've dealt with for thousands of years?

How about something closer to the bone, then? Say I'm in the employ of an outreach organization that everyone knows is run by the mob but technically isn't, and my job is to follow around cops and loudly broadcast their position to the general public. The organization also just happens to deploy my services around the time when mob activity is supposed to be going on. In fact, they assign me to a particular street corner and instruct me to just wait until a cop car comes by and start work then. Is this protected speech?

These middle-ground scenarios are so absurdly under-discussed that I can't help but see the entire field of AI-safety as a complete clownshow.

1. Middle-ground plateaus aren't particularly likely, and anyone who thinks about the problem for longer than it takes to write a snarky comment should understand that. In any world where AI is good enough to replace all or most work, it can be put towards the task of improving AI. With an arbitrarily large amount of intelligence deployed to this end, then unless there is something spooky going on in the human brain, we should expect rapid and recursive improvement. There just isn't a stable equilibrium there.

2. Alignment is about existential risk; we don't need a special new branch of philosophy and ethics to discuss labor automation. That is a conversation that has been going on since before Marx, and alignment people cannot hope to add anything useful to it. People can, should, and are starting to have these conversations just fine without them.

Is a posted lookout for a robbery not committing a crime because their alerting the thieves is protected speech?