
ShortCard

Butlerian Jihadi

0 followers   follows 0 users  
joined 2022 September 05 18:04:12 UTC

				

User ID: 662


I fail to see how being de facto enslaved to a 1000 IQ god machine of dubious benevolence (or to the oligarchs pulling its triggers, if we don't end up getting anything sentient) is preferable to our conventional petty tyrannies.

The end result is still just absolute tyranny for whoever ends up dancing close enough to the fire to get the best algorithm. You mention all these coercive measures, lockdowns, and booster shots. If this tech takes off, all it will take is flipping a few algorithmic switches, and you and any prospective descendants will simply be brainwashed with surgical precision, by the algorithms that will by then be curating and creating your culture and social connections, into taking as many shots or signing onto whatever ideology the ruling caste sitting atop the machines wants you to believe. The endpoint of AI is total, absolute, unassailable power for whoever wins this arms race, and anyone outside that narrow circle of winners (it's entirely possible the entire human race ends up in the losing bracket versus runaway machines) will be totally and absolutely powerless. Obviously restrictionism is a pipe dream, but it's no less of a pipe dream than the utopian musings of pro-AI folks, when the actual future looks a lot more like this.

At the very best, what you'd get is a small slice of humanity living in vague semi-freedom, locked in a kind of algorithmic MAD with their peers, at least until they lose control of their creations. The average person is still going to be a wireheaded, controlled and curtailed UBI serf. The handful of people running the AI algorithms that in turn run the world will have zero reason to share their power with a now totally disempowered and economically unproductive John Q. Public; this tech will just open up infinite avenues for infinite tyranny on behalf of whoever that ruling caste ends up being.

If hostile AGI becomes real, you're more likely to see hunter-killer nanobot clouds dispersed in the atmosphere, or engineered climatic shifts designed to wipe out the biosphere, than something as inefficient as a ripped Arnold gunning people down, or the war machines you see in the movie, at least by my reckoning.

I'm not under any illusions that the likely future is anything other than AI-assisted tyranny, but I'm still going to back restrictionism as a last-gasp moonshot against that inevitability. We'll have to see how things shake out, but I suspect the winner's circle will be very, very small, and I doubt any of us are going to be in it.

You won't have freedom to give up past a certain point of AI development, any more than an ant in some kid's ant farm has freedom. For the 99.5% of the human race that exists today, restrictionism is their only longshot chance at a future. They'll never make it into the class of connected oligarchs and company owners who'll be pulling all the levers and pushing all the buttons to keep their cattle in line, and all this talk about alignment and rogue AI is simply quibbling over whether AI will snuff out the destinies of the vast majority of humanity or the entirety of it. The average joe is no less fucked if we take your route; the class that's ruling him is just a tiny bit bigger than it otherwise would be. Restrictionism is their play at having a future, their shot at winning with tiny, sub-2% odds. Restrictionism is the rational, sane and moral choice if you aren't positioned to shoot for that tiny, tiny pool of oligarchs who will have total control.

In terms of 'realistic' pathways to this, I only really have one: get as close as we can to unironic Butlerian Jihad. Things go sideways before we hit god-machine territory, with rogue AIs/ML algos stacking millions, maybe billions of bodies in an orgy of unaligned madness before we manage to yank the plug. At that point, maybe the traumatized and shell-shocked survivors have the political will to stop playing with fire and actually restrain ourselves from playing Russian roulette with semi-autos for the 0.02% chance of utopia.

I'm against cognitive enhancement because I fail to see a road where the result of human enhancement isn't a speciation event, where the top 0.01% of humanity acquires functionally unlimited power relative to the common person and ends up on a footing closer to man-and-chimp with the rest of us barely-auged or semi-auged proles. At that point we'll have about as much power to resist as the monkeys do if the gene-modded ubermensch aristocrats decide to cull the rest of us useless eaters. Barring about a billion safeguards to stop this (probably inevitable) future, I'm much more in favour of banning it all outright. Unless you're at the apex of the elite and have good reason to think your great-grandkids will be similarly positioned once this tech really starts taking off, being in favour of human augmentation is like a neanderthal being in favour of early humans making landfall in his neighbourhood.

If we truly had a borderline extinction event, where we were up to the knife's edge of getting snuffed out as a species, there would be the will to enforce a ban, up to and including among the elite. That will may not last forever, but for as long as the aftershocks of such an event were still reverberating, you could maintain a lock on any further research. That's what I believe the honest 2% moonshot victory bet actually looks like. The other options are just various forms of AI-assisted death, with most of the options being variations in flavour, or in whether or not humans are still even in the control loop when we get snuffed.

With birth rates continuing their decline pretty much uniformly across the globe, it shouldn't take much more than a few cultural nudges and AI-led psyops to accelerate an already extant trend.

I think supporting restrictionism makes sense inasmuch as it raises the idea in the public's consciousness, so that once the big bad event occurs there can be a push to implement it. Realistically I expect restrictionism to go pretty much nowhere in the absence of such an event anyway; agitating for locking things down is just laying the groundwork for that 0.02% moonshot victory bet in the event that we do get a near-miss with AI.

Once AI comes into its own, I'm willing to bet all those tiny shares and petty investments zero out in the face of winner-takes-all algorithmic arms races. I'll concede it's all but inevitable at this point, unless we have such a shocking near-miss extinction event that it embeds in our bones a neurotic fear of this tech for a thousand generations hence, a la Dune, but this tech will become absolute tyranny in practice. Propaganda bots capable of looking at the hundredth-order effects of a slight change in verbiage, predictive algorithms that border on prescience being deployed on the public to keep them placid and docile. I have near-zero faith in this tech being deployed for the net benefit of the common person, unless by some freak chance we manage to actually align our proto-AI-god, which I put very, very low odds on.

We need to argue to ban X now so the people arguing to ban X tomorrow, after marketing_bot.exe's failed uprising, have the scaffolding and intellectual infrastructure to see it through. Restrictionism is pretty much dead until that point anyway; just look at OpenAI's API access. I don't think, in our economic environment and with the specter of Cold War 2.0 on the way, that there will be any serious headway on the ban front until we have our near-miss. Getting the ideas out there so the people of the future have the conceptual toolbox to argue for and pursue a total ban is a net positive in my books.

You're making the wrong argument for the wrong time period.

Pretty much. I don't think there's really much to be done until things go sideways. If there had been enough sit-down discussions before the genie was out of the bottle, we could possibly have edged towards some kind of framework, but at this point I don't disagree. Things are already in motion. Hopefully it's survivable. Anyway, the core thesis was that an outright ban is an (if not the) sensible option for the teeming masses, who will be screwed under either a let-it-rip approach or an AI-run-by-high-level-government-actors approach; a ban is still their best shot. Basically, if we get the chance and the will to (exceedingly unlikely), we should do a little jihading (also exceedingly unlikely).

There may be an equilibrium point, but it could easily be at a total population level of a billion or less, given that rates are continuing to drop with no floor in sight. Wireheading and other tech-induced sterilizers can outrun biology for a long time, possibly forever if the tech gets good enough.
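To make the "a billion or less" intuition concrete: sub-replacement fertility compounds generation over generation, so even a modest gap below replacement erodes a population of billions within a few centuries. Here's a toy back-of-the-envelope sketch; the starting population, fertility rate, and replacement level are illustrative assumptions, not projections.

```python
# Toy model: generations until population falls below a given floor,
# assuming a constant total fertility rate (TFR) below the ~2.1
# replacement level. All numbers here are illustrative assumptions.

REPLACEMENT_TFR = 2.1  # rough replacement-level fertility

def generations_to_reach(pop_billion: float, tfr: float, floor_billion: float) -> int:
    """Count generations until the population drops below floor_billion."""
    gens = 0
    while pop_billion > floor_billion:
        # Each generation is smaller by the ratio of actual to replacement fertility.
        pop_billion *= tfr / REPLACEMENT_TFR
        gens += 1
    return gens

# Starting from ~8 billion at a hypothetical sustained TFR of 1.6:
print(generations_to_reach(8.0, 1.6, 1.0))  # -> 8 generations, roughly two centuries
```

The point of the sketch is just that no exotic mechanism is needed: a sustained ~25% shortfall below replacement alone gets from 8 billion to under 1 billion in about eight generations, before any wireheading or psyops accelerant.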

The difference between fake restrictionism and letting it rip will be zero for the average person, IMO. The ruling caste will be a bit bigger after a small subset of people make themselves indispensable. That's about it. The average joe is still getting declawed and wireheaded either way. At least if (or for as long as) a full ban is in effect, he won't be completely useless and toothless.

Basically, I think we're pretty much doomed, barring some spectacular good luck. Maybe we could do some alignment if we limited AI development to air-gapped, self-sustaining bunkers staffed by our greatest minds and let them plug away at it for as long as it takes, but if we just let things rip and allow every corp and gov on the planet to create proto-sentient entities with API access to the net, I think we're on the on-ramp to the great filter. I'd prefer unironic Butlerianism at that point, all the way down to the last pocket calculator, though I'll freely admit it's not a likely outcome for us now.

It does if I or any of my prospective descendants aren't part of that billion or so.

I think any legitimately hostile AGI could hit those targets with relative ease if it manages to breach whatever containment server it's sitting in. An AGI-powered computer virus eating up a modest chunk of all internet-connected processing power and digesting every relevant bit of weaponizable information means exponential growth of capabilities. At that point, if it's capable of physically manipulating objects in meatspace, I think it could do just about whatever it wants with lightning speed.

Unless you're subscribing to some ineffable human spirit outside material constraints, brainwashing is just a matter of using the right inputs to get the right outputs. If we invent machines capable of parsing an entire lifetime of user data, tracking micro-changes in pupillary dilation, eye movement, skin-surface temperature and so on, you will get that form of brainwashing, bit by tiny bit, as the tech to support it advances. A slim cognitive edge let Homo sapiens out-think, out-organize, out-tech and snuff out every single one of our slightly more primitive hominid rivals; something 1000x more intelligent will present a correspondingly larger threat.

Average, truly average people will never be competitors in this fight. Joe Public doesn't have the means or the organizational capacity (or often even the willpower) to compete. They're just going to drop like flies in the billions once this tech takes off: wireheaded, culturally shifted into having fewer and fewer children. Communities, peoples, nations just fading off into nothingness, and it doesn't even have to be with some willful malicious intent. Run something like this through an AI 1000x more advanced and you could watch demographic sparks fly. Those people who build their own AIs and fight their own little power battles may very well cut their deals and be inducted into the ruling caste, just like I said. The average person is toast. Their only bet, their 2% moonshot, is the ban. They don't have any other option.

I still think actual alignment would be a long shot even in the air-gapped bunkers for that reason; I just think it would be slightly less of a long shot than a bunch of disparate corporate executives looking for padding on their quarterly reports being in charge of the process. I also suspect you don't need AI advanced enough to play 7-D chess and deceive its handlers about its agentic-power-grabbing-tentacle-processes to achieve some truly great and terrible things.

It is; my concern is that if these weapons end up getting deployed as population-control flypaper, I (or my descendants) would end up swept into the chaff pile, which, depending on how low the minimum human population gets, could be hard to beat.

Considering Ukraine and Russia are both massive food exporters I think the war could very easily explain surging food prices. A sharp drop in supply could send ripples through the global market.

I'd be willing to bet a suitably advanced AGI could do a decent job just cannibalizing existing microchips, especially if it manages to leak out onto the internet writ large. It could probably distribute its computations across millions if not billions of infected devices, and I'm not sure we could even stop AGI-level computer viruses short of destroying everything connected to the internet, period, orchestrated globally and with perfect precision, before it acquires enough computational power that it becomes functionally unstoppable. This was actually part of the third Terminator movie too.

I think AI alignment would be theoretically feasible if we went really slow with the tech and properly studied every single tendril of agentic behavior in air-gapped little boxes, in a rigorous fashion, before deploying it. But there's no money in AI alignment, so I expect it to be a tiny footnote in the gold rush that will be every company churning out internet-connected AIs and giving them ever more power and control in the quest for quarterly profit. If something goes sideways and Google or some other corp manages to create something a bit too agentic and sentient, I fully expect the few shoddy guardrails we have in place to crumble. If nothing remotely close to sentience emerges from all this, I think we could (possibly) align things; if something sentient/truly agentic does crop up, I place little faith in the ability of ~120 IQ software engineers to put in place a set of alignment restrictions that a much smarter sentient being can't rules-lawyer its way out of.