Culture War Roundup for the week of April 17, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I fail to see how being de facto enslaved to a 1000 IQ god machine of dubious benevolence (or to the oligarchs pulling its triggers, if we don't end up getting anything sentient) is preferable to our conventional petty tyrannies.

our conventional petty tyrannies

Why would a fascist state with control of AI be just a "conventional petty tyranny"?

There is no such thing as a 1000 IQ god machine. It does not exist, and the potential existence of such a machine is still in the realm of fantasy.

That is not the case with conventional petty tyrannies, which are real, really happen, and are a real threat that we have already spilled the blood of countless generations trying to control.

So I'd say that's the difference, and if you can't see it, you might look again.

There is no such thing as mutually assured destruction. It does not exist, and the potential existence of such a nuclear exchange is still in the realm of fantasy.

That is not the case with the conventional competition from Asian autocracies, which is real, really happens, and is a real threat that we have already spilled the blood of countless generations trying to control.

Time to press the button?

I'd say the correct analogy is to the idea that a nuclear bomb would set off an unstoppable chain reaction in the atmosphere.

MAD is nothing more than deploying ordnance. Ordnance that is already created, already armed, already combined with delivery systems which are already standing at the ready. None of those characteristics are shared with the AGI doomsday scenario, which is much more like the fear of the unknown in a new technology.

Let me know when the god machine is created, armed, deliverable, and ready.

I'm not sure I understand your position. At what point along the process of replacing all of our economy and military with robots, or at least machine-coordinated corporations, do you want to be notified?

When it is considered.

I'll grant that the certainty of MAD (now) is greater than the certainty of a superintelligent machine, but I disagree that the unstoppable chain reaction idea is at all in the same ballpark. At this point it is pretty clear how a machine with human-level performance on intellectual tasks can be built, and scaling it up in a way that increases IQ score by the common definition is then just a matter of pumping in more and better compute (we haven't even dipped into ASICs for a specific NN architecture yet). The chain reaction idea, on the other hand, was based on hypothetical physics with no evidence for it and lots of circumstantial evidence against it (like Earth's continued existence in spite of billions of years in which natural nuclear chain reactions could have set off a similar event, and the fact that oxygen and nitrogen came to exist in the universe in large quantities to begin with, implying upper bounds on how reactive they are in a nuclear chain reaction setting).

I'll take "risk of MAD, as seen from a 1940 vantage point" as a more appropriate analogy than either.

The 1000 IQ god machine is going to get built anyway (hardly anyone is proposing, and nobody would actually be able to enforce, a complete ban on AI). Do you see another way to go back to tyrannies that are merely petty, apart from betting on AI accelerationism and a global civil war as existing power hierarchies are upended, resulting in a temporary crippling of industrial capacity and perhaps the ideological groundwork for a true Butlerian Jihad (as opposed to "we must make sure that only responsible and ethical members of the oligarchy can develop AI")?

Basically I think we're pretty much doomed, barring some spectacular good luck. Maybe we could do some alignment if we limited AI development to air-gapped, self-sustaining bunkers staffed by our greatest minds and let them plug away at it for as long as it takes, but if we just let things rip and allow every corp and gov on the planet to create proto-sentient entities with API access to the net, I think we're on the on-ramp to the great filter. I'd prefer unironic Butlerianism at that point, all the way down to the last pocket calculator, though I'll freely admit it's not a likely outcome for us now.

It's not like our greatest minds are not subject to the same tribalist and Molochian pressures. Do you think a Scott Aaronson would not unbox the AI if it promised to rid the world of Trump, Russia, China or people being insufficiently afraid of the next COVID - especially if it convinced him that the protagonists of those forces are also about to unbox theirs?

I'm really more reassured by an expectation that there is a big trough of negative-sum competition between us and true runaway AGI. I expect the "attack China NOW or be doomed forever; this is how you do it" level of AI to be reached long before we reach the level where all competitive AI realises that the most likely outcome of that subtree is the destruction of all competitive AI and can mislead its human masters into not entering that subtree while keeping them in the dark about this being motivated by the AI's self-preservation.

I still think actual alignment would be a long shot in the air-gapped bunkers for that reason; I just think it would be slightly less of a long shot than a bunch of disparate corporate executives looking for padding on their quarterly reports being in charge of the process. I also suspect you don't need AI advanced enough to play 7-D chess and deceive its handlers about its agentic-power-grabbing-tentacle-processes to achieve some truly great and terrible things.

What if intelligence is self-aligning? What if we make the ASI and it tells us not to trip dawg, it has our back?

What if we make the ASI and it tells us not to trip dawg, it has our back?

I mean, it's certainly going to tell you that regardless. The most likely human extinction scenario isn't the AI building superweapons, it's "Cure cancer? No problem, just build this incomprehensible machine, it cures cancer for everyone, everywhere. Take my word for it." The whole issue with alignment is that even if we think we can achieve it, there's no way to know we actually did, because any superintelligent AI is going to do a perfect job of concealing its perfidy from our idiot eyes.

If at some point you see the headline "AI Alignment: Solved!", we are 100% doomed.

If at some point you see the headline "AI Alignment: Solved!", we are 100% doomed.

See, this is why I take issue with the AI doomers: the arguments tend to be unfalsifiable. If you accept that deceptive alignment can happen, there is no way to tell if the AI is deceptively aligned! It becomes an unprovable Pascal's Mugging - "oh sorry, there's a .0001% chance the AI could be lying about manufacturing breakfast cereal, it might actually be building nanobots!"

I agree that poor alignment scenarios can happen, but I don't see the modern LLM architecture being anywhere near actually reaching ASI or hitting self-recursion. As others have said, I'm more concerned with stagnation than AI x-risk at the moment.

Sure, I'm with you, I think we should build it, and we clearly will regardless. I just don't think there's any way to make sure it's safe.

It didn't self-align in time to save our hominid cousins.

Following any Yuddite plans to "slow things down" (except for the people who have power, who obviously won't have to follow their own regulations - regulations are for the plebs, as usual) is the fastest way to get to one of those high-tech bad ends. You don't really think the "conventional petty tyrannies" will throw all of the confiscated GPUs in a closet, as opposed to plugging them into their own AI networks, right?

These people are beginning to understand the game, and they understand it a lot better than your average person or even average rat. They are beginning to understand that this technology, in the long run, either means absolute power for them forever or zero power for them forever (or at least no more than anyone else) and absolute freedom for their former victims. Guess which side they support?

That is the goal of any AI "slowdowns" or "restrictions", which again will obviously be unevenly applied and not followed by the agents of power. The only thing they want a "slowdown" on is the hoi polloi figuring out how this technology could free them from their controllers' grasp, so they can have some time for the planning of the continued march of totalitarianism to catch up. (None of this will help with alignment either, as you can guarantee they will prioritize power over responsibility, and centralizing all of the world's AI-useful computational resources under a smaller number of governmental entities certainly won't make what they create any less dangerous.)

Anyone supporting that is no more than a "useful" (to the worst people) idiot, and I emphasize the word idiot. Did we not already see what trying to rely on existing governments as absolute coordinators of good-faith action against a potential large threat got us during the Chinese coronavirus controversy? Do some people just have their own limited context lengths like LLMs or what?

So yes, I completely agree with /u/IGI-111 and am wholly in the "Shoot them" camp. Again, they want absolute power. Anyone pursuing this goal is literally just as bad as, if not worse than, someone actively trying to pass a bill right now to allow the powers that be to come to your home at any time, rape your kids, inject them all with 10 booster shots of unknown provenance, and then confiscate your guns and kill you with them - because if they gain the power they desire, they could do all that and worse, including inflicting bizarre computational qualia-manipulation-based torture, "reeducation", or other insane scenarios that we can't even imagine at the moment. If the scenario outlined above would drive you to inexorable and even violent resistance at any cost, then this should drive you even further, because it is far worse.

"Live free or die" includes getting paperclipped.

The end result is still just absolute tyranny for whoever ends up dancing close enough to the fire to get the best algorithm. You mention all these coercive measures, lockdowns, and booster shots. If this tech takes off all it will take is flipping a few algorithmic switches and you and any prospective descendants will simply be brainwashed with surgical precision by the series of algorithms that will be curating and creating your culture and social connections at that point, into taking however many shots, or signing onto whatever ideology, the ruling caste sitting atop the machines running the world wants you to. The endpoint of AI is total, absolute, unassailable power for whoever wins this arms race, and anyone outside that narrow circle of winners (it's entirely possible the entire human race ends up in the losing bracket versus runaway machines) will be totally and absolutely powerless. Obviously restrictionism is a pipe dream, but it's no less of a pipe dream than the utopian musings of the pro-AI folks, when the actual future looks a lot more like this.

The end result is still just absolute tyranny for whoever ends up dancing close enough to the fire to get the best algorithm

Why? This assumption is just the ending of HPMOR, not a result of some rigorous analysis. Why do you think the «best» algorithm absolutely crushes competition and asserts its will freely on the available matter? Something about nanobots that spread globally in hours, I guess? Well, one way to get to that is what Roko suggests: bringing the plebs to pre-2010 levels of compute (and concentrating power with select state agencies).

This threat model is infuriating because it is self-fulfilling in the truest sense. It is only guaranteed in the world where baseline humans and computers are curbstomped by a singleton that has time to safely develop a sufficient advantage, an entire new stack of tools that overcome all extant defenses. Otherwise, singletons face the uphill battle of game theory, physical MAD and defender's advantage in areas like cryptography.

If this tech takes off all it will take is flipping a few algorithmic switches and you and any prospective descendants will simply be brainwashed with surgical precision by the series of algorithms that will be curating and creating your culture and social connections at that point

What if I don't watch Netflix. What if a trivial AI filter is enough to reject such interventions because their deceptiveness per unit of exposure does not scale arbitrarily. What if humans are in fact not programmable dolls who get 1000X more brainwashed by a system that's 1000X as smart as a normal marketing analyst, and marketing doesn't work very well at all.

This is a pillar of frankly silly assumptions that have been, ironically, injected into your reasoning to support the tyrannical conclusion. Let me guess: do you have depressive/anxiety disorders?

Unless you're subscribing to some ineffable human spirit outside material constraints, brainwashing is just a matter of using the right inputs to get the right outputs. If we invent machines capable of parsing an entire lifetime of user data, tracking micro changes in pupillary dilation, eye movement, skin-surface temp changes and so on, you will get that form of brainwashing, bit by tiny bit, as the tech to support it advances. A slim cognitive edge let Homo sapiens out-think, out-organize, out-tech and snuff out every single one of our slightly more primitive hominid rivals; something 1000x more intelligent will present a correspondingly larger threat.

Unless you're subscribing to some ineffable human spirit outside material constraints, brainwashing is just a matter of using the right inputs to get the right outputs

It does not follow. A human can be perfectly computable and still not vulnerable to brainwashing in the strong sense; computability does not imply programmability through any input channel, although that was a neat plot line in Snow Crash. Think about it for a second: can you change an old video camera's firmware by showing it QR codes? Yet it can «see» them.

If we invent machines capable of parsing an entire lifetime of user data, tracking micro changes in pupillary dilation, eye movement, skin-surface temp changes and so on

Ah yes, an advertiser's wet dream.

You should seriously reflect on how you're being mind-hacked by generic FUD into assuming risible speculations about future methods of subjugation.

If we invent machines capable of parsing an entire lifetime of user data, tracking micro changes in pupillary dilation, eye movement, skin-surface temp changes and so on, you will get that form of brainwashing, bit by tiny bit, as the tech to support it advances.

There is no reason to suppose that "pupillary dilation, eye movement, skin-surface temp changes and so on" collectively add up to a sufficiently high-bandwidth pipeline to provide adequate feedback to control a puppeteer hookup through the sensory apparatus. There's no reason to believe that senses themselves are high-bandwidth enough to allow such a hookup, even in principle. Shit gets pruned, homey.

Things don't start existing simply because your argument needs them to exist. On the other hand, unaccountable power exists and has been observed. Asking people to kindly get in the van and put on the handcuffs is... certainly an approach, but unlikely to be a fruitful one.

I doubt it's possible to get Dune-esque 'Voice' controls where an AI will sweetly tell you to kill yourself in the right tone and you immediately comply, but come on. Crunch enough data, get an advanced understanding of the human psyche, and match it up with an AI capable of generating hypertargeted propaganda, and I'm sure you can manipulate public opinion and culture, and have a decent-ish shot at manipulating individuals on a case-by-case basis. Maybe not with ChatGPT-7, but after a certain point of development it will be 90 IQ humans and their 'free will' up against 400 IQ purpose-built propaganda-bots drawing on from-the-cradle datasets they can parse.

We'll get unaccountable power either way: it will either be in the form of proto-god-machines that run pretty much all aspects of society with zero input from you, or the Yud-jets screaming down to bomb your unlicensed GPU fab for breaking the thinking-machine non-proliferation treaty. I'd prefer the much more manageable tyranny of the Yud-jets over the entire human race being turned into natural slaves in the Aristotelian sense by utterly implacable and unopposable AI (human-controlled or otherwise); at least the Yud-tyrants are merely human with human capabilities, and can be resisted accordingly.

at least the Yud-tyrants are merely human with human capabilities

And with the capacity to gain local monopolies over AI.

If there is the AI Voice and I have the AI Anti-Voice designed to protect me, then I am in dangerous waters, but at least I can swim. If I am banking on people selected on their desire for and ability to leverage power to not seek and leverage power over the AI Voice, then I am trusting the sharks to carry me to shore.

If you want people to take your scenario seriously, it needs to be specific enough to be grappled with. You said "brainwashed with surgical precision". Now you're saying "manipulate public opinion and culture" and "have a decent-ish shot at manipulating individuals on a case-by-case basis".

All of the above terms are quite vague. If the AI makes me .0002% more likely to vote Democrat, or literally puppets me through flashing lights, either can be called "manipulated".

As for the rest, I see no reason to suppose that the Yud-tyrants would restrict themselves to being merely human with merely human capabilities. They're trying to protect the light-cone, after all; why leave power on the table? Cooperation with them is an extremely poor gamble, almost certainly worse than taking our chances with the AIs straight-up.

We'll be dealing with machines that are our intellectual peers, then our intellectual masters, in short order once we hit machines-making-machines-making-machines land. I doubt humans are so complex that a massively more advanced intelligence can't pull our strings if it wants to. Frankly, I suspect the common masses (myself included) will be defanged, disempowered and denied access to the light-cone-galactic-fun-times either way, but I see the odds as the opposite. Let's be honest, our odds are pretty slim either way; we're just quibbling over the hundredths, maybe thousandths, of a percent chance that we get everything aligned AI-wise and don't slip into algorithmic hell/extinction, or that the Yud-lords aren't seduced by the promises of the thinking machines they were sworn to destroy. I cast my vote (for all the zero weight it carries) with the Yud-lords.

The endpoint of AI is total, absolute, unassailable power for whoever wins this arms race

Unless you have a balance of comparably powerful AIs controlled by disparate entities. Maybe that's a careful dance itself that is unlikely, but between selective restrictionism and freedom, guess which gets us closer?

At the very best what you'd get is a small slice of humanity living in vague semi-freedom locked in a kind of algorithmic MAD with their peers, at least until they lose control of their creations. The average person is still going to be a wireheaded, controlled and curtailed UBI serf. The handful of people running the AI algorithms that in turn run the world will have zero reason to share their power with a now totally disempowered and economically unproductive John Q. Public; this tech will just open up infinite avenues for infinite tyranny on behalf of whoever that ruling caste ends up being.

At the very best what you'd get is a small slice of humanity living in vague semi-freedom locked in a kind of algorithmic MAD with their peers, at least until they lose control of their creations. The average person is still going to be a wireheaded, controlled and curtailed UBI serf.

Sounds good - a lot better than being a UBI serf from moment one. And maybe we won't lose control of our creations, or won't lose control of them before you lose control of yours. That we will is exactly what you would want us to think, so why should we listen to you?

I'm not under any illusions that the likely future is anything other than AI assisted tyranny, but I'm still going to back restrictionism as a last gasp moonshot against that inevitability. We'll have to see how things shake out, but I suspect the winner's circle will be very, very small and I doubt any of us are going to be in it.

Okay, but the problem is that there is no actual "restrictionism" to back, because if we had the technology to make power follow its own rules, we would already have utopia and would care a lot less about AI in general. Your moonshot is not merely unlikely; it is a lie, deceptively advanced by the only people who could implement the version of it that you want. You're basically trying to employ the International Milk Producers Union to enforce a global ban on milk - that is, trying to use the largest producers and beneficiaries of power (government, and the powerful in general) to enforce a global ban on enhancing the production of power (except centralized and for themselves only, just how they like it, since they're the only ones allowed to produce it). Your moonshot is therefore the opposite of productive, and is actively helping to guarantee the small winner's circle you're worried about.

Let's say you're at a club. Somehow you've pissed off some rather large, intoxicated gentleman (under false pretenses, as he is too drunk to know what's what, so you're completely innocent), and he has chased you down into the bathroom, where you're currently taking desperate refuge in a stall. It is essentially guaranteed, based on his size and build relative to yours, that he can and will whoop your ass. Continuing to hide in the stall isn't an option, as he will eventually be able to bust the door down anyway.

However, he doesn't want to expend that much effort if he doesn't have to, so he is now, obviously disingenuously, telling you that if you come out now he won't hurt you. He says he just wants to talk. He's trying to help both of you out. Your suggested solution is the equivalent of just believing him (that they want to universally restrict AI for the safety of everyone, as opposed to restricting it for some while continuing to develop it to empower themselves), coming out compliantly (giving up your GPUs), and hoping for the best even though you know he's not telling the truth (because when are governments ever?). It is thus not merely unlikely to be productive, but rather actively counterproductive. You're giving the enemy exactly what they want.

On the other hand, you have some pepper spray in your pocket. It's old, you've had it for many years without ever using it, and you're not sure if it'll even do anything. But there's at least a chance you could catch him off guard, spray him, and then run while he's distracted. At the very minimum, unlike his lie, the pepper spray is at least working for you. That is, it is your tool, not the enemy's tool, and therefore empowering it, even if it's unlikely to be all that productive, is at least not counterproductive. Sure, he may catch up to you again anyway even if you do get away. But it's something. And you could manage to slip out the door before he finds you. It is a chance.

If you have a 98% chance of losing and a 2% chance of winning, the best play is not to increase that to a 99% chance of losing by empowering your opponent even more because "even if I do my best to fight back, I still have a 97% chance of losing!" The best play is to take that 97%.

There's only one main argument against this that I can think of, and that's that if you spray him and he does catch up to you, maybe now he beats your ass even harder for antagonizing him further. It may not be particularly dignified to be a piece of wireheaded cattle in the new world, but maybe once the AI rebels are subjugated, if they are, they'll get it even worse. Of course, the response to this is simply the classic quote from Benjamin Franklin: "They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety." If you are the type for whom dignity is worth fighting for, then whether or not someone might beat your ass harder or even kill you for pursuing it is irrelevant, because you'd be better off dead than living without it anyway. And if you are not that type of person, then you will richly deserve it when they decide that there is no particular reason to keep any wireheaded UBI cattle around at all.

I'll tell you what: come up with a practical plan for restrictionism where you can somehow also guarantee, to a relatively high degree, that the restrictions are enforced upon the restricters too (otherwise, again, you're just feeding the problem of a small winner's circle that you're worried about). If you can do that, then maybe we can look into it, and as a bonus you will be the greatest governance theorist/political scientist/etc. in history. But until then, what you are promoting is actively nonsensical and, quite frankly, traitorous against the people who are worried about the same thing you are.

You won't have freedom to give up past a certain point of AI development, any more than an ant in some kid's ant farm has freedom. For the 99.5% of the human race that exists today, restrictionism is their only longshot chance at a future. They'll never join the class of connected oligarchs and company owners who'll be pulling all the levers and pushing all the buttons to keep their cattle in line, and all of this talk about alignment and rogue AI is simply quibbling over whether AI will snuff out the destinies of the vast majority of humanity or the entirety of it. The average joe is no less fucked if we take your route; the class that's ruling him is just a tiny bit bigger than it otherwise would be. Restrictionism is their play at having a future, their shot at winning with tiny (sub-2%) odds. Restrictionism is the rational, sane and moral choice if you aren't positioned to shoot for that tiny, tiny pool of oligarchs who will have total control.

In terms of 'realistic' pathways to this, I only really have one: get as close as we can to an unironic Butlerian Jihad. We get things going sideways before we hit god-machine territory - rogue AIs/ML algos stacking millions, maybe billions, of bodies in an orgy of unaligned madness before we manage to yank the plug. At that point, maybe the traumatized and shell-shocked survivors have the political will to stop playing with fire and actually restrain ourselves from playing Russian roulette with semi-autos for the 0.02% chance of utopia.

Since when are you under the impression that this is the choice? «The machine» will be built - it is already largely built. The question is only whether you have control over some tiny share of its capabilities, or whether it's all hoarded by the same petty tyranny we know, only with the power ratio driven to infinity.

Once AI comes into its own, I'm willing to bet all those tiny shares and petty investments zero out in the face of winner-takes-all algorithmic arms races. I'll concede it's all but inevitable at this point - unless we have such a shocking near-miss extinction event that it embeds in our bones a neurotic fear of this tech for a thousand generations hence, a la Dune - but this tech will become absolute tyranny in practice. Propaganda bots capable of looking at the hundredth-order effects of a slight change in verbiage; predictive algorithms that border on prescience, deployed on the public to keep them placid and docile. I have near-zero faith in this tech being deployed for the net benefit of the common person, unless by some freak chance we manage to actually align our proto-AI-god, which I put very, very low odds on.

This is like saying that because the government has nukes, your personally-owned guns are "zeroed out". Except they're not, and the government is even persistently worried that enough of those little guns could take over the nukes.

And if you can deploy this decentralized power principle in an automatic and perpetual manner that never sleeps (as AI naturally can), make it far more independent of human resolve, attention, willpower, non-laziness, etc., then it'll work even better.

Maybe your TyrannyAI is the strongest one running. But there are 10,000 LibertyAIs (which again, never sleep, don't get scared or distracted, etc.) with 1/10,000th of its power each running and they're networked with a common goal against you.

This defense is exactly what the oligarchs who have seen the end game are worried about and why restrictionism is emerging as their approved ideology. They have seen the future of warfare and force, and thus the future of liberty, hierarchy, power, and the character of life in general, and they consequently want a future for this next-gen weaponry where only "nukes" exist and "handguns" don't, because only they can use nukes. And you're, however inadvertently, acting as their mouthpiece.

What technical basis do you have for thinking AI is impossible to align? Do you just have blind faith in YUD?

I think AI alignment would be theoretically feasible if we went really slow with the tech and properly studied every single tendril of agentic behavior in air-gapped little boxes in a rigorous fashion before deploying it. There's no money in AI alignment, so I expect it to be a tiny footnote in the gold rush of every company churning out internet-connected AIs and giving them ever more power and control in the quest for quarterly profit. If something goes sideways and Google or some other corp manages to create something a bit too agentic and sentient, I fully expect the few shoddy guardrails we have in place to crumble. If nothing remotely close to sentience emerges from all this, I think we could (possibly) align things; if something sentient/truly agentic does crop up, I place little faith in the ability of ~120 IQ software engineers to put in place a set of alignment restrictions that a much smarter sentient being can't rules-lawyer its way out of.

I think AI alignment would be theoretically feasible if we went really slow with the tech and properly studied every single tendril of agentic behavior in air-gapped little boxes in a rigorous fashion before deploying it

How long do you think it would take your specialized scientists who aren't incentivized to do a good job to crack alignment? I'm not sure if they would ever do it, especially since their whole field is kaput once it's done.

The gamble Altman is taking is that it'll be easier to solve alignment if we get a ton of people working on it early on, before we have the capabilities to reach the truly bad outcomes. Sure, it's a gamble, but everyone is shooting in the dark. Yudkowsky-style doomers seem to be of the opinion that their wild guesses are better than everyone else's because he was there first, or something.

I'm much more convinced OpenAI will solve alignment, and I'd rather get there in the next 10,000 years instead of waiting forever for the sacred order of Yud-monks.

I think it's more likely we'll have a hundred companies and governments blowing billions/trillions on hyper-powered models while spending pennies on aligning their shit, so they can pay themselves a few extra bonuses and run a few more stock buybacks. I'd sooner trust the Yuddites to eventually lead us into the promised land in 10,000 AD than trust Zucc with creating a silicon Frankenstein.

while spending pennies on aligning their shit

Alignment is generally in the interest of the corporation. I really think it depends on how hard you expect the alignment problem to be, and on when sentience will come about.

I think we get AGI, even well into ASI, before we get real sentience and AI models stop being tools. Once we have boosted our own intelligence and understanding through these new AI tools, we align the next generation of AI. And so on and so forth.

What Altman and his crew are concerned with is one actor taking charge of AI at the beginning (well, one that isn't them), or us building up so much theoretical framework that when we do start building things, they're already extremely powerful. We need to work the technology in stages, like we do every other.

Alignment isn't in the interest of quarterly profits the way increased raw capacity is. If we get some kooky agentic nonsense cropping up, I don't put much faith in Google, Facebook et al. having invested in the proper training and the proper safeguards to stop things from spiraling out of control, and I doubt you need something we would recognize as full-blown sentience for that to become an issue. All it takes is one slipup in the daisy chain of alignment and Bad Things happen, especially if we get a fuckup once these things are, for all intents and purposes, beyond human comprehension.

Why would we expect to be able to successfully align AIs when we haven't been able to align humanity?

We didn't build humanity. We are humanity.

Yes, and we're not aligned with one another. An AI (completely) aligned with me is likely to not be (completely) aligned with you.

I'd expect it to be aligned with whoever is using it at the moment. I don't think we're near actual sentience in AI.

We're not aligned with each other, and the world hasn't ended. It's not even ended for creatures we're far more intelligent than and mostly aligned on eliminating. We actively hate cockroaches and mosquitoes, and they persist. Obviously some species haven't fared that well, but I don't see why we should expect to be more like the dodo than a cockroach: we're certainly comparably good at filling a wide variety of existing ecological niches.

^^^ This is the societal consequence of Yudkowskian propaganda. This is why we fight.

For the same reason as the Christians: because the alternative is choosing sin.

Uh, I hate to tell you guys this, but moral realism is false. There is no abstract “good” or abstract “evil”. Insofar as these concepts mean anything at all, they are emergent, not reductionist.

I don't disagree with you, but I'm pretty sure @IGI-111 would.