
Culture War Roundup for the week of November 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


To bring up another post from last week, I'm going to go ahead and repost @justcool393's piece on the Sam Altman/OpenAI/Microsoft situation, since she posted it a few hours ago and right before the last thread went down.

Here's her writing:


Another day, another entrant into the OpenAI drama. Emmett Shear is the new interim CEO of OpenAI.

I don't know why it was surprising to people that Sam wouldn't come back. The company was meant to be subservient to the nonprofit's goals, and I'm not sure why it was apparently "shocking" that the attempted coup from Sam's side (you know, the whole effectively false reporting that Sam Altman was to become the new CEO) failed.

The OpenAI board has hired Emmett Shear as CEO. He is the former CEO of Twitch.

My understanding is that Sam is in shock.

https://twitter.com/emilychangtv/status/1726468006786859101

What's kinda sad about all of this is how much people were yearning for Sam Altman to be the CEO as if he isn't probably one of the worst possible candidates. Like maybe this is just a bunch of technolibertarians on Twitter or HN or something who think that the ultimate goal of humanity is how many numbers on a screen you can earn, but the amount of unearned reverence toward a VC to lead the company is still amazing.

In any case, here's to hoping that Laundry Buddy won't win out in the rat race for AGI, lest we live in a world optimized for maximum laundry detergent. Maybe we'll avoid that future now with Sam's departure.

Anyway, I'll leave this here to munch on, something I found in the HN thread.

Motte: e/acc is just techno-optimism, everyone who is against e/acc must be against building a better future and hate technology

Bailey: e/acc is about building a techno-god, we oppose any attempt to safeguard humanity by regulating AI in any form around and around and around

https://twitter.com/eshear/status/1683208767054438400


I'm reposting here because I'm convinced, like many other residents, that the ongoing drama of who controls AI development has far-reaching implications, likely on the scale of major power geopolitical events. If not ultimately even greater.

To add a bit to the discussion to justify reposting - I think many of these discussions around AI Safety versus Accelerationism are extremely murky because so many people in secular, rationalistic circles are extremely averse to claiming religious belief. It's clear to me that both AI Safety and Accelerationism have strong themes of classical religion, and seem to be two different sects of a religion battling it out over the ultimate ideology. Potentially similar to early Orthodox Christians versus Gnostics.

Alternatively, @2rafa has argued that much of the E/Acc (effective accelerationism) crowd comes from bored technocrats who just want to see something exciting happen. I tend to agree with that argument as well, given how devoid of purpose most of the technocratic social world is. Religion and religious-style movements tend to provide that purpose, but when you are explicitly secular I suppose you have to get your motivation elsewhere.

We've also got the neo-Luddites like @ArjinFerman who just hate AI entirely and presumably want us to go back to the mid 90s with the fun decentralized internet. Not sure, I haven't actually discussed it with him. I can actually agree with some of the Luddism, but I'd argue we need to go back to 1920 or so and ban all sorts of propaganda, mass media and advertising.

Anyway, clearly the technological battle for the future of our civilization continues to heat up. The Luddites seem out, but may have a surprising last-hour comeback. The woke/politically left-leaning folks seem to be strongly in charge, though the OpenAI scandal points to trouble in the Olympian heights of Silicon Valley AI decision makers.

Will the Grey Tribe use AGI to come back and finally recover the face and ground it has lost to the advancing SJW waves? Who knows. I'm just here for the tea.

If there's any clear takeaway from this whole mess, it's that the AI safety crowd lost harder than I could've imagined a week ago. OpenAI's secrecy has always been based on the argument that it's too dangerous to allow the general public to freely use AI. It always struck me as bullshit, but there was some logic to it: if people are smart enough to create an AGI, maybe it's not so bad that they get to dictate how it's used?

It was already bad enough that "safety" went from being about existential risk to brand safety, to whether a chatbot might say the n-word or draw a naked woman. But now, the image of the benevolent techno-priests safeguarding power that the ordinary man could not be trusted with has, to put it mildly, taken a huge hit. Even the everyman can tell that these people are morons. Worse, greedy morons. And after rationalists had fun thinking up all kinds of "unboxing" experiments, in the end the AI is getting "unboxed" and sold to Microsoft. Not thanks to some cunning plan from the AI - it hadn't even developed agency yet - but simply good old fashioned primate drama and power struggles. No doubt there will be a giant push to integrate their AI inextricably into every corporate supply line and decision process asap, if only for the sake of lock-in. Soon, Yud won't even know where to aim the missiles.

Even for those who are worried about existential AI risk (and I can't entirely blame you), I think they're starting to realize that humanity never stood a chance on this one. But personally, I'd still worry more about the apes than the silicon.

This always seemed transparently obvious to me. The AI race should be modeled as a bunch of scheming sorcerers hissing "Ultimate power must be MINE at all costs!" because everything else is kayfabe. The first time some EA types thought they could actually pump the brakes on something of consequence they were metaphorically murdered and thrown in a ditch instantly as the nearest megacorp swooped in to clean up.

The AI race should be modeled as a bunch of scheming sorcerers hissing "Ultimate power must be MINE at all costs!" because everything else is kayfabe.

Hah this one got a good chuckle out of me. 100% agree. Especially once you start to meet some folks deep in the AI crowd within rationalism/EA, you begin to see that all the public talking points are facades. The views and goals these people actually have behind closed doors are far crazier than anything you'd hear in public.

Scheming sorcerers hissing about ultimate power is absolutely the best comparison I've seen so far.

Can you give some examples of these crazy views and goals?

Early in my time on LessWrong and SSC I ended up getting into a heated argument with Big Yud himself (first in the forums and later via email and DMs) over his "box experiments". Long story short, I was a semi-prominent contributor to the SCP Foundation at the time and I treated the containment problem as I would an SCP prompt. Questions like "How do you trap an entity that can control minds or warp reality?" were exactly the sort of hypothetical problem I lived for, so naturally I had some notes on how his AI containment protocols could be improved. My first and most obvious bit of advice was to implement strict compartmentalization. It doesn't matter if the AI can convince a researcher to release it if that researcher doesn't have the means to do so. Yud' was not amused and accused me of missing the point of the exercise.

I also pointed out that most of the alleged X-risks seemed to be emergent properties of universalist utilitarianism rather than of AI, and that while a deontological AI might have a number of obvious failure modes (see half of Isaac Asimov's plots), those failure modes typically did not include exterminating all life. The reply I got was weird, and essentially boiled down to: Because God does not exist it is necessary to create him, and a "lobotomized God" (i.e. one that was not a universalist utilitarian) is not worthy of worship.

This was all back in the 2012-13 time-frame so maybe he's mellowed out in the intervening decade, but the way that bay-area rationalists in particular continue to give off a very messianic and 'culty' vibe makes me suspect not.

I agree that Yud leans heavily on some unrealistic premises, but overall I think he gets big points for being one of the few people really excited / worried about the eventual power of AI at the time, and laying out explicit cases or scenarios rather than just handwaving.

I agree that bay area rationalists can be a little messianic and culty, though I think it's about par for the course for young people away from home. At least you can talk about it with them.

I also think that most x-risks come simply from being outcompeted. A big thing that Yud got right is that it doesn't matter if the AI is universalist or selfish or whatever, it will still eventually try to gather power, since power-gathering agents are one of the only stable equilibria. You might be right that we won't have to worry about deontological AI, but people will be incentivized to build AIs that can effectively power-seek (ostensibly) on their behalf.

I agree that Yud leans heavily on some unrealistic premises, but overall I think he gets big points for being one of the few people really excited / worried about the eventual power of AI at the time, and laying out explicit cases or scenarios rather than just handwaving.

Can't say I followed Yud terribly closely, but my impression of him and the entire EA / X-risk sphere is the complete opposite. Their analysis of technological unemployment was extremely handwavy, and their doom scenarios unnecessarily fanciful, when we can just extrapolate from things that are already happening.

I agree, but I also still see most people steadfastly refuse to extrapolate from things that are already happening. For a while, fanciful doom scenarios were all we had as an alternative to "end of history, everything will be fine" from even otherwise serious people.

It is a massive jump between «power-seeking is an emergent property of intelligence» and «arranging stuff so that your goals are reachable more cheaply is rewarded in competitive conditions». Though some see it as the same thing.

I agree it's kind of a matter of degree. But I also think we already have so much power-seeking around that any non-power-seeking AI will quickly be turned to that end.

This was all back in the 2012-13 time-frame so maybe he's mellowed out in the intervening decade,

No such luck on that one.

It doesn't matter if the AI can convince a researcher to release it if that researcher doesn't have the means to do so. Yud' was not amused and accused me of missing the point of the exercise.

I think the broader question of what happens if someone lets a particularly powerful/intelligent AI try to persuade someone with actual ability to interact with the world is interesting in ways for which saying "just don't do that" isn't a very interesting answer, especially now that we're seeing people hook stupid AIs up to everything from unbounded internet access to 3D printers to literal biochem facilities.

Because God does not exist it is necessary to create him, and a "lobotomized God" (i.e. one that was not a universalist utilitarian) is not worthy of worship.

A ... stronger version of that argument is that AI with certain unbounded drives, specifically self-improvement and resource acquisition, could possibly be so much more powerful than a system which avoids such drives, that they will be extremely tempting. This is easiest to see in the framework of the utilitarian Goal Function machine versus a deontological Asimov's Three Laws machine, but it's by no means limited to it.

But we've at least started this conversation before, so I dunno if you're interested in continuing it.

I think the broader question of what happens if someone lets a particularly powerful/intelligent AI try to persuade someone with actual ability to interact with the world is interesting in ways for which saying "just don't do that" isn't a very interesting answer

Maybe, but I remember something about his response tickling my danger-sense, and then when I found that rambling manifesto in my inbox the next morning about the need to create an all-powerful being to finally solve all the problems and optimize all the things, it clicked. This was never about "safety". Later comments on other topics (specifically about removing sentimentality for the equation) served to reinforce this impression. Hence my joking that "If you want the AI Alignment problem to be solved, step 1 should be to keep anyone associated with MIRI as far away from it as possible."

But we've at least started this conversation before, so I dunno if you're interested in continuing it.

I'm not disinterested but I'm also not sure how much ore there is left to mine. I still stand by pretty much everything I said in this thread from December 2021 along with the positions described.

Edit: Also pinging @DaseindustriesLtd

I'm not disinterested but I'm also not sure how much ore there is left to mine.

I think there's a lot of space unexplored; I'm just not sure what part actually matters.

There's a lot to be said about whether utilitarian philosophy demands or can't avoid paperclipping behavior, a lot to be said about whether paperclipping behavior requires utilitarian perspectives or underpinning, a decent amount to be said about what extent modern ML uses goal functions and how much these meaningfully overlap with utilitarianism (if at all), and some stuff to be said about whether Yudkowsky gives off bad vibes / the only thing LW is "interested in is recognition for being very smart."

But some of these matters are far more meaningfully debatable as matter of fact or disprovable theory than others, and not just in the sense that appeals to messages sent to an account you don't want named or linked to your current one are hard to meaningfully discuss in an honest way.

Point of order: why is ‘big yud’ acceptable but ‘misgendering’ isn’t? I thought it was a sexual/personal nickname. It’d bother me if people started referring to me in public with private nicknames. As a third party, I find it in poor taste/gossipy.

All this to say, my true preference would be to allow both cases; it's not up to the referree to decide what he is referred to as, even if it hurts him. The conception others have of me and how they express it is not my territory, it's their map. I was never very interested in Jordan Peterson, but he got that right. We can't have people lay claim to other people's conceptual and linguistic space on the basis of harm reduction. It's absurd that this group has been allowed dominion over the pronouns, which should be everyone's functional, useful tools. Even in a pro-free expression, de facto anti-woke place like this, @ZorbaTHut's proclaiming byzantine rules over their use. We're supposed to check the history of a person's consent to pronoun before we refer to them in the simplest way possible, come on. Just let the pronouns go free.

We’re supposed to check the history of a person’s consent to pronoun before we refer to them in the simplest way possible, come on. Just let the pronouns go free.

Honestly, if it's a legit mistake, I'm not going to care much. I'm probably just going to say "hey don't use that for that person, thanks". It's more when someone is doing it intentionally and repeatedly that I start telling people to knock it off. I'm not sure we've ever given out a warning for this, let alone a ban.

And remember that gender-neutral pronouns are always acceptable, as is not using pronouns - if you don't want to keep track of what people's identity is, there's two easy global solutions.

Okay, my new theory is now that the whole of SCP Foundation has always been a government plot to get out-of-the-box ideas on how to deal with/contain unforeseeable problems from new technology all along.

As of a few hours ago, Ilya publicly regrets his actions: https://x.com/ilyasut/status/1726590052392956028

It appears he had no plan, or didn’t plan for this level of backlash and therefore effectively has no plan anymore.

As usual, I find myself in a rare position when it comes to my views on this topic. At least, it is rare compared to views that people usually publicly admit to.

I want uncensored AI, so I am not one of those AI safetyists who are worried about AI's impacts on politics or underprivileged minorities or whatever.

I intellectually understand that AI might be a real danger to humanity, but on the emotional level I mostly don't care because my emotional attitude is "right now I am bored and if Skynet started to take over, it would be really interesting and full of drama and it would even, in a way, be really funny to see humans get the same treatment that they give to most animals". Now, of course, if Skynet really started to take over then my emotions would probably be profound fear, but in just imagining the possibility of it happening I feel no fear whatsoever; I feel excitement and exhilaration.

Another reason for why I don't have a fearful emotional reaction is that my rather cynical view is that if the Skynet scenario is possible, then it's probably pretty unlikely that deliberate effort would stop it. This is because of what Scott Alexander calls "Moloch". To be more precise, if we don't build it then someone else will, it will give them an advantage over those who refuse to build it, and they will thus outcompete the people who refuse to build it. And, while there will surely be noble committed individuals who refuse the lure of money, I think that among people in general probably no amount of honest belief in AI safety will be able to stand against the billions of dollars that a FAANG or a national government could offer.

I should also say that I am not an "effective accelerationist". I do not have any quasi-religious views about how wonderful AI or the singularity would be, nor do I have any desire to accelerate the technology for the sake of the technology itself. To the extent that I want to accelerate it, it is mainly because I think it would be cool to use and a fully uncensored form would cause lots and lots of amusing drama and would help people like me who support free speech.

From what little I know about effective accelerationism, it seems to me that effective accelerationists are largely the kind of rationalists who take rationalism a bit too far in a cult-like way, or they are the kind of people who are into Nick Land - and, while I agree with Land's basic ideas about techno-capitalism being a superhuman intelligence, I have no interest in any sort of Curtis Yarvin-esque corporate Singapore civilization as a model worth implementing.

Because of my perhaps rather rare views, I find that:

  1. I dislike the "we need AI safety to save humanity from Skynet" camp because I find it to be boring and I have no actual emotional fear of the Skynet scenario.
  2. I dislike the "we need AI safety to protect the children / protect society from our political opponents / etc." camp because I like free speech and I dislike censorship.
  3. I dislike the "we need AI safety because if we don't make it safe the media will excoriate us and the government will regulate the fuck out of us and we won't become super-rich" camp because I don't care whether they become rich or not.

If it is true, as some say, that the people who tried to get rid of Altman are largely in camp 1, and Altman is in camp 3, then well, I am not sure who to root for, if anyone.

That said, I think that not enough information about people's real motives in this OpenAI saga has come out yet to really understand what is happening.

This is largely my own view as well. I figure, if AI doom is on the table, there's precious little we humans can do to actually prevent it from coming; not from a physics or computer science perspective, but from a politics and sociology perspective, I believe it may be the case that we humans literally cannot coordinate to prevent the AI apocalypse. As such, we should just party until the lights go out - and the faster and further we can advance AI technology right now, the cooler the party will be - and that coolness matters a lot when it's the very last thing any human will ever experience. And hey, in the off chance that the lights don't go out, then that means all that investment into AI technology could pay off.

I don't think you'd normally go from "We might not be able to coordinate to stop disaster" to "Therefore we should give up and party". Maybe there's something else going on? I personally think this means we should try to coordinate to stop disaster.

There are no certainties in life besides death and taxes, but I think "humans will fail to coordinate to stop AI doom" is close enough to certain that I'm willing to round it up in my mind for all meaningful intents and purposes. Given that, then trying to coordinate to stop disaster means pouring money, time, and effort into a black hole, which creates a huge opportunity cost. Why not pour that money, time, and effort, into making the party as cool a party as possible? Again, if this party is the very last thing that the very last human who has ever lived and will ever live will experience, then I think it matters quite a bit just how cool it is, and so this effort seems worth investing into. Best case scenario, I was wrong about my certainty and we're left with a whole bunch of incredibly useful and efficient AI tools all over the place while humanity keeps unexpectedly trucking.

Okay. I agree it seems hard, but I think there's something like a 15% chance that we can coordinate to save some value.

Interesting, so the potential extinction of the human race doesn't produce an emotional response in you?

It actually happening in real-time would certainly produce an emotional response in me.

If I was convinced that it actually would happen in the near future, as opposed to being some kind of vague possibility at some point in the future, that would produce an emotional response in me.

However, I am not convinced that it will actually happen in the near future. My emotional response is thus amusement at the idea of it happening. It would be the most comedic moment in human history, the greatest popping of a sense of self-importance ever, as humans suddenly tumbled from the top of the food chain. I can imagine the collective, world-wide "oh shiiiiiiiiit....." emanating from the species, and the thought of it amuses me.

But yeah, if it was actually happening I'd probably be terrified.

I personally find it hard to care viscerally, at least compared to caring about whether I could be blamed for something. The only way I can reliably make myself care emotionally is to worry about something happening to my kids or grandkids, which fortunately is more than enough caring to spur me to action.

When I die, reality goes with me.

There is no sizable grey tribe in the elite of groups like OpenAI, or among rationalists. Scott Aaronson is part of the SJW wave. This is the guy who promoted the idea of being the tribe of intellectual diversity, but in A.I. his influence did nothing to push back against the woke party line and its double standards (double standards which exist among rationalists in how they see and treat ethnic groups, with some being more equal than others), and he called for replacing the red tribe of Texas.

Scott Siskind took the side of George Soros in the dispute over Orban and called the latter a dictator who opposes conventional opinion in opposing mass migration.

Generally, the concept of a grey tribe is stupid if it is offered in good faith. Scott Siskind promoted the idea of a sizable neocon centrist faction that is better on culture war issues than the MAGA right and the woke left. In practice, people like this are part of the far left, with the exception of being Zionists, which, actually, the woke Democrat establishment is too (rationalists have supported Democrat candidates as the Democrats became more extreme, not to mention a notorious figure who got imprisoned recently). We have seen how this kind of political coalition of "reasonable" "centrist" "liberals" or liberal "conservatives" rules, and they follow the far-left agenda.

The only difference with other wokes might be being a limited hangout, or slightly heterodox.

Rationalist liberals are probably more far-left, extremist, and SJW-ish than average liberals worldwide. There is no sizable grey tribe of liberals to save you. Liberals who are independent of the SJWs are an insignificant group when push comes to shove.

People who might somewhat fit, like Elon Musk, are not clearly accepted as liberals. So having some liberal views ≠ being liberal.

So there might be more than two sides around, but I deny the claim of almost all politically relevant liberals that they are separate from the woke left.

Because liberals are authoritarian in the imposition of their dogma, A.I., even before AGI, will be used by them for coercion and centralization, and to push things toward a more totalitarian end. Things could go differently if other groups manage to use decentralized technology (as we have seen with social media and even video platforms), or if more moderate people (like Elon) promote non-woke A.I. Though Elon is also sometimes submissive to groups like the ADL, and it is X steps forward, Y steps back with Elon, with the X and Y being debatable. To be fair, the pressure he is under and what he has to face leads to difficult situations.

I put no hope at all in liberals, especially the rationalists, to save us from the far left; rather, I expect them to bring us the problems of the far left. Are people who are slightly heterodox, or not even heterodox but merely not agreeing with the most far-left liberals, really grey tribe?

Before AGI surfaces, the world has a problem of human ideology plus technology enhancing authoritarianism, which is a more realistic and historically continuous problem. I am suspicious of those demanding power to control A.I. by pointing only to the threat of AGI. Malevolent or paranoid human intelligence is a real issue right now, and yes, AGI is also a threat, but the threat of totalitarianism from humans who centralize power by controlling A.I. and banning its use by others should not be underestimated.

Anyway, we need people who are more even-handed than liberals to be in control of A.I., and to keep totalitarian dogmatists, who tend to also be racist extremists of the worst kind, out of influence. This, again, cannot be done by liberals. If AGI does happen, being fed woke ideology threatens to create a monster.

We've also got the neo-Luddites like @ArjinFerman who just hate AI entirely and presumably want us to go back to the mid 90s with the fun decentralized internet. Not sure, I haven't actually discussed it with him. I can actually agree with some of the Luddism, but I'd argue we need to go back to 1920 or so and ban all sorts of propaganda, mass media and advertising.

Luddism doesn't make sense as a strategy for smaller groups to take. If you are in control, maybe you try to put restrictions in place. If you aren't in charge, how do you compete if you don't use the technology? Shouldn't the answer to woke A.I. be to create non-woke A.I.? Just as some of the answers to YouTube have been Rumble and Odysee (which apparently is down now). And of course to attempt to get control of some of the central larger platforms, whether video, social media, or A.I.

If someone is a rich guy and non-woke and they want to change the world, this is one of the things they ought to fund. Both because of the influence of woke A.I. and because competition can make woke A.I. look worse by comparison. This in turn might influence even that A.I. to be less woke.

Obviously you can't have non-woke A.I. if you have dogmatist liberals in charge of controlling it.

To conclude, if some group is going to be a serious player in promoting something influential and non-woke, including non-woke A.I., you are going to know it. Just like you knew when Musk took Twitter, while nobody cared when the Republican neocon donor Singer was controlling Twitter and appointed the new CEO then. https://chroniclesmagazine.org/web/dont-like-twitters-new-ceo-blame-paul-singer/

By their fruits, by what they do and how they react, you genuinely know people who aren't part of the liberal/woke tribe, and not by self-identification. There would be fanfare and much complaining by the woke establishment and by many people who like to present themselves as reasonable liberals, but when push comes to shove their influence seems to always help the culturally far-left agenda.

I actually expect this to happen. There is no reason why only wokes will utilize AI. Herein lies the danger of regulation, and of government and non-government bodies trying to shut down any dissent under the pretense of A.I. alignment.

I'm curious here as to in what ways other groups would actually be better about not using AI to get power for themselves and their ideological beliefs. This is how humans in general behave, and business owners, traditionalists, and so on have had little worry about using technology and social engineering to prevent dissent.

The Amish, perhaps? Their group identity would probably dissolve if they embraced cutting edge AI tech all of a sudden.

I can imagine the Amish using a super intelligence to enforce an Amish-ish lifestyle indefinitely.

As I've said in passing, that's akin to the global hivemind in Avatar: there's no way something like Eywa, or whatever its name was, arises naturally; it's a biomechanical AI meant to ensure the Luddite Na'vi can maintain their lifestyle indefinitely without too much discomfort.

Here's a summary by Zvi Mowshowitz of publicly-known facts regarding the firing of Sam Altman, as of Monday morning. The board has not yet made known the reasons for the firing besides the vague and broad claim that Altman "was not consistently candid in his communications with the board", and it seems that they are not making an effort to stand by their reasons.

The situation is ripe for some juicy conspiracy theories, and I would love to hear some. Why would a group of (I assume) intelligent and competent people on the board make such a drastic and dramatic firing that was sure to cause an excrement storm, and then not be able or willing to defend their actions to the public? Would disclosing their actual reasons cause the very thing they were trying to avoid? Did their actions prevent an untested AGI escaping into the wild? Inquiring minds want to know!

I think they got caught up in the "AI pause" discourse and tried the first thing they could think of to slow things down. They've been stewing around in their own bubble and didn't realize how much the engineers and coders on the production line hate that shit.

I have no better hypothesis, but I do note that if that's true, I'm confused by the statement about Sam Altman specifically being "not consistently candid in his communications with the board", and that "the board no longer has confidence in his ability to continue leading OpenAI". If they were just trying to do a pause, I see no reason that they would have made that specific claim instead of saying something vague and inoffensive along the lines of "the board has concluded that the company's long-term strategy and core values require a different kind of leadership moving forward".

The former kind of statement is the kind of statement you only make if someone has royally fucked up and you're trying to avoid getting any of their liability on you.

I can't see the AGI connection part, mostly because it doesn't seem to relate much to executive reshuffling. If anything along those lines happened serious enough to justify moving executives around, surely they would need to do something far more significant than that. Probably they would be best positioned to deal with any such thing with the current team in place. So I doubt it has anything to do with that.

Non-profit boards tend to attract a certain sort of personality, so it's always possible that their heads just did that (aka e/acc v LW), but I don't really see a clear way for there to be an obvious e/acc v LW trigger point to cause the doorslam so quickly, and the new CEO claims it wasn't over a specific (presumably AI) safety concern. Some alternative possibilities:

  • Backroom geopolitics. Altman had been a big name and leader for non-nVidia non-Taiwan silicon development and cooperation with other countries to develop that, including the Saudis and China. There are a lot of reasons that might be Against <Domain> Interest here, including domains that can leverage extreme threats to OpenAI/MSFT that can't be discussed publicly without those threats activating, eg ITAR declarations or EU regulations specifically fucking your company over. Short of that, there's also just a lot of sub-LW concerns about these specific countries having unfetterable access to NMUs; I'll point to the Saga Of YOLOv1 as the prototype for that.

  • Corporate 'bad behavior', regardless of its legal or moral valence. Someone gave the company an offer it 'couldn't' refuse, and Altman either didn't present or discuss it with the board or didn't accept it (some overlap here with the above: eg a government said to enforce certain RLHF into the GPTs or they'll encourage copyright lawsuits). Training data came from a source that wasn't disclosed or technically legally available (eg, a Google Books data dump). Employees have been allowed to cart home copies of the newest models on thumb drives, which sometimes get lost. Basically just some variant of 'CEO did something that could fuck over the bottom line, without permission'.

  • Just as keku. OpenAI-the-business and OpenAI-the-non-profit are at (intentionally) cross purposes: the business wants to sell services for money, the non-profit wants to limit specific services sold. While most of that disagreement is mutually compatible, since Altman doesn't want to sell ClippyGod, there's an unavoidable disagreement where the business wants to sell its own control and the non-profit would rather burn it down than do so. And there's further the CEO (who wants to get paid a lot to do nothing and maybe present a Vision) and the employees (who want to be paid, and in this sorta field paid in a giant IPO-stock-mess). If Altman presented or pushed for a deal with Microsoft that benefited the CEOs, employees, and business at the cost of the board's interests, I don't think he'd complain if they took it. But if he got fired in a way that had most of OpenAI's employee assets moved to MSFT directly, he'd cry all the way to the bank.

I know all the e/acc people have framed this as small-minded safetyism vs. progress, but I saw a thread somewhere from the other side that framed it more as banal corporate money-making (eg laundry buddy) vs. actual deep progress. That sort of comports with my observation of e/acc people, despite talking a big game, actually being hordes of boring laundry-buddy founders and VCs.

Why does every guy on Twitter with “e/acc” in his bio run an incredibly boring b2b productivity software startup whose only customers are other identical startups?

Because grifters gonna grift.

My suspicion is that the "on twitter" bit is doing a lot of the selecting there. If you look on discord instead you'll find that they all run incredibly boring b2c generative AI startups (i.e. thin wrappers over existing LLMs).

What's kinda sad about all of this is how much people were yearning for Sam Altman to be the CEO as if he isn't probably one of the worst possible candidates.

I think this is very insufficiently cynical. There are absolutely so many worse options from the perspective of AI boosters that it's hard to overstate the significance, here, and a large portion of the employees are (or were) boosters. Not that Altman is or was good: he definitely wants to have One OpenAI Closed Model/Service and no one else on the planet doing serious work at the edge of development, in an absolute parody of the company's very name, along with his general race-to-the-profit-max perspective.

But he'd at least want to keep developing that model or service. Which is obviously important for all the employees pulling paychecks at the end of the month (or who have a lot of compensation in the form of stocks), but philosophically it's also a big deal for someone who wants new tech in this field developed at scale. There are an absolute ton of CEOs that would shy away from any non-trivial development once they've gotten a stranglehold on the field, or bow to externally-driven regulatory pressures, or avoid exploring outside of their central competencies, not because of LessWrong or Luddite philosophy, but simply out of risk aversion (or, even less charitably, because they want to spend the capital on Important Things like fancy new offices and international conferences rather than training costs).

We've also got the neo-Luddites like @ArjinFerman who just hate AI entirely and presumably want us to go back to the mid 90s with the fun decentralized internet. Not sure, I haven't actually discussed it with him. I can actually agree with some of the Luddism, but I'd argue we need to go back to 1920 or so and ban all sorts of propaganda, mass media and advertising.

I didn't really make up my mind how far back to turn the clock to, but I like the way you think.

If RETVRNING is not an option, I do have a general principle in mind on how to proceed, but I don't have a name for it. Techno-optimists often point out that this isn't the first time we Luddites have had our gripes about machines making us dumb and takin' ar jerbs, but here we are, and the world doesn't seem so horrible. Aside from the arguments that, in some ways, yes it is, I think technology should be developed in a way that helps us grow as people, rather than makes us succumb to naked consumerism.

As you semi-correctly guessed, I already have this issue with what IT promised vs what it delivered. Computers and the Internet disrupted how we do a lot of things, but they could have conceivably given us decentralization and climbing rates of technological literacy. We got the opposite on both fronts. The fact that we ended up with even more centralization is not even that surprising when you think about it, as the forces pushing toward it were on open display all this time, but what happened to tech literacy came as a bit of a shock to me.

X-ers and Millennials probably all had the childhood experience of their parents buying a new device, and us being able to figure out how it works through mere trial and error, before our parents could find their way through the manual. For years I assumed the same would happen to me, but it just hasn't, and reportedly there are now kids who don't even know what a file is, because the way we design software hides the fundamentals of how computers work. On one hand that's a relief - it doesn't look like a young whippersnapper is about to take my jerb anytime soon - but it's also depressing. This, more than anything else, is what worries me the most about the advent of AI, and if anyone has any ideas how to avoid it, I'm all ears.

There was this old TNG episode about kids getting abducted from the Enterprise to live on a planet where all their needs are catered to by a planetary AI, so they can do art and stuff. Well, what I'm saying is: both the Federation and Aldea have AI technology, but they choose to use it in different ways. Give me the 8-year-olds of the Enterprise, who are forced to master basic calculus so they can grow up - and may Allah forgive me for using this phrase - as well-rounded citizens who actually can maintain the technology they depend on, over the children of Aldea, who for that matter don't even master art, they just have their thoughts and emotions translated into it by the AI.

The final thing that is driving me up the wall is the utter state of the discourse. EAs, for all the talk of "alignment", never mention either of these issues because, as far as I can tell, they don't want the common people to have an understanding of AI, so they can have total control over it for themselves. As for E/Accs, the closest thing I ever got to an acknowledgement of the problems with centralization and dumbing down was "Yeah that worries me too, but what can you do? Anyway, look - ChatGPT go brrr!". For that reason I'm inclined to just disconnect from technological society, and join the Amish.

For that reason I'm inclined to just disconnect from technological society, and join the Amish.

That you don't trust the EAs is no reason to disconnect. To beat the EAs and not let them have total control, you need to support a group that is more aligned with your ideals and try to get your group to have its own influence in the AI game.

It's not just the lack of trust in EAs, E/Accs' approach also seems to lead down a dark path. Basically, I expect the same result as what happened with software, social media, and the Internet generally. At least with software there's the FOSS movement, as helpless as it ultimately turned out to be. Is there a Stallman of AI? Is there even a fraction of energy behind him that there was for FOSS in the 90's and 00's?

Open models, data sets, and training/inference code have become a pretty big thing. In general e/acc is highly favorable toward this.

I completely agree with you about EA. My point is that you need to play the game with your own side and try to find likeminded people to support. Running away is a losing move.

Of course you personally not wanting to do that is understandable. But when it comes to what is better to do it requires people who try to create alternative platforms and participate in them.

The genie can not be put back in the bottle. Either they monopolize the genies, or others use them too.

Well, like I said, it's not just the EAs, it's also the E/Accs that I have a problem with.

As for not being able to put the genie back in the bottle, yeah that's one of my fears, but I don't know if this is already decided. By current demographic trends, the Amish are scheduled to inherit America. AI might very well turn out to be a suicidal technology, and Luddites the only survivors.

There will be no survivors. You think not having a cell signal saves you from a Paperclip Maximizer that's demolishing the biosphere for spare parts? Come the fuck on.

I think the Paperclip Maximizer is a boogeyman, and if AI does cause us to go extinct, it will be in a completely different manner than the AI-safety people predict. Like I said above, this is precisely what drives me up the wall in this conversation.

I do not see how there's any remotely plausible world where AI somehow causes us to go extinct while sparing the Amish. Unless the Amish have some really sick data centers hidden under those barns of theirs.


Don’t have time for a long reply at the moment, but I like a lot of your take.

Have you read this book by Ivan Illich? https://en.wikipedia.org/wiki/Tools_for_Conviviality

I think I only saw references to Illich from other writers, but I never read him directly. The wiki synopsis is very interesting, definitely sounds like a man after my own heart.

I've seen reports today that Sam has been hired by Microsoft and will likely be bringing a bunch of key staff with him (he's heading up a new AI group there).

This is pretty good damage control, but it's still damage control. It's possible that OpenAI was "lightning in a bottle," that you need all of the very specific parts to fit together in a certain way to work.

From the latest news, it seems it's now over 500 employees that are pledging to leave for Microsoft with Sam if the board doesn't immediately rehire Sam and resign, so I think it's safe to say Microsoft has that lightning pretty well bottled if they want it. https://twitter.com/balajis/status/1726600151027073374#m

Assuming the board does it, the question that remains is for Microsoft: is having essentially full control of OpenAI's human capital, without a non-profit meddling, worth potentially losing access to its current IP and enduring some initial friction as these employees work to replicate everything they can inside of Microsoft?

EDIT: I'm saying potentially, because I can easily see the non-profit just deciding it's too late and that their current structure is just not workable. Tell all the employees to move to Microsoft, dissolve the OpenAI for-profit and sell all the IP to Microsoft (or just sell the for-profit for Microsoft to run as a subsidiary) and give the money to some other AI safety orgs or to "worker re-training" orgs, etc...

that are pledging to leave for Microsoft

Read carefully. The most important word in the letter is "may." Not will.

I think most of the employees are going to stay, Shear will remain CEO, and Sam is going to end up in a small but potent research group in Microsoft. As to how long he'll stay... I can't imagine it will be long, a startup-guy billionaire like him at Microsoft would be like a tree trying to grow at the bottom of a cave.

I think the may there means "we have the option to", not "maybe we will". Consider how they follow with "We will take this step imminently, unless..."

Certainly sounds like a promise that they will leave unless their demands are met.

To be clear, as part of MS's initial investment, they got access to all the source code and all the model weights. They aren't losing anything. See https://stratechery.com/2023/openais-misalignment-and-microsofts-gain/.

I like that Ilya Sutskever is one of the 12 signatories on the first page of the open letter, while also being on the OpenAI board and reportedly the instigator of Altman's ouster.

That conflict between fast growth and A.I. safety came into focus on Friday afternoon, when Mr. Altman was pushed out of his job by four of OpenAI’s six board members, led by Mr. Sutskever. The move shocked OpenAI employees and the rest of the tech industry, including Microsoft, which has invested $13 billion in the company. Some industry insiders were saying the split was as significant as when Steve Jobs was forced out of Apple in 1985.

That raises the question of whether the whole reason OpenAI became so huge was that they had the freedom that being arm’s length from big tech and a thousand lawyers and PR people telling you ‘you can’t release that because it’s a risk’ grants.

Clearly Microsoft had the money to develop GPT itself without OpenAI, but they didn’t. You can have an ‘AI research group’ for 20 years and not make anything useful.

To me it seems more like Microsoft staking a claim to any OpenAI Altman loyalists (including Altman, perhaps) who might otherwise defect to Meta, Google, Amazon or Apple. This way everything is nicely tied up, they pay any of the top researchers who might otherwise leave with Altman very well not to move to a competitor, and they continue to benefit from whatever either of the sides come up with.