Culture War Roundup for the week of November 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

To bring up another post from last week, I'm going to go ahead and repost @justcool393's piece on the Sam Altman/OpenAI/Microsoft situation, since she posted it a few hours ago and right before the last thread went down.

Here's her writing:


Another day, another entrant into the OpenAI drama. Emmett Shear is the new interim CEO of OpenAI.

I don't know why it was surprising to people that Sam wouldn't come back. The company was meant to be subservient to the nonprofit's goals, and I'm not sure why it was apparently "shocking" that the attempted coup from Sam's side (you know, the whole effectively false reporting that Sam Altman was about to return as CEO) failed.

The OpenAI board has hired Emmett Shear as CEO. He is the former CEO of Twitch.

My understanding is that Sam is in shock.

https://twitter.com/emilychangtv/status/1726468006786859101

What's kinda sad about all of this is how much people were yearning for Sam Altman to be the CEO, as if he isn't probably one of the worst possible candidates. Maybe this is just a bunch of technolibertarians on Twitter or HN who think the ultimate goal of humanity is how many numbers on a screen you can earn, but the amount of unearned reverence towards a VC as the person to lead the company is amazing.

In any case, here's to hoping that Laundry Buddy won't win out in the rat race for AGI, lest we live in a world optimized for maximum laundry detergent. Maybe we'll avoid that future now with Sam's departure.

Anyway, I'll leave this here to munch on, which I found in the HN thread:

Motte: e/acc is just techno-optimism, everyone who is against e/acc must be against building a better future and hate technology

Bailey: e/acc is about building a techno-god; we oppose any attempt to safeguard humanity by regulating AI in any form.

Around and around and around.

https://twitter.com/eshear/status/1683208767054438400


I'm reposting it here because I'm convinced, like many other residents, that the ongoing drama over who controls AI development has far-reaching implications, likely on the scale of major-power geopolitical events, if not ultimately even greater.

To add a bit to the discussion to justify reposting: I think many of these debates around AI Safety versus Accelerationism are extremely murky because so many people in secular, rationalistic circles are extremely averse to claiming religious belief. It's clear to me that both AI Safety and Accelerationism have strong themes of classical religion, and they seem to be two different sects of a religion battling it out over the ultimate ideology, potentially similar to the early Orthodox Christians versus the Gnostics.

Alternatively, @2rafa has argued that much of the E/Acc (effective accelerationism) crowd consists of bored technocrats who just want to see something exciting happen. I tend to agree with that argument as well, given how devoid of purpose most of the technocratic social world is. Religion and religious-style movements tend to provide that purpose, but when you are explicitly secular, I suppose you have to get your motivation elsewhere.

We've also got the neo-Luddites like @ArjinFerman, who just hate AI entirely and presumably want us to go back to the mid-90s with the fun decentralized internet. Not sure; I haven't actually discussed it with him. I can actually agree with some of the Luddism, but I'd argue we need to go back to 1920 or so and ban all sorts of propaganda, mass media, and advertising.

Anyway, clearly the technological battle for the future of our civilization continues to heat up. The Luddites seem out, but may have a surprising last-hour comeback. The woke/political-left-leaning folks seem to be strongly in charge, though the OpenAI scandal points to trouble in the Olympian heights of Silicon Valley AI decision makers.

Will the Grey Tribe use AGI to come back and finally recover the face and ground it has lost to the advancing SJW waves? Who knows. I'm just here for the tea.

Here's a summary by Zvi Mowshowitz of publicly-known facts regarding the firing of Sam Altman, as of Monday morning. The board has not yet made known the reasons for the firing besides the vague and broad claim that Altman "was not consistently candid in his communications with the board", and it seems that they are not making an effort to stand by their reasons.

The situation is ripe for some juicy conspiracy theories, and I would love to hear some. Why would a group of (I assume) intelligent and competent people on the board make such a drastic and dramatic firing that was sure to cause an excrement storm, and then not be able or willing to defend their actions to the public? Would disclosing their actual reasons cause the very thing they were trying to avoid? Did their actions prevent an untested AGI escaping into the wild? Inquiring minds want to know!

I think they got caught up in the "AI pause" discourse and tried the first thing they could think of to slow things down. They've been stewing around in their own bubble and didn't realize how much the engineers and coders on the production line hate that shit.

I have no better hypothesis, but I do note that if that's true, I'm confused by the statement about Sam Altman specifically being "not consistently candid in his communications with the board", and that "the board no longer has confidence in his ability to continue leading OpenAI". If they were just trying to do a pause, I see no reason that they would have made that specific claim instead of saying something vague and inoffensive along the lines of "the board has concluded that the company's long-term strategy and core values require a different kind of leadership moving forward".

The former kind of statement is the kind of statement you only make if someone has royally fucked up and you're trying to avoid getting any of their liability on you.

I can't see the AGI connection part, mostly because it doesn't seem to relate much to executive reshuffling. If anything along those lines happened serious enough to justify moving executives around, surely they would need to do something far more significant than that. Probably they would be best positioned to deal with any such thing with the current team in place. So I doubt it has anything to do with that.

Non-profit boards tend to attract a certain sort of personality, so it's always possible that their heads just did that (aka e/acc v LW), but I don't really see a clear way for there to be an obvious e/acc v LW trigger point to cause the doorslam so quickly, and the new CEO claims it wasn't over a specific (presumably AI) safety concern. Some alternative possibilities:

  • Backroom geopolitics. Altman had been a big name and leader for non-nVidia non-Taiwan silicon development and cooperation with other countries to develop that, including the Saudis and China. There are a lot of reasons that might be Against <Domain> Interest here, including domains that can leverage extreme threats to OpenAI/MSFT that can't be discussed publicly without those threats activating, eg ITAR declarations or EU regulations specifically fucking your company over. Short of that, there's also just a lot of sub-LW concerns about these specific countries having unfetterable access to NMUs; I'll point to the Saga Of YOLOv1 as the prototype for that.

  • Corporate 'bad behavior', regardless of its legal or moral valence. Someone gave the company an offer it 'couldn't' refuse, and Altman either didn't discuss it with the board or didn't accept it (some overlap here with the above: e.g. a government said to enforce certain RLHF into the GPTs or they'll encourage copyright lawsuits). Training data came from a source that wasn't disclosed or technically legally available (e.g. a Google Books data dump). Employees have been allowed to cart home copies of the newest models on thumb drives, which sometimes get lost. Basically just some variant of 'CEO did something that could fuck over the bottom line, without permission'.

  • Just as keku. OpenAI-the-business and OpenAI-the-non-profit are at (intentionally) cross purposes: the business wants to sell services for money, the non-profit wants to limit which specific services get sold. While most of that disagreement is mutually compatible, since Altman doesn't want to sell ClippyGod, there's an unavoidable disagreement where the business wants to sell its own control and the non-profit would rather burn it down than do so. And then there's the CEO (who wants to get paid a lot to do nothing and maybe present a Vision) and the employees (who want to be paid, and in this sort of field paid in a giant IPO-stock-mess). If Altman presented or pushed for a deal with Microsoft that benefited the CEO, employees, and business at the cost of the board's interests, I don't think he'd complain if they took it. But if he got fired in a way that had most of OpenAI's employee assets moved to MSFT directly, he'd cry all the way to the bank.