Culture War Roundup for the week of November 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

NYT: Before Altman’s Ouster, OpenAI’s Board Was Divided and Feuding

The NYT scooped everybody. We finally know why Sam Altman was fired:

A few weeks before Mr. Altman’s ouster, he met with [OpenAI board member Helen Toner] to discuss a paper she had recently co-written for Georgetown University’s Center for Security and Emerging Technology.

Mr. Altman complained that the research paper seemed to criticize OpenAI’s efforts to keep its A.I. technologies safe while praising the approach taken by Anthropic, according to an email that Mr. Altman wrote to colleagues and that was viewed by The New York Times.

In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company, particularly at a time, he added, when the Federal Trade Commission was investigating OpenAI over the data used to build its technology.

Ms. Toner defended it as an academic paper that analyzed the challenges that the public faces when trying to understand the intentions of the countries and companies developing A.I. But Mr. Altman disagreed.

“I did not feel we’re on the same page on the damage of all this,” he wrote in the email. “Any amount of criticism from a board member carries a lot of weight.”

Senior OpenAI leaders, including Mr. Sutskever, who is deeply concerned that A.I. could one day destroy humanity, later discussed whether Ms. Toner should be removed, a person involved in the conversations said.

There are a few other minor issues mentioned in the article, but this sounds like the big one. Rationalist/EA types take being told that they can't criticize "allies" in public very negatively, a position I am quite sympathetic to. Helen Toner works at an Open Philanthropy-funded think tank, so she's as blue blood an effective altruist as they get. My guess is that this was the moment that she decided that Sam had to be eliminated before he took control of the board and jeopardized OpenAI's mission.

What gets me is how disingenuous this makes the original firing announcement: "Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities." It sounds like he was perfectly candid. They just didn't like what he was about.

In completely unrelated news, ChatGPT has been down for the last three hours.

I’ll be honest, I have come down on the side of Toner being correct and Altman deserving to be fired. I do think going slow with AI is in humanity's best interest, though I don’t know how to do it. All of humanity's evolutionary competitive pressures force us forward.

This feels legitimately like a tech where we aren’t sure how it will end up. I prefer having safety people in charge (even though I think they're probably usually my political enemies). I believe Oppenheimer feared nukes could ignite the atmosphere, but he thought it was a low probability. I agree with the people who think AI killing us all is a plausible scenario. I don’t know how to put odds on that.

The other thing is Hanson's work. I don’t know why we haven’t met aliens. It appears to me a great filter exists, and AI feels like it could be that.

It’s not going to be a Hollywood movie where somehow the human spirit wins at the end of the day. If the AI is off, it will just kill us all.

If you think it might be a great filter, I don’t think delaying it 100 years is a big deal, or even a thousand years. That's a minuscule amount of time in the galaxy.

Now we have capitalism doing its thing, which is usually good for pushing tech forward, but it feels different if you think it could be existential. If my choice is between going slow and watching China do it, then I guess I prefer Microsoft’s MBAs.

I also predicted Sam would be back, at least the first time. His now-second return I’m a bit surprised by.

I’ll be honest, I have come down on the side of Toner being correct and Altman deserving to be fired.

I think if the board had just led with that a lot of people would have agreed. "Leader tries to dismantle the structures that hold him accountable" is a problem that people know very well, and "get rid of leader" is not a controversial solution to that problem.

But in fact the board accused Altman of being a lying liar and then refused to stand behind that accusation, even to the subsequent CEOs.

There's gotta be something else going on.

I mentioned this deep in the guts of some other thread, but everyone is modeling this race wrong and buying into the kayfabe too much. Your mental model of this race should consist of a bunch of sinister scheming wizards gazing into their scrying orbs and hissing "Ultimate power must be MINE ALONE at any cost!" Everything else is varying degrees of prevarication and misdirection.

Some of these necromancers want to crush their enemies, get rich, normal shit like that. Others want to create god, or become god, or uplift kangaroos to sentience so they can marry one, or god knows what. Point is, the safetyists never stood a chance. At best they're pawns to be swept aside when they become inconvenient. If ultimate godlike power is on the table, there was never going to be anything but a mad scramble for it.

I mean, I think at least some of the people involved are quite clear that their goal is a "gameboard-flipping" act which results in a world that is permanently safe from anyone who could destroy it. Probably by seizing ultimate power.

I don't think sufficiently godlike power for world domination (as in "gaining control of the world without destroying almost everything of value") is actually on the table, though.