Culture War Roundup for the week of November 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


NYT: Before Altman’s Ouster, OpenAI’s Board Was Divided and Feuding

The NYT scooped everybody. We finally know why Sam Altman was fired:

A few weeks before Mr. Altman’s ouster, he met with [OpenAI board member Helen Toner] to discuss a paper she had recently co-written for Georgetown University’s Center for Security and Emerging Technology.

Mr. Altman complained that the research paper seemed to criticize OpenAI’s efforts to keep its A.I. technologies safe while praising the approach taken by Anthropic, according to an email that Mr. Altman wrote to colleagues and that was viewed by The New York Times.

In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company, particularly at a time, he added, when the Federal Trade Commission was investigating OpenAI over the data used to build its technology.

Ms. Toner defended it as an academic paper that analyzed the challenges that the public faces when trying to understand the intentions of the countries and companies developing A.I. But Mr. Altman disagreed.

“I did not feel we’re on the same page on the damage of all this,” he wrote in the email. “Any amount of criticism from a board member carries a lot of weight.”

Senior OpenAI leaders, including Mr. Sutskever, who is deeply concerned that A.I. could one day destroy humanity, later discussed whether Ms. Toner should be removed, a person involved in the conversations said.

There are a few other minor issues mentioned in the article, but this sounds like the big one. Rationalist/EA types react very badly to being told they can't criticize "allies" in public, a position I am quite sympathetic to. Helen Toner works at an Open Philanthropy-funded think tank, so she's as blue-blooded an effective altruist as they get. My guess is that this was the moment she decided that Sam had to be eliminated before he took control of the board and jeopardized OpenAI's mission.

What gets me is how disingenuous this makes the original firing announcement: "Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities." It sounds like he was perfectly candid. They just didn't like what he was about.

In completely unrelated news, ChatGPT has been down for the last three hours.

Ilya Sutskever ... thought Mr. Altman was not always being honest when talking with the board.

The lack of candour may have referred to this or to things not reported on in the article.

Luckily, a brand new article just dropped with details about that:

Some executives said they were getting questions from regulators and law-enforcement entities such as the U.S. attorney’s office in Manhattan over the charge of Altman’s alleged lack of candor, the people said. The truth was going to come out one way or another, they told the board.

People familiar with the board’s thinking said there wasn’t one incident that led to their decision to eject Altman, but a consistent, slow erosion of trust over time that made them increasingly uneasy. Also complicating matters was Altman’s mounting list of outside AI-related ventures, which raised questions for the board about how OpenAI’s technology or intellectual property could be used.

The board agreed to discuss the matter with their counsel. After a few hours, they returned, still unwilling to provide specifics. They said that Altman wasn’t candid, and often got his way. The board said that Altman had been so deft they couldn’t even give a specific example, according to the people familiar with the executives.

Not entirely related, but here's a particularly eye-popping quote:

OpenAI leadership and employees were growing increasingly concerned about being painted in the press as “a bunch of effective altruists,” as one of them put it.

Wait, why would being seen as EA be bad? If anything, wouldn't it be good? Or are people still kvetching over the Bankman-Fried stuff?

still unwilling to provide specifics. They said that Altman wasn’t candid, and often got his way. The board said that Altman had been so deft they couldn’t even give a specific example

This is profound weakness. Completely unacceptable for any functioning adult. How a group of them with hours of legal counsel advising them could still be this incompetent is baffling.

The link doesn't work for me - maybe this is explained elsewhere in the article, but going solely on the excerpt...

The board said that Altman had been so deft they couldn’t even give a specific example, according to the people familiar with the executives.

Horseshit. "Oh he was lying but we can't give you any examples because he's that good at lying" is the kind of excuse I would expect from a four-year-old, not a group of supposedly intelligent and qualified professionals. At this point I think that unless they actually give us the specifics, this all boils down to the GPT marketplace blowing up Poe and making a board member unreasonably angry.

I took the point to be adjacent to the one Scott made - wow, is it really that long ago? - last December about how the media rarely lies. I don't agree with how Scott frames the observation - I would have phrased it in terms of how the ways they lie are relatively subtle - but the observation itself, as distinct from the debate over how best to characterize it, is solid.

Skilled liars make as few plainly, literally false statements as they can while still spreading whatever narrative they want to spread. One of the many advantages of this is that there's rarely a clear-cut smoking gun someone in the board's position can point to. Instead, it's a matter of which facts they emphasize and which they omit, what they juxtapose with what in order to imply connections that may not actually exist, how they manipulate your emotions around aspects of their narrative, and how they take advantage of people's trust in them, or at least their willingness to give the benefit of the doubt, in situations that really are ambiguous.

So while I can see how the statement you quote is poor optics, I have no trouble imagining how it could be true.

I totally agree with you that it is possible for someone to be deceptive in a subtle manner like this, but that doesn't change anything about the obligation to make your accusations comprehensible. There's nothing about this type of deception that makes it impossible to describe - even something simple of the form "While the situation was actually x, Sam deceived us into believing that the situation was y" would work. If the deception is so subtle and mysterious in its effects that no concrete impact can be named, it wasn't a good enough justification for Sam's ouster.

"Oh he was lying but we can't give you any examples because he's that good at lying" is the kind of excuse I would expect from a four year old, not a group of supposedly intelligent and qualified professionals.

Sam Altman is a real business shark whose literal job for the last twelve years has been dealing with boards of directors and VC investors. Running circles around a shape-rotator like Sutskever is child's play for him. Running circles around an ivory-tower researcher like Toner is just as easy. McCauley doesn't strike me as a serious contender against someone who successfully wrestled Reddit away from Condé Nast either. And, tellingly, only D'Angelo managed to remain on the board of directors after Altman got his way. Scratch that, I have no idea how D'Angelo managed to survive the debacle.

Running circles around a shape-rotator like Sutskever is child's play for him.

Actually, reportedly, it was Anna Brockman crying and begging Sutskever to switch his allegiance that seemed to clinch it. Ilya had officiated Greg and Anna's wedding, held at the OpenAI office. Another point for Hanania's theory that women's tears win in the marketplace of ideas.

It's not like you even have to be an experienced business shark to out-argue people who say "hey employees, you know what, I know that we can all become ridiculously rich in the next couple of years, but guys... guys... AI might destroy humanity at some point so let's not become ridiculously rich".

Trying to stop people from developing AI is like trying to stop people from developing nuclear weapons. Obviously, having nuclear weapons gives one enormous benefits. So the idea that someone could talk the whole world out of trying to get nukes by just using intellectual arguments is absolutely ludicrous.

Imagine starting a company called "OpenNuclear". "Let's develop nuclear technology in a safe way, for the benefit of all humanity". And then expecting that somehow the world's talented engineers will just go along with your goal of nuclear safety, instead of going to work building nuclear weapons for various organizations for huge salaries and/or because of powerful emotional reasons like "I don't want my country to get attacked". I can't think of any example in history of humanity as a whole refusing to develop a powerful technology. Even if somehow the world temporarily agreed to pause AI research, that agreement would probably be dropped like a hot potato the second some major war broke out and both sides realized that AI could help them.

It's not like you even have to be an experienced business shark to out-argue people who say "hey employees, you know what, I know that we can all become ridiculously rich in the next couple of years, but guys... guys... AI might destroy humanity at some point so let's not become ridiculously rich".

That's been my issue with the entire "open letter calling for a moratorium" and the rest of it. When the share price drops just because the guy promoting commercial use of AI gets booted, we see how this plays out in reality. Market forces don't care about safety or alignment or paperclip maximisers or the rest of the beautiful Golden Age SF techno-optimism theories that the EA subset concerned about AI has been working on for years; they care about the magical eternal money-fountain that this technology promises to be. Microsoft and other companies are already selling their versions of AI to integrate into your business and pump productivity and profitability up to the moon and beyond. People are already using AI for everything from "write my term paper for me" to the increasing number of articles I see online that are gibberish but do their job of "fill space, get clicks, earn ad revenue".

Nobody is going to pause for six months while their competitors get to market first. That's what the idealists have buried their heads in the sand about: Microsoft partnered with OpenAI because (a) they were going to develop a marketable product fast and first, and (b) just as Altman told Toner, it kept the regulators happy: "oh yeah, we totally are working on security and safety, don't worry!"

But if "security and safety" stand in the way of "get our hands on the spigot of the money-fountain", guess which gets dropped? I think Sutskever and the board are learning that lesson the hard way now. Altman was telling them what they wanted to hear while making sure the funding kept flowing and the product was being developed. That's why they felt uneasy when it finally dawned on them that they weren't really in control of what was happening, and why they tried kicking him out (straight into the arms of Microsoft and now it seems returning like victorious Caesar to triumph over their corpses).

But the world did that with Atoms for Peace:

https://en.wikipedia.org/wiki/Atoms_for_Peace

See also the International Atomic Energy Agency and the Treaty on the Non-Proliferation of Nuclear Weapons. Countries like Japan have a highly developed nuclear industry, but they don’t have nuclear weapons.

Japan is often said to be a "screwdriver's turn" away from possessing nuclear weapons.

Because such latent capability is not proscribed by the Nuclear Non-Proliferation Treaty, this is sometimes called the "Japan Option" (as a work-around to the treaty), as Japan is considered a "paranuclear" state, being a clear case of a country with complete technical prowess to develop a nuclear weapon quickly.

But do they have the willpower to do so? This is the country that has first-hand experience with what nuclear weapons can do in the middle of a war, after all.

Countries like Japan have a highly developed nuclear industry, but they don’t have nuclear weapons.

Because when America occupied them after the war, it made damn sure Japan would never again get any notions about being a military power. It's why their military is known as the Japanese Self-Defense Forces:

The Occupation was commanded by American general Douglas MacArthur, whose office was designated the Supreme Command for the Allied Powers (SCAP). In the initial phase of the Occupation, from 1945 to 1946, SCAP had pursued an ambitious program of social and political reform, designed to ensure that Japan would never again be a threat to world peace. Among other reforms, SCAP worked with Japanese leaders to completely disband the Japanese military. In addition, SCAP sought to unravel the wartime Japanese police state by breaking up the national police force into small American-style police forces controlled at the local level. SCAP also sought to empower previously marginalized groups that it believed would have a moderating effect on future militarism, legalizing the Communist and Socialist parties and encouraging the formation of labor unions. The crowning achievement of the first phase of the Occupation was the promulgation, at SCAP's behest, of a new Constitution of Japan in 1947. Most famously, Article 9 of the Japanese Constitution explicitly disavows war as an instrument of state policy and promises that Japan will never maintain a military.

Countries like Japan have a highly developed nuclear industry, but they don’t have nuclear weapons.

The question is how many hours it would take them to have nuclear weapons once they decided they wanted them.

It's implied that they are keeping quiet for legal reasons, but that seems like a cop-out to me. If getting rid of Altman is worth blowing up the most productive company on the face of the planet, then it's worth getting a defamation lawsuit over. Like, really? You're letting the lawyers dictate your messaging about saving the world? Get it together.