
Culture War Roundup for the week of November 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


NYT: Before Altman’s Ouster, OpenAI’s Board Was Divided and Feuding

The NYT scooped everybody. We finally know why Sam Altman was fired:

A few weeks before Mr. Altman’s ouster, he met with [OpenAI board member Helen Toner] to discuss a paper she had recently co-written for Georgetown University’s Center for Security and Emerging Technology.

Mr. Altman complained that the research paper seemed to criticize OpenAI’s efforts to keep its A.I. technologies safe while praising the approach taken by Anthropic, according to an email that Mr. Altman wrote to colleagues and that was viewed by The New York Times.

In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company, particularly at a time, he added, when the Federal Trade Commission was investigating OpenAI over the data used to build its technology.

Ms. Toner defended it as an academic paper that analyzed the challenges that the public faces when trying to understand the intentions of the countries and companies developing A.I. But Mr. Altman disagreed.

“I did not feel we’re on the same page on the damage of all this,” he wrote in the email. “Any amount of criticism from a board member carries a lot of weight.”

Senior OpenAI leaders, including Mr. Sutskever, who is deeply concerned that A.I. could one day destroy humanity, later discussed whether Ms. Toner should be removed, a person involved in the conversations said.

There are a few other minor issues mentioned in the article, but this sounds like the big one. Rationalist/EA types react very badly to being told they can't criticize "allies" in public, a position I am quite sympathetic to. Helen Toner works at an Open Philanthropy-funded think tank, so she's as blue-blooded an effective altruist as they get. My guess is that this was the moment she decided that Sam had to be eliminated before he took control of the board and jeopardized OpenAI's mission.

What gets me is how disingenuous this makes the original firing announcement: "Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities." It sounds like he was perfectly candid. They just didn't like what he was about.

In completely unrelated news, ChatGPT has been down for the last three hours.

Ilya Sutskever ... thought Mr. Altman was not always being honest when talking with the board.

The lack of candour may have referred to this or to things not reported on in the article.

Luckily, a brand new article just dropped with details about that:

Some executives said they were getting questions from regulators and law-enforcement entities such as the U.S. attorney’s office in Manhattan over the charge of Altman’s alleged lack of candor, the people said. The truth was going to come out one way or another, they told the board.

People familiar with the board’s thinking said there wasn’t one incident that led to their decision to eject Altman, but a consistent, slow erosion of trust over time that made them increasingly uneasy. Also complicating matters was Altman’s mounting list of outside AI-related ventures, which raised questions for the board about how OpenAI’s technology or intellectual property could be used.

The board agreed to discuss the matter with their counsel. After a few hours, they returned, still unwilling to provide specifics. They said that Altman wasn’t candid, and often got his way. The board said that Altman had been so deft they couldn’t even give a specific example, according to the people familiar with the executives.

Not entirely related, but here's a particularly eye-popping quote:

OpenAI leadership and employees were growing increasingly concerned about being painted in the press as “a bunch of effective altruists,” as one of them put it.

The link doesn't work for me - maybe this is explained elsewhere in the article, but going solely on the excerpt...

The board said that Altman had been so deft they couldn’t even give a specific example, according to the people familiar with the executives.

Horseshit. "Oh he was lying but we can't give you any examples because he's that good at lying" is the kind of excuse I would expect from a four-year-old, not a group of supposedly intelligent and qualified professionals. At this point I think that unless they actually give us the specifics, this all boils down to the GPT marketplace blowing up Poe and making a board member unreasonably angry.

"Oh he was lying but we can't give you any examples because he's that good at lying" is the kind of excuse I would expect from a four year old, not a group of supposedly intelligent and qualified professionals.

Sam Altman is a real business shark whose literal job for the last twelve years has been dealing with boards of directors and VC investors. Running circles around a shape-rotator like Sutskever or an ivory-tower researcher like Toner is child's play for him. McCauley doesn't strike me as a serious contender against someone who successfully wrestled Reddit away from Conde Nast, either. And, tellingly, only D'Angelo managed to remain on the board of directors after Altman got his way. Scratch that, I have no idea how D'Angelo managed to survive the debacle.

It's not like you even have to be an experienced business shark to out-argue people who say "hey employees, you know what, I know that we can all become ridiculously rich in the next couple of years, but guys... guys... AI might destroy humanity at some point so let's not become ridiculously rich".

Trying to stop people from developing AI is like trying to stop people from developing nuclear weapons. Obviously, having nuclear weapons gives one enormous benefits. So the idea that someone could talk the whole world out of trying to get nukes by just using intellectual arguments is absolutely ludicrous.

Imagine starting a company called "OpenNuclear". "Let's develop nuclear technology in a safe way, for the benefit of all humanity". And then expecting that somehow the world's talented engineers will just go along with your goal of nuclear safety, instead of going to work building nuclear weapons for various organizations for huge salaries and/or because of powerful emotional reasons like "I don't want my country to get attacked". I can't think of any example in history of humanity as a whole refusing to develop a powerful technology. Even if somehow the world temporarily agreed to pause AI research, that agreement would probably be dropped like a hot potato the second some major war broke out and both sides realized that AI could help them.

But the world did that with Atoms for Peace:

https://en.wikipedia.org/wiki/Atoms_for_Peace

See also the International Atomic Energy Agency and the Treaty on the Non-Proliferation of Nuclear Weapons. Countries like Japan have a highly developed nuclear industry, but they don’t have nuclear weapons.

Japan is often said to be a "screwdriver's turn" away from possessing nuclear weapons.

Because such latent capability is not prescribed by the Nuclear Non-Proliferation Treaty, this is sometimes called the "Japan Option" (as a work-around to the treaty), as Japan is considered a "paranuclear" state, being a clear case of a country with complete technical prowess to develop a nuclear weapon quickly.

But do they have the willpower to do so? This is the country that has first-hand experience with what nuclear weapons can do in the middle of a war, after all.


Countries like Japan have a highly developed nuclear industry, but they don’t have nuclear weapons.

Because when America occupied them after the war, it made damn sure Japan would never again get any notions about being a military power. It's why their military is known as the Japanese Self-Defense Forces:

The Occupation was commanded by American general Douglas MacArthur, whose office was designated the Supreme Command for the Allied Powers (SCAP). In the initial phase of the Occupation, from 1945 to 1946, SCAP had pursued an ambitious program of social and political reform, designed to ensure that Japan would never again be a threat to world peace. Among other reforms, SCAP worked with Japanese leaders to completely disband the Japanese military. In addition, SCAP sought to unravel the wartime Japanese police state by breaking up the national police force into small American-style police forces controlled at the local level. SCAP also sought to empower previously marginalized groups that it believed would have a moderating effect on future militarism, legalizing the Communist and Socialist parties and encouraging the formation of labor unions. The crowning achievement of the first phase of the Occupation was the promulgation at SCAP's behest in 1947 of a new Constitution of Japan. Most famously, Article 9 of the Japanese Constitution explicitly disavows war as an instrument of state policy and promises that Japan will never maintain a military.

Countries like Japan have a highly developed nuclear industry, but they don’t have nuclear weapons.

The question is how many hours after wanting them they will have them.