
Culture War Roundup for the week of November 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


NYT: Before Altman’s Ouster, OpenAI’s Board Was Divided and Feuding

The NYT scooped everybody. We finally know why Sam Altman was fired:

A few weeks before Mr. Altman’s ouster, he met with [OpenAI board member Helen Toner] to discuss a paper she had recently co-written for Georgetown University’s Center for Security and Emerging Technology.

Mr. Altman complained that the research paper seemed to criticize OpenAI’s efforts to keep its A.I. technologies safe while praising the approach taken by Anthropic, according to an email that Mr. Altman wrote to colleagues and that was viewed by The New York Times.

In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company, particularly at a time, he added, when the Federal Trade Commission was investigating OpenAI over the data used to build its technology.

Ms. Toner defended it as an academic paper that analyzed the challenges that the public faces when trying to understand the intentions of the countries and companies developing A.I. But Mr. Altman disagreed.

“I did not feel we’re on the same page on the damage of all this,” he wrote in the email. “Any amount of criticism from a board member carries a lot of weight.”

Senior OpenAI leaders, including Mr. Sutskever, who is deeply concerned that A.I. could one day destroy humanity, later discussed whether Ms. Toner should be removed, a person involved in the conversations said.

There are a few other minor issues mentioned in the article, but this sounds like the big one. Rationalist/EA types take being told that they can't criticize "allies" in public very negatively, a position I am quite sympathetic to. Helen Toner works at an Open Philanthropy-funded think tank, so she's as blue-blooded an effective altruist as they get. My guess is that this was the moment she decided that Sam had to be eliminated before he took control of the board and jeopardized OpenAI's mission.

What gets me is how disingenuous this makes the original firing announcement: "Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities." It sounds like he was perfectly candid. They just didn't like what he was about.

In completely unrelated news, ChatGPT has been down for the last three hours.

Senior OpenAI leaders, including Mr. Sutskever, who is deeply concerned that A.I. could one day destroy humanity

These people are so weird. They are obsessed with making something they think will destroy the world. Just build AI or don't. If AI is so dangerous, nothing you can do will really stop it in the long run. Eventually, over hundreds or even thousands of years, the inevitable will happen. I feel like this is almost a religion for them, and they are the absolute worst people to be in charge of this kind of technology.

  1. They think AGI is inevitable, and if it's not made by people with safety and alignment in mind, the odds of it being misaligned skyrocket.

  2. Control of an aligned Superintelligent AGI is equivalent to having the keys to the lightcone. If you make it through the gauntlet of it not killing you, and it listens to what you tell it, then you have the means to dominate everyone else, including others who make misaligned AGI, provided yours is capable of squashing them at birth, or at the very least capable of panopticon surveillance to prevent anyone from building one in the first place.

  3. Even prior to that, being on the cutting edge of AI research gives you a voice: people can dismiss Eliezer Yudkowsky as a crank, but it is far harder to do that to Geoffrey Hinton or Ilya Sutskever. You have far more power to get governments to regulate things, or to develop industry best standards that reduce the risk of more laissez-faire competitors YOLOing the whole thing.

AI is dangerous, as is anything much smarter than you that potentially doesn't share your goals, but if you see it as inevitable, then your best bet is making sure it comes out with goals you share or control.

My own p(doom) from AI has dropped to about 30% from a high of 70%, when RLHF and other techniques showed that it was possible to ~mostly align the best AI today, in the form of LLMs, which are the frontrunners for the best AIs of tomorrow. Just because I currently do not think AI will probably kill us in a decade doesn't mean I don't think it's a serious risk, and there are few things on the planet more worth being passionate about.

It is not remotely as simple as "build AI or don't." Even if everyone associated with the AI X-risk community died overnight, the genie is out of the bottle, and others would pursue it with billions of dollars. OpenAI had a valuation, before this debacle, of about $80 billion, with operating expenses on the order of $500 million per annum.

If you can't undo that, or enforce a "pause", then you grudgingly find a way to stay ahead of the competition while doing your best not to be the reason why it all went to hell.

Control of an aligned Superintelligent AGI is equivalent to having the keys to the lightcone

I think this sort of argument consistently relies on an assumption I find insufficiently supported: that various highly powerful technologies are physically possible and remain undiscovered only because humans aren't smart enough. It's always "AI will give us X" — Drexlerian nanotech, space elevators, "energy too cheap to meter," whatever — without considering the outcome where it turns out none of these is actually possible, no matter how smart you are. To quote from the Futurama episode "A Clone of My Own":

Professor Hubert Farnsworth: Nothing is impossible. Not if you can imagine it. That's what being a scientist is all about.

Cubert J. Farnsworth: No, that's what being a magical elf is all about.

I disagree; it's largely Yudkowsky who vocally claims that a SAGI will rely on things like "diamondoid bacteria" and other nanotech to get an advantage.

For me, and many others, subversion of existing human infrastructure through social engineering, to do things like launch nukes or engineer hyper-lethal and virulent pathogens, is feasible for something modestly above human, without relying on anything that doesn't exist. The AI will need robust automation to replace humans, but we're already building that ourselves, so...

We could already have had energy too cheap to meter if we had gone full send on nuclear, for one. It would certainly be dirt cheap compared to today's rates.

subversion of existing human infrastructure through social engineering

I think this is overrated, too — though that might be due to reading too many "unboxing" arguments predicated on the assumption that absolutely anyone can be convinced to do absolutely anything, if only you're smart enough to figure out the particular individually-tailored set of Magic Words.

I have never claimed it can convince anyone of literally anything. We've already had plenty of nuclear close-calls simply because of the fog of war or human/technical error.

Similarly, there are already >0 misanthropically omnicidal people around and kicking, and an AI could empower them to pursue their goals, or they might choose to adopt the AI for that purpose.

Mere humans, or human-run orgs like the CIA, have long engineered regime change. It seems to me incredibly unlikely, to the point that it can be outright dismissed from consideration, that an AGI only modestly higher in intelligence couldn't do the same, and even independently play multiple sides against each other until they all make terrible decisions.

Besides, it's clear that nobody even tries the Yudkowskian boxing approach these days. ARC evals, red-teaming and the like are nowhere close to the maximally paranoid approach, not even for SOTA models.

A group of, say, 160 IQ humans with laser focus and an elimination of many/most of the coordination and trust bottlenecks we face could well become an existential threat. Even a modestly superintelligent or merely genius-level AGI can do that and more.

and an AI could empower them to pursue their goals, or they might choose to adopt the AI for that purpose.

Empower them how, exactly? What is it that they aren't able to do now only because they're not smart enough, that more intelligence alone can solve? Intelligence isn't magic.

that an AGI only modestly higher in intelligence couldn't do the same, and even independently play multiple sides against each other until they all make terrible decisions.

Perhaps, but what's your proof that it could do this so much better than the CIA or anyone else, just because it's smarter? Intelligence isn't magic.

A group of, say, 160 IQ humans with laser focus and an elimination of many/most of the coordination and trust bottlenecks we face could well become an existential threat.

Actually, as a 151 IQ human, I mostly disagree with this, so that's part of it right there.

Even a modestly superintelligent or merely genius-level AGI can do that and more.

What's your proof of the part I just emphasized? You appear to simply assume it.

I think you might be a uniquely ineffective 151 IQ human if it doesn't seem plausible to you that a group of very smart humans could do extreme and perhaps existential harm. To me, the main thing preventing that seems to be not the inherent hardness or weakness of, say, COVID-Omicron-Ebola, but the resistance of an overwhelming majority of other humans (including both very smart ones and mediocre but well-organized ones).

As for what a superintelligent AI changes? Well, for one thing, it eliminates the need to find a bunch of peers. And, with robots, the need for lab assistants.

And I have like 3% P(AI Doom).