
Culture War Roundup for the week of January 30, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Two Tweets from OpenAI's Sam Altman: "eliezer has IMO done more to accelerate AGI than anyone else. certainly he got many of us interested in AGI, helped deepmind get funded at a time when AGI was extremely outside the overton window, was critical in the decision to start openai, etc." "it is possible at some point he will deserve the nobel peace prize for this--I continue to think short timelines and slow takeoff is likely the safest quadrant of the short/long timelines and slow/fast takeoff matrix."

Eliezer Yudkowsky thinks that the rapid development of AGI will likely kill us, and he has devoted his life to trying to stop this from happening, and Sam Altman almost certainly knows this. My personal guess is that quantum immortality means that regardless of who is right, some branches of the multiverse will survive AGI, and the survivors will have enough computational power to know what percentage of the branches survived, and consequently whether Altman or Yudkowsky was right.

Edit: Eliezer's response Tweet, which I don't understand.

I've said it before and I'll say it again. To the degree that I believe that AGI presents an existential threat to Humanity, I believe that is largely because of, rather than in spite of, people like Yudkowsky and the folks at MIRI. I believe that the so-called "AI Alignment Problem" has less to do with intelligence (artificial or otherwise) than it does with the fundamental flaws of Utilitarianism as an ethical framework or model for decision making. While I actually do think that Scott means well, I find it kind of telling that he seems to be more concerned with teaching rationalists "how not to sound like a killer robot" than with how not to become one.

I'll repeat that I don't think this reflects a good understanding of Yudkowsky's concerns: the mainline ratsphere already considered and accepted the 'don't be a killer robot' problem; their problem is that there's a nice big shiny candy-like button labelled 'free money' that (may be) hooked up to a killer robot release gate.

And I will reply, as I have before, that I am unconvinced. In my eyes, both you and Yudkowsky are trying to invoke a distinction without a difference.

The killer robot is a killer robot, and those who serve as the allies of killer robots are the allies of killer robots.