Culture War Roundup for the week of December 18, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

New from me - Effective Aspersions: How the Nonlinear Investigation Went Wrong, a deep dive into the sequence of events I summarized here last week. It's much longer than my typical article and difficult to properly condense. Normally I would summarize things, but since I summarized events last time, I'll simply excerpt the beginning:

Picture a scene: the New York Times is releasing an article on Effective Altruism (EA) with an express goal to dig up every piece of negative information they can find. They contact Émile Torres, David Gerard, and Timnit Gebru, collect evidence about Sam Bankman-Fried, the OpenAI board blowup, and Pasek's Doom, start calling Astral Codex Ten (ACX) readers to ask them about rumors they'd heard about affinity between Effective Altruists, neoreactionaries, and something called TESCREAL. They spend hundreds of hours over six months on interviews and evidence collection, paying Émile and Timnit for their time and effort. The phrase "HBD" is muttered, but it's nobody's birthday.

A few days before publication, they present key claims to the Centre for Effective Altruism (CEA), who furiously tell them that many of the claims are provably false and ask for a brief delay to demonstrate the falsehood of those claims, though their principles compel them to avoid threatening any form of legal action. The Times unconditionally refuses, claiming it must meet a hard deadline. The day before publication, Scott Alexander gets his hands on a copy of the article and informs the Times that it's full of provable falsehoods. They correct one of the errors he points out, but tell him it's too late to fix another.

The final article comes out. It states openly that it's not aiming to be a balanced view, but to provide a deep dive into the worst of EA so people can judge for themselves. It contains lurid and alarming claims about Effective Altruists, paired with a section of responses, based on its conversations with EA, that it says CEA agreed was a good summary of the EA perspective. In the end, it warns people that EA is a destructive movement likely to chew up and spit out young people hoping to do good.

In the comments, the overwhelming majority of readers thank it for providing such thorough journalism. Readers broadly agree that waiting to review CEA's further claims was clearly unnecessary. David Gerard pops in to provide more harrowing stories. Scott gets a polite but skeptical hearing as he shares his story of what happened, and one enterprising EA shares hard evidence of one error in the article to a mixed and mostly hostile audience. A few weeks later, the article writer pens a triumphant follow-up about how well the whole process went and offers to do similar work for a high price in the future.

This is not an essay about the New York Times.

The rationalist and EA communities tend to feel a certain way about the New York Times. Adamantly a certain way. Emphatically a certain way, even. I can't say my sentiment is terribly different—in fact, even when I have positive things to say about the New York Times, Scott has a way of saying them more elegantly, as in The Media Very Rarely Lies.

That essay segues neatly into my next statement, one I never imagined I would make:

You are very very lucky the New York Times does not cover you the way you cover you.

[...]

I follow drama and blow-ups in a lot of different subcultures. It's my job. The response I saw from the EA and LessWrong communities to [the] article was thoroughly ordinary as far as subculture pile-ons go, even commendable in ways. Here's the trouble: the ways it was ordinary are the ways it aspires to be extraordinary, and as the community walked headlong into every pitfall of rumormongering and dogpiles, it did so while explaining at every step how reasonable, charitable, and prudent it was in doing so.

Great post. This whole controversy is pretty fascinating but also seems like something you could sink dozens of hours into learning about without coming to any clear conclusions about what actually happened, who's telling the truth, etc. Nevertheless, here are a few things that come to mind after reading a bit about it.

  • The original investigation by Ben Pace seems clearly negligent. Perhaps you could justify giving Nonlinear very little time to respond, but many of the claims in the original post are presented with evidence that amounts to little more than "Alice and Chloe told me this and they seem trustworthy to me (even though lots of people told me Alice is not trustworthy)." The post also claims that "I personally found Alice very willing and ready to share primary sources with me upon request (texts, bank info, etc)", yet it often does not cite the primary sources supporting the factual claims it makes.
  • The original post also features almost no quotes or perspectives from other employees of Nonlinear. But if you are claiming that Nonlinear has an abusive work culture, such perspectives seem clearly relevant. There is one section labelled "Perspectives From Others Who Have Worked or Otherwise Been Close With Nonlinear", but it is very vague and often makes claims that are not backed up with quotes. The quotes it does include lack context, and it is often unclear who they are attributed to. For example, one quote is preceded only by "Another person said about Emerson:", but we are not told who this person is or what their relationship to Emerson is.
  • These people seem to have a pathological obsession with anonymity. I understand the argument for keeping some people's identities secret, but often so much information about the people being quoted is removed that it is hard to tell how to evaluate what they say. One example is described in the previous bullet point. For another, see the list of 28 times 'Alice' accused people of being abusive from the Nonlinear response post. It includes entries that are almost impossible to evaluate, like:
  1. Alice accused [Person] of [abusing/persecuting/oppressing her]
  2. Alice accused [Person] of [abusing/persecuting/oppressing her]
  3. Alice accused [Person] of [abusing/persecuting/oppressing her]
  4. Alice accused [Person] of [abusing/persecuting/oppressing her]
  • More generally, it strikes me that both reports are very badly written. Compare them to basically any investigative report by a high-quality news organization like the New York Times. Whatever you think about the NYT's bias, accuracy, etc., their articles are typically clear and easy to read. They lay out the context for the story and the overall narrative; they support most of their claims with specific quotes, either from named individuals or from people whose role in the story is clearly explained; and they typically include quotes from outside experts to contextualize things. Importantly, they also do all of this relatively concisely. Ben Pace's original report is about 10,000 words! And yet it does a worse job of providing context, evidence for its main claims, and a clear narrative than many 2,000-word NYT articles.
  • Part of the reason both reports are so badly written is that they spend so long haranguing readers about how they should feel about the evidence provided. The original report begins with a paragraph-long "epistemic status" and spends a huge amount of verbiage analyzing the author's (i.e. Ben Pace's) own opinions about how much to believe what he wrote. But these feelings seem to mostly boil down to "I think that Alice and Chloe are fairly trustworthy, and I feel there is evidence supporting their accusations." Instead of spending so many words saying this, why not just present the evidence as clearly as you can? I understand that some of the evidence may be inconclusive, but then why not present it as such and let readers draw their own conclusions? To an outsider, the post has an atmosphere of "I, Ben Pace, am a responsible and trustworthy person, so you should trust that I have studied this issue carefully even though I won't present most of my evidence."
  • Ignoring the truth or falsity of the various accusations, the whole setup sounds pretty crazy. Even if Nonlinear is not at all abusive, it seems like a terrible idea to accept a job where you'll be viewed as "part of the family" or "part of the gang." And why were they jetting around the world, staying in exotic locations in the Bahamas, etc., anyway? Is that necessary or helpful for doing work on AI safety? I realize that Nonlinear was supposed to be at least partly an incubator, but it seems to have been much looser than most incubators, blending work and personal life to a much greater degree. Perhaps that's what some people want, but it comes with big risks (which this blowup demonstrates).