
Culture War Roundup for the week of December 18, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


New from me - Effective Aspersions: How the Nonlinear Investigation Went Wrong, a deep dive into the sequence of events I summarized here last week. It's much longer than my typical article and difficult to properly condense. Normally I would summarize things, but since I summarized events last time, I'll simply excerpt the beginning:

Picture a scene: the New York Times is releasing an article on Effective Altruism (EA) with an express goal to dig up every piece of negative information they can find. They contact Émile Torres, David Gerard, and Timnit Gebru, collect evidence about Sam Bankman-Fried, the OpenAI board blowup, and Pasek's Doom, start calling Astral Codex Ten (ACX) readers to ask them about rumors they'd heard about affinity between Effective Altruists, neoreactionaries, and something called TESCREAL. They spend hundreds of hours over six months on interviews and evidence collection, paying Émile and Timnit for their time and effort. The phrase "HBD" is muttered, but it's nobody's birthday.

A few days before publication, they present key claims to the Centre for Effective Altruism (CEA), who furiously tell them that many of the claims are provably false and ask for a brief delay to demonstrate the falsehood of those claims, though their principles compel them to avoid threatening any form of legal action. The Times unconditionally refuses, claiming it must meet a hard deadline. The day before publication, Scott Alexander gets his hands on a copy of the article and informs the Times that it's full of provable falsehoods. They correct one of his claims, but tell him it's too late to fix another.

The final article comes out. It states openly that it's not aiming to be a balanced view, but to provide a deep dive into the worst of EA so people can judge for themselves. It contains lurid and alarming claims about Effective Altruists, paired with a section of responses based on its conversation with EA that it says provides a view of the EA perspective that CEA agreed was a good summary. In the end, it warns people that EA is a destructive movement likely to chew up and spit out young people hoping to do good.

In the comments, the overwhelming majority of readers thank it for providing such thorough journalism. Readers broadly agree that waiting to review CEA's further claims was clearly unnecessary. David Gerard pops in to provide more harrowing stories. Scott gets a polite but skeptical hearing as he shares his story of what happened, and one enterprising EA shares hard evidence of one error in the article to a mixed and mostly hostile audience. A few weeks later, the article writer pens a triumphant follow-up about how well the whole process went and offers to do similar work for a high price in the future.

This is not an essay about the New York Times.

The rationalist and EA communities tend to feel a certain way about the New York Times. Adamantly a certain way. Emphatically a certain way, even. I can't say my sentiment is terribly different—in fact, even when I have positive things to say about the New York Times, Scott has a way of saying them more elegantly, as in The Media Very Rarely Lies.

That essay segues neatly into my next statement, one I never imagined I would make:

You are very very lucky the New York Times does not cover you the way you cover you.

[...]

I follow drama and blow-ups in a lot of different subcultures. It's my job. The response I saw from the EA and LessWrong communities to [the] article was thoroughly ordinary as far as subculture pile-ons go, even commendable in ways. Here's the trouble: the ways it was ordinary are the ways it aspires to be extraordinary, and as the community walked headlong into every pitfall of rumormongering and dogpiles, it did so while explaining at every step how reasonable, charitable, and prudent it was in doing so.

I'm getting incredibly sick of the "rationalist" affectation/verbal tic of "statistically" "quantifying" your predictions in contexts where this is completely meaningless.

"But I think it would still have over a 40% chance of irreparably harming your relationship with Drew"

"Nonlinear's threatening to sue Lightcone for Ben's post is completely unacceptable, decreases my sympathy for them by about 98%"

What does it mean to have a 40% chance of irreparably harming her relationship with Drew? Does that mean there's a 60%, 70%, etc. chance of it harming her relationship with Drew, but in a way that could be fixed, given enough time and effort? What information could she be presented with that would cause her to update her 40% prediction up or down?

The numbers are made up and the expressions of confidence don't matter. It's just cargo cult bullshit, applying a thin veneer of "logic" and "precision" to a completely intuitive gut feeling of the kind everyone has all the time.

I disagree. Even if the numbers are somewhat made up, having a ballpark figure that tells you the relative probability of the events that could result from a decision you're planning to make is still useful.

Going to the Drew example: if I think that doing something (say, going to school in another city and trying to have an LDR) is going to result in a 40% chance that I'll lose the relationship entirely, and a 60% chance that I'll damage it in a way that would be difficult but not impossible to fix, then I can use that to decide whether that matters more to me than the job opportunities, the scholarships, or whatever else I gain from going to school away from him. "Might" doesn't give you enough information for a true reality check, imo, because it treats low-probability events the same as high-probability events. Even verbal categories like low, medium, and high probability, especially when making a group decision, aren't precise enough to communicate what I'm actually thinking. Low is how low? For you it might be 5%, for me it's 20%. We can't communicate that well if we don't know what the terms are.
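A toy expected-value version of that comparison, just to make the weighing concrete. Only the 40%/60% split comes from the example above; the utility numbers are entirely made up for illustration:

```python
# Toy illustration with made-up utilities: weigh the relationship outcomes
# by their estimated probabilities and compare against what is gained by
# going away to school. Only the 40%/60% split comes from the example above.

relationship_outcomes = [
    (0.40, -100),  # lose the relationship entirely
    (0.60, -30),   # damage it in a way that is hard but possible to repair
]
value_of_going = 80    # scholarships, job opportunities, etc. (made up)

expected_relationship_cost = sum(p * u for p, u in relationship_outcomes)
net = value_of_going + expected_relationship_cost
print(net)  # 80 - 58 = 22 > 0, so under these made-up numbers, go
```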

Even verbal categories like low, medium, and high probability, especially when making a group decision, aren't precise enough to communicate what I'm actually thinking. Low is how low? For you it might be 5%, for me it's 20%. We can't communicate that well if we don't know what the terms are.

I think part of the point is the numerical values convey an unwarranted degree of precision based on the process that generated them. Say your estimate is 20% probability for X. Why not 21%? 19%? 25%? 15%? What's the size of the error term on your estimate? Is your forecasting of this outcome so good as to warrant a <1% margin? Of course, estimation of that error term itself has the same problems as generating the initial estimate.

I don't think this is a good objection. Numbers are often approximate: 20% means 'somewhere between 10% and 30%' in the same way 'around a hundred pounds' might mean '75-125 pounds'. On the other hand, I usually think it's better to actually say what ideas and conditionals inform your judgement rather than just stating a number, and I'm not sure what the number adds on top of that.

There is some deep epistemological stuff operating here.

If the complaint is that numbers are too "precise", and the solution is to add ranges (whether implicit or explicit), the obvious next question is what these ranges mean.

In formal Bayesian epistemology, there are no "error bars" around probability estimates. The probability estimate is simply the probability you believe X will occur, which in simple cases can be computed from a prior probability using Bayes' formula (more complicated cases yield more complicated formulas).
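As a minimal sketch of that point (numbers made up purely for illustration), a Bayes update takes a prior and likelihoods and returns a single posterior probability; nothing in the formula produces a range:

```python
# Minimal sketch (made-up numbers): Bayes' formula turns a prior and
# likelihoods into a single posterior point estimate -- there is no
# term in the formula that yields error bars.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    evidence = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / evidence

# Prior of 20%, evidence three times as likely if H is true than if it isn't:
print(bayes_update(prior=0.20, p_e_given_h=0.60, p_e_given_not_h=0.20))  # ~0.43
```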

In less strictly Bayesian epistemology, you can recapture much of this using something like proper scoring rules, but, again, a scoring rule only asks for a number, not a range, so it's still unclear what it even means to say "somewhere between 10% and 30%".
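To make that concrete (a hypothetical illustration, not anything from the thread), a proper scoring rule like the Brier score grades one reported number against the realized outcome; there is simply no slot in which to report a range:

```python
# Hypothetical illustration: the Brier score is a proper scoring rule.
# It scores a single reported probability against the outcome (0 or 1);
# there is nowhere to report a range like "10%-30%".

def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between the reported probability and what happened."""
    return (forecast - outcome) ** 2

# If the true probability is 20%, the honest point estimate 0.20 minimizes
# the expected score; reporting either endpoint of "10%-30%" does worse.
for report in (0.10, 0.20, 0.30):
    expected = 0.20 * brier_score(report, 1) + 0.80 * brier_score(report, 0)
    print(report, round(expected, 3))  # 0.1 -> 0.17, 0.2 -> 0.16, 0.3 -> 0.17
```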

In human language, you might want to say something like "I'm saying 20% because I've consulted the important evidence, but I know there is smaller evidence I'm not considering that might tweak that by ±10pp." In this case, you are using error bars to indicate logical uncertainty.

Alternatively, there is a decent analogue in finance: market makers typically post both buy and sell limit orders, and the spread between them is an indicator of confidence. Note: in finance, the spread is useful only because finance is a fundamentally social endeavor. If you choose to offer a very small spread, you're really saying that you don't think anyone else can do better than you. [Note: this is less true for non-market-makers, who, for binary instruments, mostly just care about point estimates.]

The financial framing has an obvious betting analogy here: when you say "between 10% and 30%" under this interpretation, what you're actually saying is that you would accept a bet that X is true whenever the odds offered are longer than 9:1 against it (an implied probability below 10%), and a bet that X is false whenever the implied probability of X is above 30% (odds shorter than 7:3 against it). If we wanted to formalize error bars on probability, this is the model I would advocate for.
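A rough sketch of that reading, reusing the 10%-30% numbers from above (the threshold logic is my paraphrase of the interpretation, not a quote):

```python
# Rough sketch of the "interval as betting thresholds" reading above:
# bet on X when the market-implied probability is below your lower bound,
# bet against X when it is above your upper bound, otherwise stay out.

def action(implied_prob: float, lower: float = 0.10, upper: float = 0.30) -> str:
    if implied_prob < lower:
        return "bet on X"        # odds longer than 9:1 against X
    if implied_prob > upper:
        return "bet against X"   # odds shorter than 7:3 against X
    return "no bet"              # price falls inside your stated range

for p in (0.05, 0.20, 0.45):
    print(f"implied {p:.0%}: {action(p)}")
```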

[To be a tad pedantic, finance also cares about risk. If I'm market making, I might actually have multiple buy and sell limit orders with different volumes. Likewise, me saying "between 10% and 30%" might mean I'm willing to bet 1¢ at those odds, but it doesn't necessarily imply I'm willing to bet half my 401k.]
