
Culture War Roundup for the week of January 15, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


A limitation of usual Bayesian reasoning.

Scott is doing his annual subscription drive and I was reminded of a (still) private post of his I disagree with: https://www.astralcodexten.com/p/but-seriously-are-bloxors-greeblic

    In my post on uncertainty around AI, I wrote:

    If you have total uncertainty about a statement (“are bloxors greeblic?”), you should assign it a probability of 50%. If you have any other estimate, you can’t claim you’re just working off how radically uncertain it is. You need to present a specific case.

    Commenters were skeptical! I agree this important topic needs more discussion:

He then lists some of those objections and his replies to them. The objection I'm personally most partial to was not listed, so I assume it's a somewhat novel idea, at least in that (and this) community.

Suppose that in your travels you encounter a shady guy who offers you an opportunity to bet on the outcome of a coin flip. Nearby stands a yudkowsky, who tells you that according to his observations the coin is biased and the next flip is about 66% likely to land on heads. You know that yudkowskis are honest and good Bayesians, so you trust his assessment.

The shady guy flips the coin and it lands on tails. Now consider two possible worlds: in one, the yudkowsky says his new estimate is 50% heads; in the other, he says he has updated to 65% heads. Those are two very different worlds! It turns out the yudkowsky has an important hidden parameter: how many coinflips he has observed so far, and therefore how much of his estimate comes from observation and how much from the prior. And for some reason he doesn't tell you its value!
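To make that hidden parameter concrete, here's a minimal Beta-Binomial sketch; the pseudo-counts are my own illustrative choices, not anything from Scott's post. The prior's pseudo-counts play the role of flips already seen, and they determine how far a single observed tail moves the estimate.

```python
# A minimal Beta-Binomial sketch of the two worlds. The pseudo-counts below are
# illustrative assumptions, not numbers from the post. A Beta(h, t) state of
# belief estimates P(heads) = h / (h + t); observing a tail updates t to t + 1.

def estimate_heads(h: float, t: float) -> float:
    """P(next flip is heads) under a Beta(h, t) state of belief."""
    return h / (h + t)

# World 1: the yudkowsky has seen almost no flips, so his ~66% is mostly prior.
# Beta(2, 1) has mean 2/3; a single observed tail drags it all the way to 50%.
print(estimate_heads(2, 1))      # 0.667 before the flip
print(estimate_heads(2, 1 + 1))  # 0.500 after one tail

# World 2: the 66% is backed by ~100 observed flips; one tail barely moves it.
print(estimate_heads(66, 34))      # 0.660 before the flip
print(estimate_heads(66, 34 + 1))  # 0.653 after one tail
```

In both worlds the pre-flip estimate is about 66%, but the effective sample size behind it makes all the difference once the tail lands.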

Scott's assertion is correct in a narrow technical sense: in a world where the shady stranger forces you to bet at gunpoint, you are forced to use the yudkowsky's estimate, and the yudkowsky, having not yet seen any flips at all, is forced to use a symmetric prior that gives him a 50% probability of heads.

However, in the real world there is almost always the option to wait and collect more data, and whether you want to exercise it depends critically on the difference between "it's a 50/50 chance based on observing 100 coinflips" and "it's a 50/50 chance based solely on the prior I pulled out of my ass".

So what's going on, I think, is that people intuitively understand this important difference and suspect that when Scott says they should normally start with a 50/50 prior, he's trying to swindle them into accepting Bayesians' estimates without asking how sure they are about them. And rightfully so, because that's a valid and important question to ask, and honestly Bayesians ought to get in the habit of volunteering this information unprompted, instead of making incorrect technical arguments that insinuate the estimated probability alone should be enough for everyone.

There's also the problem that 50/50 is not actually a neutral probability if you're a coherent Bayesian and your sample space isn't ultra-simple. For example, if I think each individual bloxor has a 50% probability of being greeblic, independently of the others, then I am committed to thinking that the probability that at least 70 of 100 bloxors are greeblic is about 0.004%. So my "neutral" prior commits me to extremely strong confidence that the number of greeblic bloxors among those 100 stays below 70!

If I instead set my prior for each bloxor being greeblic to 69.5%, then I am approximately neutral about whether at least 70 of the 100 bloxors are greeblic. But now I'm obviously far from neutral with respect to any individual bloxor being greeblic.
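If you want to check the arithmetic, here's a quick sketch in plain Python (the helper function is mine, not from any of the posts):

```python
from math import comb

def prob_at_least(k: int, n: int, p: float) -> float:
    """P(at least k successes out of n independent trials, each with probability p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A "neutral" 50% per bloxor, taken independently, makes 70+ of 100 a near-impossibility:
print(prob_at_least(70, 100, 0.5))    # ~3.9e-05, i.e. ~0.004%

# A per-bloxor prior of 69.5% makes the 70-of-100 question roughly a coin flip...
print(prob_at_least(70, 100, 0.695))  # ~0.5

# ...but is obviously far from neutral about any single bloxor.
```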

This is one of the limitations of Bayesianism as a formalism: it can model neutral belief with respect to any individual partition of the sample space, but not with respect to all partitions at once. So Scott is just wrong, and frankly hasn't understood the mathematics, when he says "If you have total uncertainty about a statement (“are bloxors greeblic?”), you should assign it a probability of 50%": applied across the board, that norm implies incoherence, and coherence is a fundamental Bayesian norm.

Put briefly, what Scott is saying requires that you reject Bayesian epistemology/decision theory. I haven't read the whole post yet, but I would be surprised if he realised that.

A different model solves this. If you treat the proportion of greeblic bloxors as an unknown parameter and assign a prior to that parameter, you can have both:

  1. a single bloxor has a 50% chance of being greeblic

  2. the chance that at least 70 of 100 bloxors are greeblic is not negligible

This works because the bloxors are no longer independent; they are related through the proportion parameter. Observing one bloxor would change your belief about the parameter, and thus about the other bloxors.
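As a sketch of how this plays out, assuming a uniform Beta(1, 1) prior on the proportion (my choice for illustration): under that prior every count from 0 to 100 is equally likely, so you get the 50% single-bloxor probability and a decidedly non-negligible tail at the same time.

```python
from math import comb, factorial
from fractions import Fraction

def beta(x: int, y: int) -> Fraction:
    """Beta function B(x, y) for positive integers, as an exact fraction."""
    return Fraction(factorial(x - 1) * factorial(y - 1), factorial(x + y - 1))

def prob_k_greeblic(k: int, n: int, a: int, b: int) -> Fraction:
    """P(exactly k of n bloxors are greeblic) when the greeblic proportion
    has a Beta(a, b) prior (the Beta-Binomial distribution)."""
    return comb(n, k) * beta(k + a, n - k + b) / beta(a, b)

# Uniform prior on the proportion: Beta(1, 1).
# 1. A single bloxor is greeblic with probability exactly 1/2:
print(prob_k_greeblic(1, 1, 1, 1))  # 1/2

# 2. Every count 0..100 is equally likely (1/101 each), so 70+ is not negligible:
print(sum(prob_k_greeblic(k, 100, 1, 1) for k in range(70, 101)))  # 31/101, about 0.307
```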

A sufficiently large conjunction of single-case hypotheses of the form "bloxor x is greeblic" regenerates the problem. I put it in terms of proportions for familiarity's sake, but formally the point is easier to see if you consider Boolean operations on the elements of partitions, and note that in Bayesian epistemology the sample space is assumed to be closed under Boolean operations.