Culture War Roundup for the week of January 15, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


A limitation of usual Bayesian reasoning.

Scott is doing his annual subscription drive and I was reminded of a (still) private post of his I disagree with: https://www.astralcodexten.com/p/but-seriously-are-bloxors-greeblic

In my post on uncertainty around AI, I wrote:

If you have total uncertainty about a statement (“are bloxors greeblic?”), you should assign it a probability of 50%. If you have any other estimate, you can’t claim you’re just working off how radically uncertain it is. You need to present a specific case.

Commenters were skeptical! I agree this important topic needs more discussion:

He then listed some of the commenters' objections and his responses to them. The objection I'm personally most partial to was not among them, so I assume it's a somewhat novel idea, at least in that community (and this one).

Suppose that in your travels you encounter a shady guy who offers you an opportunity to bet on the outcome of a coin flip. Nearby stands a yudkowsky, who tells you that according to his observations the coin is biased and the next flip is about 66% likely to land on heads. You know that yudkowskis are honest and good Bayesians, so you trust his assessment.

The shady guy flips the coin and it lands on tails. Now consider two possible worlds: in one, the yudkowsky says that his new estimate is 50% heads; in the other, he says that he has updated to 65% heads. Those are two very different worlds! It turns out the yudkowsky has an important hidden parameter: how many coin flips he has observed so far, and therefore how much of his estimate comes from observation and how much from his prior. And for some reason he doesn't tell you its value!
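
To make the two worlds concrete, here's a minimal sketch, assuming the yudkowsky's belief is a Beta distribution over the coin's bias; the pseudo-count parameters are purely illustrative, chosen so that both versions of him report roughly 66% heads before the flip:

```python
# A minimal sketch of the two worlds. The yudkowsky reports ~66% heads
# either way; the (hypothetical) pseudo-counts below are the parameter
# he isn't telling you.

def beta_mean(heads, tails):
    """Posterior mean of P(heads) for a Beta(heads, tails) belief."""
    return heads / (heads + tails)

weak = (2, 1)       # world one: the estimate rests on almost no data
strong = (132, 66)  # world two: the estimate rests on ~200 observed flips

for name, (h, t) in [("weak", weak), ("strong", strong)]:
    before = beta_mean(h, t)
    after = beta_mean(h, t + 1)  # update on observing one tails
    print(f"{name}: {before:.0%} heads before the flip, {after:.0%} after tails")

# weak: 67% heads before the flip, 50% after tails
# strong: 67% heads before the flip, 66% after tails
```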

Scott's assertion is correct in a narrow technical sense: in a world where the shady stranger forces you to make a bet at gunpoint, you are forced to use the yudkowsky's estimate, and the yudkowsky is forced to use a symmetric prior, one that gives him a 50% probability of heads when he has not yet seen any flips at all.

However, in the real world there is almost always an option to wait and collect more data, and whether you want to exercise it depends critically on the difference between "it's a 50/50 chance based on observing 100 coin flips" and "it's a 50/50 chance based solely on the prior I pulled out of my ass".
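
Here's a rough way to quantify that difference, assuming an even-odds $1 bet on the next flip and Beta beliefs as in the sketch above; Beta(1, 1) and Beta(100, 100) are made-up stand-ins for "pure prior" and "100 observed flips", both of which report 50% heads:

```python
# How much is one free observation worth before an even-odds $1 bet?
# Both agents currently say "50% heads"; the Beta parameters are
# illustrative stand-ins for "pure prior" vs. "many flips observed".

def mean(h, t):
    return h / (h + t)

def best_ev(p):
    # Best of: bet heads (EV 2p - 1), bet tails (EV 1 - 2p), or walk away.
    return max(2 * p - 1, 1 - 2 * p, 0.0)

def value_of_waiting(h, t):
    p = mean(h, t)
    ev_now = best_ev(p)
    # Observe one flip first: heads with probability p, tails with 1 - p.
    ev_wait = p * best_ev(mean(h + 1, t)) + (1 - p) * best_ev(mean(h, t + 1))
    return ev_now, ev_wait

print(value_of_waiting(1, 1))      # (0.0, 0.333...): waiting is worth a lot
print(value_of_waiting(100, 100))  # (0.0, 0.00497...): waiting is nearly worthless
```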

So what's going on, I think, is that people intuitively understand that there's this important difference and suspect that when Scott says they should normally start with a 50/50 prior, he's trying to swindle them into accepting Bayesians' estimates without asking how sure they are about them. And rightfully so, because that's a valid and important question to ask, and honestly Bayesians ought to get in the habit of volunteering this information unprompted instead of making incorrect technical arguments insinuating that the estimated probability alone should be enough for everyone.

While your idea has some validity, it either misses the point of the question that a Bayesian probability answers, or it ignores that incorporating new information is already a standard part of Bayesian reasoning. In other words, a good Bayesian would say that your point is trivial and irrelevant unless further information acquisition is possible. It is not "a valid and important question to ask" except in certain contexts.

In your example, if you can only take the bet once, optimally choosing whether to take it involves calculating the expected gain using the correct Bayesian probability; any other information is irrelevant. Alternatively, you can formulate this as a problem with an option to continue. In that case there is an instantaneous (also called flow) payoff and a continuation value (the value of being able to take the bet again). The continuation value depends on the posterior probability, which, as you correctly note, depends on other things. However, the continuation value only matters for the decision if it is affected by the decision: if the shady guy will toss the coin regardless of what you do, then how the posterior will change is irrelevant to your choice.
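
A quick illustration of the one-shot case (the two posteriors here are made up): because the expected payoff of a bet is linear in the probability, two beliefs with the same mean but wildly different spreads give exactly the same answer.

```python
import random

# For a single forced bet at even odds, only the posterior mean of
# P(heads) matters: the expected payoff per dollar staked is 2p - 1,
# which is linear in p, so the spread of the posterior washes out.

random.seed(0)

def one_shot_ev(posterior_samples):
    # Monte Carlo estimate of E[2p - 1] over the posterior for p.
    return sum(2 * p - 1 for p in posterior_samples) / len(posterior_samples)

confident = [random.gauss(0.5, 0.01) for _ in range(100_000)]  # tight around 0.5
ignorant = [random.uniform(0, 1) for _ in range(100_000)]      # flat over [0, 1]

print(one_shot_ev(confident))  # ~0
print(one_shot_ev(ignorant))   # ~0: same decision, despite total ignorance
```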

More generally, dynamic problems with new information are not a problem for Bayesians. Specifying the informational context of a problem requires a proper prior, which is a joint distribution over all the variables. These can be decision-relevant variables (the particulars of the coin) or informational ones (the history of coin tosses, or extra information about how the coin was obtained). Bayes' theorem has us update this prior in the usual way. While there are examples where this extra information can be neatly summarized in a simple sufficient statistic (e.g., the number of tosses and the number of heads, for coins with a fixed probability of landing heads and outcomes that are independent given the coin), those examples are the exception.
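
For instance, here is a toy version of the sufficient-statistic point, with an illustrative prior over two candidate coins; the posterior depends only on the counts, not on the order of the tosses:

```python
from math import prod

# With i.i.d. tosses, (number of heads, number of tosses) is a sufficient
# statistic: two histories with the same counts yield the same posterior.
# The two-coin prior below is purely illustrative.

coins = {0.3: 0.5, 0.7: 0.5}  # prior over P(heads) for two candidate coins

def posterior(flips):
    # flips is a string of 'H'/'T'; apply Bayes' theorem and normalize.
    unnorm = {
        p: w * prod(p if f == 'H' else 1 - p for f in flips)
        for p, w in coins.items()
    }
    z = sum(unnorm.values())
    return {p: u / z for p, u in unnorm.items()}

print(posterior("HHTHT"))  # 3 heads, 2 tails
print(posterior("THHTH"))  # same counts, same posterior, different order
```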

To recap, Bayesians are not "making incorrect technical arguments insinuating that the estimated probability alone should be enough for everyone." They are making correct arguments that fail only in a very small subset of problems: those where information acquisition is affected by the decision. In that sense, it is not "a valid and important question to ask." Furthermore, it is not clear that "Bayesians ought to get in the habit of volunteering this information unprompted," because this information, besides being irrelevant to most decisions, is not easy to communicate succinctly.

They are making correct arguments that fail only in a very small subset of problems: those where information acquisition is affected by the decision.

I disagree that this is a very small subset of problems: the majority of real-life problems let you decide whether to wait and collect more information, or how many resources you're willing to bet. See the examples at https://en.wikipedia.org/wiki/Multi-armed_bandit and the sketch below.
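
As a sketch of why the full posterior matters there, here is a toy Thompson-sampling agent on a two-armed Bernoulli bandit (the payout rates are made up). Its exploration is driven by the spread of its Beta posteriors, not by point estimates alone: an arm whose 50% estimate rests on little data still gets tried.

```python
import random

# Toy Thompson sampling on a two-armed Bernoulli bandit. The agent keeps
# a full Beta posterior per arm and plays the arm whose sampled rate is
# highest, so arms with uncertain estimates keep getting explored.

random.seed(1)
true_rates = [0.5, 0.6]  # hypothetical true payout rates
heads = [1, 1]           # Beta posterior pseudo-counts per arm
tails = [1, 1]

for _ in range(10_000):
    draws = [random.betavariate(heads[i], tails[i]) for i in range(2)]
    arm = draws.index(max(draws))
    reward = random.random() < true_rates[arm]
    heads[arm] += reward
    tails[arm] += 1 - reward

for i in range(2):
    plays = heads[i] + tails[i] - 2
    est = heads[i] / (heads[i] + tails[i])
    print(f"arm {i}: played {plays} times, estimated rate {est:.2f}")
```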

For example, I think I first noticed this problem many years ago in one of Scott's link dumps, where he disapprovingly linked to Obama saying that the CIA had told him such-and-such a thing had a 70% probability, but they really had no good information, so it was a coin flip. Scott was indignant: 70% is 70%, what more do you need to know before you authorize some military operation; even the President doesn't understand probability, smdh. In my opinion Obama was right, if technically imprecise, and Scott was wrong, which demonstrates both the danger of a little knowledge and the need for more awareness of this issue.

is not easy to communicate succinctly.

You say this as if it's not Bayesians' fault that they have not developed (or gotten into the habit of using) a succinct way of conveying how much of an estimate comes from the prior and how much from subsequent updates. I would understand if it were an acknowledged hard problem in need of the community's attention, but the Yudkowsky Sequences, for example, don't mention it at all.