
Culture War Roundup for the week of October 17, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I think there's an interesting phenomenon: if somebody says "I'm pretty sure X will happen," people respond with "yeah, okay, I could see that" or "nah, I don't think that's true." Whereas if somebody says "I think there's an 80% chance that X will happen," people respond with "WHOA there, look who's larping as an economist with his fancy percentage points."

Possible solution: Say "I'm pretty sure X will happen. Let's say 80% sure." Sounds very unassuming that way.

The problem is cultural. Around here, when someone makes an 80% prediction of a specific event, we know they're publicly stating their priors to make themselves clear, and so that they (and others) can check their calibration later. To general internet-goers, a quantitative prediction that specific sounds ludicrously overconfident. (Not only will it happen, but you know down to the percentage point how likely it is? Mind showing your math, Mr. Silver?)

That gets us to the question of how much practical difference there is between an "80% ± 20% chance" and an "80% ± 0% chance" of a thing happening. I suspect not much, since anything that feeds into your meta-level uncertainty about a probability should also propagate down into your object-level uncertainty about the thing itself.
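A toy simulation of that propagation claim (the five-point spread below is my own crude stand-in for "± 20%", not anything from the thread): however you spread your meta-level uncertainty around 0.8, the marginal chance of the event itself stays 0.8.

```python
import random

random.seed(0)

def event_frequency(p_values):
    """Draw a 'true' p, then the event itself; repeat and count."""
    hits = 0
    trials = 100_000
    for _ in range(trials):
        p = random.choice(p_values)   # meta-level uncertainty about p
        hits += random.random() < p   # object-level uncertainty about the event
    return hits / trials

sharp = [0.8]                      # "80% ± 0%"
fuzzy = [0.6, 0.7, 0.8, 0.9, 1.0]  # crude stand-in for "80% ± 20%"

print(event_frequency(sharp))  # ~0.80
print(event_frequency(fuzzy))  # also ~0.80: the spread washes out of the marginal
```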

Could somebody please convince me that "80% ± 20% chance" is a coherent thought?

If I say "there's a 10% ± 5% chance AAPL will increase by $100 today" and you say "there's a 10% ± 30% chance AAPL will increase by $100 today" and then it happens, who do you trust more?

For the record I'm definitely not convinced that "80% ± 20% chance" is a coherent thought.

Here's a thought experiment: I give you a coin, which is a typical one and therefore has a 50% chance of landing heads or tails. If I asked you the probability it lands on heads, you'd say 50%, and you'd be right.

Now I give you a different coin. I have told you it is weighted, so that it has an 80% chance of landing on one side and a 20% chance of landing on the other (but I haven't told you whether it favors heads or tails). If I asked you the probability it lands heads when flipped, you should still say 50%.

That's because probabilities are a measure of your own subjective uncertainty about the set of possible outcomes. Probabilities are not a fact about the universe. (This is trivially true because a hypothetical omniscient being would know with 100% certainty the results of every future coinflip, thereby rendering them, by a certain definition, "nonrandom". But they would still be random to humans.)
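That thought experiment is easy to check numerically; a minimal sketch, using the 80/20 weighting from the comment above:

```python
import random

random.seed(1)

def flip_mystery_coin():
    # The coin is biased 80/20, but which side it favors is hidden from you,
    # so from your perspective each direction is equally likely.
    p_heads = random.choice([0.8, 0.2])
    return random.random() < p_heads

flips = [flip_mystery_coin() for _ in range(100_000)]
print(sum(flips) / len(flips))  # ~0.5: your subjective P(heads) is still 50%
```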

My first thought was that I agree with you. My second thought was that you can have a confidence in your confidence. My third thought was that that should be baked into your original estimate, and that if you're saying 80% ± 20%, you're really saying something like 70%. Does it really make sense to say, hey, there's a chance I'm 99% sure, but it's only 10%?

These things really only make sense with repeated results anyway. A single event happens or it doesn't.

If you felt a real need to give more details on your prediction, I think it would be more interesting to give buckets (see the sketch after the list), e.g. I expect the housing market to:

5%: shrink > 15%

30%: shrink 5-15%

50%: stay flat, changing [-5%, +5%]

12%: grow 5-15%

3%: grow > 15%
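As a sketch, a bucketed forecast like that is just a discrete distribution, which makes sanity checks (the probabilities must sum to 100%) and derived claims mechanical:

```python
# The bucketed housing forecast above as a discrete distribution
# (labels and probabilities are the made-up numbers from the comment).
forecast = {
    "shrink > 15%":      0.05,
    "shrink 5-15%":      0.30,
    "flat (-5% to +5%)": 0.50,
    "grow 5-15%":        0.12,
    "grow > 15%":        0.03,
}

assert abs(sum(forecast.values()) - 1.0) < 1e-9  # must sum to 100%

# Derived claims fall out for free, e.g. the implied chance of any decline:
p_shrink = forecast["shrink > 15%"] + forecast["shrink 5-15%"]
print(f"P(market shrinks) = {p_shrink:.0%}")  # 35%
```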

OTOH, thinking about this more (which makes me suspect someone smarter and with more statistics training has already thought about this), this still doesn't capture the idea of confidence. What if I don't know what a house is, and you ask me this? Can I really make any meaningful statement? I think I could say 50/50 it grows or shrinks, or 90% that it doesn't change by more than 95% (because few things do), but it's hard to capture what it means to have little certainty.

I think if it's a binary choice, 50% is exactly right, since if you don't know what a house is, no process of reasoning could get you better than a coin flip as to the right answer. Similarly if you have N different choices that you can't distinguish between in any meaningful way.

If you never get any subsequent information about the problem, there's no difference. Either way, you'd let an ignorant stranger bet with you (in any amounts large enough to ignore transaction costs and small enough to ignore decreasing marginal utility of money) if they were willing to bet at less than 4:1 odds, but not at any higher odds.

Once you allow for updating on subsequent information (including as little as "there exists a possibly-non-ignorant stranger willing to take the other side of the bet"!) ... I'm not sure how to interpret the "± N%" chances quantitatively for N>0, but there's at least a qualitative difference between "I don't care if the President of Thing has already pushed the Make Thing Definitely Happen Tomorrow Button, there's still a 20% chance the button's broken" and "a drunk at the bar wants to put up his $20 against my $80, and with that kind of high-rolling stake maybe he knows something I don't; better not risk it after all."
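A minimal sketch of the betting logic above (stakes and the 0.8 belief are just the numbers from this thread): at a believed 80% chance, risking $4 to win $1 is exactly break-even, anything better has positive expected value, and the drunk's $20-against-$80 offer is fair too.

```python
def expected_value(p_win: float, my_stake: float, their_stake: float) -> float:
    """EV of backing X: win their stake with prob p_win, lose yours otherwise."""
    return p_win * their_stake - (1 - p_win) * my_stake

p = 0.8
print(expected_value(p, my_stake=4.0, their_stake=1.0))    # 0.0  -> 4:1 is exactly fair
print(expected_value(p, my_stake=3.0, their_stake=1.0))    # +0.2 -> better than 4:1, take it
print(expected_value(p, my_stake=80.0, their_stake=20.0))  # 0.0  -> the drunk's offer is fair... unless he knows something
```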

Trying to figure out how many yearly predictions you'd need to make to calibrate error-bar estimates. "Out of 7163 '80% ± 10%' predictions, my average error was 7.3%..."
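For a sense of scale, a rough simulation (all numbers arbitrary) of how much pure binomial noise there is in an observed hit rate at various prediction counts; you can't calibrate an error bar until that noise is smaller than the ± you're claiming.

```python
import random

random.seed(2)

def observed_rate(n_predictions: int, true_rate: float = 0.8) -> float:
    """Fraction of n predictions that come true, given a true 80% hit rate."""
    hits = sum(random.random() < true_rate for _ in range(n_predictions))
    return hits / n_predictions

# How widely does the observed hit rate scatter across 200 re-runs?
for n in (10, 100, 1000, 7163):
    rates = [observed_rate(n) for _ in range(200)]
    print(f"n={n:5d}: observed hit rates span ~{max(rates) - min(rates):.1%}")
```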

I think the normies are at least partly correct here. I think it's a mistake to say "I don't have a methodology for actually calculating my Bayesian priors, but let me put a number on it anyway just to make myself more clear." You are not actually clarifying your position; you are obfuscating it.

In science, the concept of significant figures is extremely important because you have to represent the precision of your knowledge accurately. Let's say I have 1 kg of lead, and lead has a density of 11342 kg/m³; how many m³ of lead do I have? 1/11342 = 0.0000881679. Is it accurate to say I have "0.0000881679 m³" of lead? No, because that's representing an inaccurate degree of precision in my knowledge.
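As a small illustration (the helper function is my own, not from any library): round the computed volume back down to the couple of significant figures the inputs actually justify.

```python
from math import floor, log10

def round_sig(x: float, sig: int) -> float:
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - floor(log10(abs(x))))

volume = 1 / 11342           # kg divided by kg/m^3 gives m^3
print(volume)                # ~8.81679e-05, far more digits than the measurement earned
print(round_sig(volume, 2))  # 8.8e-05: the precision the inputs actually justify
```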

I think people reporting a Bayesian prior of "90% confidence" are usually committing the same mistake: they're misrepresenting the precision of their knowledge. Normies pick up on this and interpret it (correctly) as ludicrous overconfidence.

I do wonder how precisely the human mind can really assign confidence internally, without augmenting it with external tools. If most people can only hold about seven items in working memory, maybe there are just seven buckets of confidence, and offering a probability out of 100 just comes off as wildly overconfident unless you actually show your work. On that view, someone who simply says something will happen is communicating their precision more accurately than someone who assigns it a 90% probability.

The way to test this is to go around saying, "90% confidence plus or minus blah blah blah."

If normies intuitively understand significant figures and uncertainty, the blah blah amount will influence their reaction.

If normies are disgusted by numbers and wanna-be-economists, then the uncertainty wouldn't ever matter.

90% confidence plus or minus blah blah blah

Unless you are using some transparent methodology to calculate the confidence interval, this is even worse than just saying 90% because you are now claiming to know both your priors and the uncertainty of your priors with high levels of precision.

The normie can, and probably will, also doubt how accurately your confidence interval is calculated.

(And that assumes a normie who understands the term. Intuitively understanding X is not the same as understanding all the terms used to describe X.)

Probably because these things are hard to quantify. To say that there is an X-percent chance of something happening with a high degree of confidence, you need a lot of data points, which the OP doesn't have. It's a problem with forecasting in general, not just finance.

Eh, I doubt it's anything that logical. "Pretty sure that X" is, I think, just a colloquialism whose meaning is synonymous with "roughly 80% chance of X", similar to how "I'm basically certain of X" cashes out to "roughly 98% chance of X". Do you think of these statements as being fundamentally different in some way?
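If that's right, you could (very roughly) treat the common hedges as probability bands; the bands below are illustrative guesses in the spirit of the comment above, not any established standard.

```python
# Hypothetical mapping from verbal hedges to rough probability bands.
VERBAL_ODDS = {
    "almost certainly not": (0.00, 0.05),
    "probably not":         (0.05, 0.30),
    "maybe":                (0.30, 0.60),
    "pretty sure":          (0.70, 0.90),
    "basically certain":    (0.95, 1.00),
}

def gloss(phrase: str) -> str:
    lo, hi = VERBAL_ODDS[phrase]
    return f"'{phrase}' ~ {lo:.0%}-{hi:.0%} chance"

print(gloss("pretty sure"))  # 'pretty sure' ~ 70%-90% chance
```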

Humans can't count as high as 100, and {"maybe", "possibly", "probably", "likely", "definitely", "almost certainly"} aren't shorthand for different numbers. Remember that, at scale, people don't differentiate between 95p and 99p and 99.99p.