
Pay no attention to the Model Behind the Curtain!

link.springer.com

Many widely used models amount to an elaborate means of making up numbers—but once a number has been produced, it tends to be taken seriously and its source (the model) is rarely examined carefully. Many widely used models have little connection to the real-world phenomena they purport to explain. Common steps in modeling to support policy decisions, such as putting disparate things on the same scale, may conflict with reality. Not all costs and benefits can be put on the same scale, not all uncertainties can be expressed as probabilities, and not all model parameters measure what they purport to measure. These ideas are illustrated with examples from seismology, wind-turbine bird deaths, soccer penalty cards, gender bias in academia, and climate policy.


But that need not be because of how much money it is, but because money is fungible to other projects that we care about

Yes, obviously, that is the point. You are still comparing a human life to that other project, whose benefits may be in lives saved but may not be, and you're sometimes going to be faced with such a decision.

maybe saving lives is infinitely valuable, but spending more than a million dollars to save one would stop you from saving two lives at 500k each

Until you more precisely define what you mean by "infinitely valuable," this statement is meaningless because 2 infinite things may be identical to 1. But also, in practice, literally no one's behavior reflects such a claim, and since you can't have everything be infinitely valuable, you would still be faced with many other "noncomparable" decisions, like, to use the paper's examples, culture and the environment.
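To spell out the arithmetic problem, here's a sketch using the extended reals, one natural formalization of "infinitely valuable":

$$v = \infty \implies \underbrace{v + v}_{\text{two lives}} = \infty = \underbrace{v}_{\text{one life}},$$

so a value-maximizer who assigns $v = \infty$ per life is indifferent between saving one life and saving two, which is presumably not what anyone means.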

(unless you want to put all of your probability mass on a countable set, as this paper discusses)

Is there any instance in which doing so would be empirically distinguishable from a truly continuous distribution or outcome space? We can only make measurements that have rational values, for example. Using real numbers is often a very good approximation to something that is actually discrete (like molecules in a fluid), and it avoids even more tedious inequality-chasing, but it isn't necessary. And if you don't take the axiom of choice, the response to "what about non-measurable sets?" is "What are you talking about? Should I also consider what happens when 1+1 is 3?"
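As a concrete sketch of the point (my own illustration; the grid size and measurement precision are arbitrary choices):

```python
# Sketch: a distribution supported on a countable set of rationals is
# empirically indistinguishable from U(0, 1) once measurements are rounded
# to finite precision. (random.random() itself already has countable support:
# it only ever returns multiples of 2**-53.)
import random
from collections import Counter

GRID = 10**9       # support {k/GRID : 0 <= k < GRID} -- countable, even finite
PRECISION = 3      # decimal places any real instrument might report
N = 100_000

def measure(x: float) -> float:
    """Every actual measurement reports finitely many digits."""
    return round(x, PRECISION)

continuous = Counter(measure(random.random()) for _ in range(N))
discrete = Counter(measure(random.randrange(GRID) / GRID) for _ in range(N))

# The gap between the two sets of observed frequencies is pure sampling noise.
keys = set(continuous) | set(discrete)
gap = max(abs(continuous[k] - discrete[k]) for k in keys) / N
print(f"largest per-value frequency gap: {gap:.4f}")
```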

Moreover, that paper says:

Propositions need not get a probability, but may instead be assigned a range of probabilities.

which, as far as I can tell, hasn't actually avoided the alleged problem.

The Vitali set was simply an example, but you could just as well use a non-measurable set to represent an agent’s imprecise credence in a prosaic proposition about which they have genuine ignorance, rather than mere uncertainty.

There are models of ZF (even with a weaker version of choice: Solovay's model satisfies dependent choice) in which all subsets of R are Lebesgue measurable. If you want I suppose you could develop an epistemology where some propositions have undefined probability, but if I choose not to use choice, are you going to say that doing so must be wrong, because my model of the world contains no such sets? After all, the original paper is claiming that there definitely are hypotheses to which probability cannot be applied at all.

I don’t see what’s problematic about that.

Well, for one, if it has an undefined prior probability, how do you do any sort of update based on evidence? If you receive some information on it, how would you know how strongly it supports your hypothesis compared to others, such as "the measurement device was flawed"? But again, even if you would like to think this way, it doesn't mean that an alternative is wrong.

Until you more precisely define what you mean by "infinitely valuable," this statement is meaningless because 2 infinite things may be identical to 1.

As in, more valuable than any consideration not to do with saving lives.

But also, in practice, literally no one's behavior reflects such a claim, and since you can't have everything be infinitely valuable, you would still be faced with many other "noncomparable" decisions, like, to use the paper's examples, culture and the environment.

I don't think your empirical claim is true. And I don't know what you mean by "non-comparable." If one factor gets absolute priority, then nothing is non-comparable.

Is there any instance in which doing so would be empirically distinguishable from a truly continuous distribution or outcome space?

It wouldn't reflect orthodox Bayesianism, that's for sure. It would also make your probability measure no longer translation-invariant or dilation-linear, which is bad. Now it seems like you're falling back to naïve operationalism instead of actually defending Bayesianism. And if all you're concerned with is some pre-theoretical conception of what's "empirically distinguishable," then why bother with infinities at all instead of just sticking to finite sigma-algebras? No one ever actually observes countable infinities of events either.
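To make the invariance point concrete: translation invariance and dilation linearity are the standard properties

$$\lambda(A + t) = \lambda(A), \qquad \lambda(cA) = |c|\,\lambda(A),$$

which Lebesgue measure $\lambda$ has and which any measure $\mu$ concentrated on a countable set $S \subset [0,1)$ must give up: pick any $t$ outside the countable set $\{s - s' \bmod 1 : s, s' \in S\}$ and translating mod 1 gives $\mu(S + t) = 0 \neq 1 = \mu(S)$.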

which, as far as I can tell, hasn't actually avoided the alleged problem.

Except they explicitly deny that that's an adequate solution in the paper: "Although we argue for imprecise credences, in §6 we argue against the standard interpretation of imprecise credences as sets of precise probabilities."

And if you don't take the axiom of choice, the response to "what about non-measurable sets?" is "What are you talking about? Should I also consider what happens when 1+1 is 3?"

Only if you have literally 0 credence that the axiom of choice might be true. Arguably even that's not enough: you would need to exclude any propositions involving that axiom from your sigma-algebra (in which case Cox's theorem fails again), because many prominent Bayesians (e.g. Alan Hájek) think Bayesianism should be extended with primitive conditional probabilities to allow conditionalization on any non-impossible proposition. Otherwise, you're still going to have to deal with questions like, "If the Axiom of Choice were true, then what would be the probability that a fair dart on the unit interval hits an element of the Vitali set?"
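For reference, the standard argument behind that question: for a uniform (translation-invariant mod 1) dart, if $P(V)$ were defined, then with $q_1, q_2, \dots$ enumerating the rationals in $[0,1)$, the translates $V + q_i \pmod 1$ partition $[0,1)$, and countable additivity would give

$$1 = P\Big(\bigcup_{i} (V + q_i)\Big) = \sum_{i=1}^{\infty} P(V),$$

which fails whether $P(V) = 0$ or $P(V) > 0$.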

If you want I suppose you could develop an epistemology where some propositions have undefined probability, but if I choose not to use choice, are you going to say that doing so must be wrong, because my model of the world contains no such sets?

I think that it would be a defective model because, as discussed above, it would be incapable of even contemplating alternatives within its own framework. But even putting that aside, that would be a substantial retreat from your original claim that Cox's theorem proves that all uncertainty is reducible to (precise) probability.

After all, the original paper is claiming that there definitely are hypotheses to which probability cannot be applied at all.

And you said that there definitely aren't, so you made an equally strong claim, which is what I took issue with, not the idea that there's no coherent model where that's true.

Well, for one, if it has an undefined prior probability, how do you do any sort of update based on evidence?

But I'm not saying it would be undefined, just that it would be imprecise.
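On the usual credal-set reading (which, as noted above, the paper itself rejects in §6), updating is perfectly well-defined: apply Bayes' rule to each admissible prior. A minimal sketch with invented numbers:

```python
# Sketch: updating an interval-valued (imprecise) prior by applying Bayes'
# rule at each endpoint. The posterior is monotone in the prior, so the
# prior interval maps onto the posterior interval.
def bayes(prior: float, lik_h: float, lik_alt: float) -> float:
    """P(H | E) given P(H) = prior, P(E | H) = lik_h, P(E | not-H) = lik_alt."""
    return prior * lik_h / (prior * lik_h + (1 - prior) * lik_alt)

prior_lo, prior_hi = 0.2, 0.6   # genuine ignorance: a range, not a point
lik_h, lik_alt = 0.9, 0.3       # evidence E is three times likelier under H

post = [bayes(p, lik_h, lik_alt) for p in (prior_lo, prior_hi)]
print(f"posterior credence in H: [{post[0]:.3f}, {post[1]:.3f}]")  # [0.429, 0.818]
```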

But again, even if you would like to think this way, it doesn't mean that an alternative is wrong.

See above.

I don't think your empirical claim is true.

No one that I'm aware of has given up all non-essential consumption, and attempted to force others to do so, in order to save more lives.

And I don't know what you mean by "non-comparable." If one factor gets absolute priority, then nothing is non-comparable.

What you said doesn't allow you to compare culture and the environment. I already gave this example, so I don't know why you're so confused.

It wouldn't reflect orthodox Bayesianism, that's for sure. Now it seems like you're falling back to naïve operationalism instead of actually defending Bayesianism.

What? This has nothing to do with Bayesianism. We can only measure things to finite accuracy, so we will never know whether a result is exactly pi meters long. And the universe itself may be discrete, in which case that concept may not even make sense. Similarly, we can never explicitly describe all of the possible outcomes in an uncountable set. You could use rational numbers for all of the relevant math we do; it would just be harder, for no real improvement.
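A tiny sketch of the rationals-only point (my example, not from the thread):

```python
# Sketch: discrete probability arithmetic done exactly over Q -- no reals needed.
from fractions import Fraction
from itertools import product

# P(two fair dice sum to 7), as an exact rational
outcomes = list(product(range(1, 7), repeat=2))
p = Fraction(sum(1 for a, b in outcomes if a + b == 7), len(outcomes))
print(p)  # 1/6 -- exact, and every intermediate quantity was rational
```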

But I'm not saying it would be undefined, just that it would be imprecise.

So what does this have to do with the paper being discussed?

"If the Axiom of Choice were true, then what would be the probability that a fair dart on the unit interval hits an element of the Vitali set"?

Asking whether an axiom of mathematics is true is a nonsense question. We have different systems of mathematics, with different sets of axioms. As long as your system is consistent, it is not any more or less "true" than a different system that is also consistent. What you could ask is something like "within ZFC, what is P(X in V)?" (where X ~ U(0,1) and V is the Vitali set), and that quantity has no well-defined value. But the original paper makes no such argument; it just asserts that probability doesn't always apply, for reasons that are entirely unrelated to this one. It certainly never mentions that you must assume the axiom of choice for this argument; given that the author is a statistician, I would be surprised if he knew any of this. This discussion is also unrelated to science: such sets are never going to be relevant in practice, and even if you assume the AoC, you can never explicitly define any non-measurable sets.

Most people believe in deontological constraints in addition to value maximization, so even if they thought that saving lives were infinitely valuable, they wouldn’t necessarily force others to try to do so. Cf. Christians who believe saving souls is infinitely valuable, but don’t try to force everyone to convert because they think there are deontological constraints against forced conversion. And plenty of people have been willing to sacrifice all unnecessary consumption and force others to do so to save lives; see, e.g., lots of climate fanatics.

Making decisions about which thing to prioritize doesn’t require them to be comparable. There is a vast philosophical literature about decision theory involving incomparability, exactly none of which affirms that we can never rationally choose one incomparable thing over the other.

Whether or not we can measure things to arbitrary precision has little to no bearing on whether our probabilities should be arbitrarily precise, because probabilities are not all about empirical measurements. What is the probability of a fair dart thrown at [0,pi] landing within [0,1]? Hey, it’s exactly 1/pi, no arbitrarily-precise calculations necessary. Lots of probabilities are a priori and hence independent of empirical measurements.
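Spelled out, the only input is a ratio of lengths:

$$P(X \in [0,1]) = \frac{\lambda([0,1])}{\lambda([0,\pi])} = \frac{1}{\pi},$$

known a priori to every decimal place without measuring anything.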

It has to do with the paper because where do they say that the inapplicability of precise probabilities requires that the probability be undefined rather than imprecise? It seems obvious they’re talking specifically about precise probabilities, so clearly undefined probabilities are not the sole relevant alternative here.

What you are asserting about mathematical truth is a highly contested position in the philosophy of mathematics, and it’s not clear to me that it’s even coherent. If by “true” you literally mean “true,” then it can’t be coherent, because then you’d have axiom systems with contradictory axioms both of which would be true. If you don’t mean “true” literally, then I don’t know what you mean by it.

I am not exclusively defending the original paper on its own terms, I just think your arguments against some of its conclusions rest on unsound premises for (at least partially) independent reasons. And my main issue is with your original implication that the conclusions you criticized are just obviously wrong. As we’ve seen in the course of this discussion, the premises you assert against those conclusions are highly non-obvious themselves, whether they’re true or not.

Most people believe in deontological constraints in addition to value maximization, so even if they thought that saving lives were infinitely valuable, they wouldn’t necessarily force others to try to do so

Yes, exactly. "The value of a soul" and "deontological considerations about forced conversion" are wildly different things, and yet people weigh them against each other, which is just the sort of thing the author of this paper would call "quantifauxcation."

Making decisions about which thing to prioritize doesn’t require them to be comparable. There is a vast philosophical literature about decision theory involving incomparability, exactly none of which affirms that we can never rationally choose one incomparable thing over the other.

This sounds like a quibble over definitions. I would consider any decision between X and Y to constitute a comparison between them, by the common definition of the word "compare." You don't have to agree with that definition, but it seems like you do agree that people regularly decide between two things that are extremely different from each other, just as it is totally valid to say something like "the average person would decide to take X dollars in exchange for an increase of Y to their risk of death."
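(That last trade-off is just the standard "value of a statistical life" arithmetic; with made-up numbers, accepting \$500 for an extra 1-in-10,000 risk of death implies

$$\text{implied VSL} = \frac{X}{Y} = \frac{\$500}{1/10{,}000} = \$5{,}000{,}000,$$

a comparison across wildly different goods that ordinary people make all the time.)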

Lots of probabilities are a priori and hence independent of empirical measurements.

I think you're making a very, very different argument than was made in the original paper. Which is fine, but it's not really relevant to my argument. The probability you gave is exactly 1/pi, yes. As far as I can tell, this is unrelated to the claim that the use of probabilities for complex problems is inappropriate because you lack sufficient information to calculate a probability. As far as I know, no one says something like "the completely precise probability of conflict in the Korean peninsula this year is 5 + pi/50 percent." That's just a strawman.

What you are asserting about mathematical truth is a highly contested position in the philosophy of mathematics, and it’s not clear to me that it’s even coherent.

I don't think most logicians would tell you there's a definitive answer to the question, "Is the axiom of choice true?" Or, perhaps an even better example, the continuum hypothesis. But again, I don't think any of this is relevant to the claims being made in the paper. I guess you think that some of what I wrote isn't literally true, if interpreted in a different context than this thread?

Perhaps our disagreement about comparability is merely verbal, but for future reference your usage of the term is widely divergent from most philosophical treatments of "comparability" in value theory (e.g. see here), so you may want to change it to avoid confusion in the future.

I think you're making a very, very different argument than was made in the original paper. Which is fine, but it's not really relevant to my argument.

As I said before, I think that your argument drew far more general conclusions than simply the negations of the paper's conclusions, so I disagree.

As far as I know, no one says something like "the completely precise probability of conflict in the Korean peninsula this year is 5 + pi/50 percent." That's just a strawman.

Well, I don't think that anyone says "the completely precise probability of X is Y" about much of anything, because people usually don't work with completely exact probabilities at all. Which I take to be among the claims of the paper.

I don't think most logicians would tell you there's a definitive answer to the question, "Is the axiom of choice true?"

That is completely different from saying that it makes no sense to draw conclusions about what would follow if it were true, like values of conditional probabilities, which mathematicians and logicians do all the time.

Perhaps our disagreement about comparability is merely verbal, but for future reference your usage of the term is widely divergent from most philosophical treatments of "comparability" in value theory (e.g. see here), so you may want to change it to avoid confusion in the future.

The author is a statistician, not a philosopher, and based on what I was responding to, I think what I said makes sense. Maybe you should assume the more common definitions instead of esoteric ones, and explain in advance when you are using such an obscure meaning, unless the context is highly specific. I'm certainly not going to warp how I use words around some academics' redefinition.

Your link is paywalled, so I can't really comment on it.

Well, I don't think that anyone says "the completely precise probability of X is Y" about much of anything, because people usually don't work with completely exact probabilities at all. Which I take to be among the claims of the paper.

That's not how I read it. The author said that the entire notion of probability is inapplicable in situations where you lack the information to calculate an exact probability, as you could with a fair coin or fair die.

That is completely different from saying that it makes no sense to draw conclusions about what would follow if it were true, like values of conditional probabilities, which mathematicians and logicians do all the time.

You can certainly discuss what consequences the axiom of choice implies. My point was that if your position about real-world empirical work (or even pure mathematics) depends on whether the AoC "is true" then you have most likely lost the plot. A statement like "if the axiom of choice, then probability is inapplicable in many real world scenarios, but if not AoC, then it is applicable" is almost certainly wrong.

I don’t think that my definition is any more obscure than your personal usage of the term. I’ve never heard anyone else use the term like you do, whereas my usage at least conforms to an extant literature. Here’s a sci-hub link to that paper, sorry, it wasn’t paywalled for me: https://sci-hub.st/https://www.journals.uchicago.edu/doi/full/10.1086/339673?

Lots of empirical work depends upon a background mathematical framework. Statistics is no different. And I never said that probability wasn’t applicable if the AoC is true; all I said was that the AoC would have to be absolutely, determinately false if every uncertainty is to be reducible to precise probabilities.

I don’t think that my definition is any more obscure than your personal usage of the term. I’ve never heard anyone else use the term like you do, whereas my usage at least conforms to an extant literature. Here’s a sci-hub link to that paper, sorry, it wasn’t paywalled for me: https://sci-hub.st/https://www.journals.uchicago.edu/doi/full/10.1086/339673?

This is not "my personal usage"; it's just what the word means in English, and how it's used all the time.

https://www.dictionary.com/browse/compare

"to examine (two or more objects, ideas, people, etc.) in order to note similarities and differences"

Lots of empirical work depends upon a background mathematical framework. Statistics is no different. And I never said that probability wasn’t applicable if the AoC is true,

Lots of empirical work does depend on a background mathematical framework, but the axiom of choice should not be a relevant part of it.

all I said was that the AoC would have to be absolutely, determinately false if every uncertainty is to be reducible to precise probabilities.

Again, I think you're making an entirely different claim to the one that was in the paper, but using similar terminology in a way that's confusing.
