not-guilty is not the same as innocent

felipec.substack.com

In many discussions I'm pulled back to the distinction between not-guilty and innocent as a way to demonstrate how the burden of proof works and what the true default position should be in any given argument. Many people have no problem seeing the distinction, but for some reason many otherwise intelligent people don't.

In this article I explain why the distinction exists and why it matters, in particular in real-life scenarios where people try to shift the burden of proof.

Essentially, in my view the universe we are talking about is {uncertain, guilty, innocent}, so not-guilty is the complement of guilty, which is {uncertain, innocent}. Therefore innocent ⇒ not-guilty, but not-guilty ⇏ innocent.

When O. J. Simpson was acquitted, that doesn’t mean he was found innocent, it means the prosecution could not prove his guilt beyond reasonable doubt. He was found not-guilty, which is not the same as innocent. It very well could be that the jury found the truth of the matter uncertain.

This notion has implications in many real-life scenarios where people want to shift the burden of proof onto you for rejecting an unsubstantiated claim. They wrongly assume you are claiming their claim is false (equivalent to innocent), when in truth all you are doing is staying in the default position (uncertain).

Rejecting the claim that a god exists is not the same as claiming a god doesn't exist: it doesn't require a burden of proof because it's the default position. Agnosticism is the default position. The burden of proof is on the people making the claim.

I'm not asking if the coin is biased, I'm asking if the next coin flip will land heads. It's a yes-or-no question that Bayesians would use a single number to answer.

Yeah

But at first it seemed to me you were talking about the bias and what you can learn about it from repeated tosses (and were confused in thinking Bayesians wouldn't learn).

If I throw the coin 100 times and it lands heads 50 of those times, the final percentage is 50%. If I throw the coin 1,000,000 times and it lands heads 500,000 of those times, it's still 50%, so I have gained zero information.

So like we've talked, they'd use many numbers to compute the probability of the yes-no question; they just give the final answer as one number. Bayesians do consider uncertainty, to whatever level they feel they need. What they don't do is give uncertainties about uncertainties in their answers. And they see the probability of the next toss being heads as equivalent to "how certain am I that it's going to be heads?" (to a Bayesian, probabilities are also uncertainties in their minds, not just facts about the world). IIUC, you would be happy saying you believe the next toss has a 50%±20% chance of being heads. Why not add uncertainty to the 20% too, since you are not sure it should be exactly 20%, as in 50%±(20±5)%? If that feels redundant in some sense, that's how a Bayesian feels about saying "the coin will come up heads, I'm 50% sure, but I'm only 30% sure of how sure I am". If it doesn't feel redundant, add another layer until it does :P
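
To make that concrete, here's a minimal Octave sketch (the Beta parameters are made-up illustrations, nothing more): two very different states of belief about the coin's bias marginalize to the same single predictive number for the next toss, which is why the headline answer is one number.

a1 = 2;   b1 = 2;     % vague belief: the coin is probably roughly fair
a2 = 500; b2 = 500;   % confident belief: the coin is almost surely fair
p1 = a1 / (a1 + b1)   % predictive probability of heads on the next toss: 0.5
p2 = a2 / (a2 + b2)   % also 0.5; the extra confidence lives in the distribution, not in this number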

No, I say "I don't know" (uncertain), which cannot be represented with a single probability number.

Still, I think I see your point in part. There is clearly some relevant information that's not being given in the answer if the answer to "will this fair coin land heads?", 50%, is the same as the answer given to "plc ashetn ðßh sst?" (well-posed question in a language I just invented), now a lame 50% meaning "the whaat huuhhh?".

If it doesn't feel redundant, add another layer until it does :P

But they don't do that, they give a single number. Whatever uncertainty they had at the beginning is encoded in the number 0.5.

Later on, when their decision turns out to be wrong, they claim it wasn't wrong, because they arrived at that number rationally and nobody would have arrived at a better number.

Still, I think I see your point in part. There is clearly some relevant information that's not being given in the answer if the answer to "will this fair coin land heads?

It's not just about how the answer is given, it's about how the answer is encoded in your brain.

If the answer to some question is "blue", it may not be entirely incorrect, but later on when you are asked to recall a color you might very well pick any blue color. On the other hand if your answer was "sky blue", well then you might pick a more accurate color.

I claim the correct answer should be 50%±50%, but Bayesians give a single answer: 50%, in which case my answer of "uncertain" is way better.

The correct answer depends on what the question is.

If the question is "what's the color of that thing you last saw 5 days ago?", Bayesians would be just like you and answer "blue" and not "sky blue #020fe8".

When you ask "how will the next coin toss land?", an answer disregarding uncertainty would be "it will be heads". An answer that takes uncertainty into account could be "I'm almost sure it will be heads", or "I suspect it will be tails", or "I haven't got a clue". A Bayesian would phrase those as "95%" (almost sure heads), "40%" (suspect tails), or "50%" (no idea).

In Bayesianese, answering that specific question with "50%+-50%" would mean something like "I have no clue if I have a clue whether the next coin toss will be heads or tails", which sounds weird. So I am inferring that you mean "50%+-50%" as an answer to a slightly different question, such as "how frequently would this coin land heads over many tosses?". Which one may phrase as "what's the probability that this coin comes up heads if I toss it?"; but then with this phrasing, a (subjective) Bayesian during a nitpicky philosophical discussion might parse it as "how will the next coin toss land (please, answer in a way that conveys your level of uncertainty)?". That's why I suspect there was talking past each other in your discussions with other people.

In Bayesianese, "50%+-50%"

But that's not Bayesian. That's the whole point. And you accepted they use a single number to answer.

You: They use a single number for probabilities. They should use 2 like 50%+-20%

Me: Yes, they use a single number. No, they shouldn't use two when they interpret probability as meaning subjective uncertainty. They should if they interpret it to mean something objective.

You: They don't learn from multiple coin tosses, they would need more than one number for that.

Me: They do learn. They use many numbers to compute.

You: They don't take uncertainty into account.

Me: They do, the probability is the uncertainty of the event.

You: 50%+-20% is analogous to saying "blue" whereas saying 50% is analogous to saying "sky blue".

Me: Not if probability means uncertainty. Then 50% maps to "blue", and 50%+-20% maps to nonsense.

You: My answer is correct.

Me: It depends on the question.

I'm not sure what's left here to discuss. I didn't get this follow-up.

But that's not Bayesian

Right. Which is what that very sentence you half quoted explains.

Me: They do learn. They use many numbers to compute.

They don't. The probability that the next coin flip is going to land heads is the same whether the record is 0/0, 50/50, or 5000/5000: it's 0.5. It does not get updated.

Me: They do, the probability is the uncertainty of the event.

No. It's not. p=0.5 is not the uncertainty.

You: 50%+-20% is analogous to saying "blue" whereas saying 50% is analogous to saying "sky blue".

I didn't say that.

Me: Not if probability means uncertainty.

Which is not the case.

Me: It depends on the question.

There is no other question. I am asking a specific question, and the answer is p=0.5, there's no "it depends".

p=0.5 is the probability the next coin flip is going to land heads. Period.

I'm going to attempt to calculate the values for different numbers of heads and tails with 95% confidence, so there's no confusion about "the question":

  • 0/0: 0.5±∞

  • 5/5: 0.5±0.034

  • 50/50: 0.5±0.003

  • 5000/5000: 0.5±0.000

It should be clear now that there's no ambiguity about "the question". The answer for Bayesians is p=0.5, and they don't encode uncertainty at all.

the probability the next coin flip is going to land heads

0/0: 0.5±∞

5/5: 0.5±0.034

50/50: 0.5±0.003

5000/5000: 0.5±0.000

  1. If I ask for the probability that Putin is dead tomorrow, I'd say that fixes the date. You don't move "tomorrow" along with you so it never arrives. After the next coin flip happened, it either was heads or it wasn't, there's nothing left.

  2. There is that word "probability" in the question, so of course how one interprets that word changes the question. If you disagree, give an argument. Instead, you are just repeating that your way of interpreting the word is the only way. I'd ask you to rephrase the question without using the words "the probability/chances/odds" or any such synonym. Then ask how a Bayesian would answer that version of the question, and see if the disagreement persists.

Then ask how a Bayesian would answer that version of the question, and see if the disagreement persists.

I know the definitions of probability, I know what probability is according to a Bayesian, I know what a likelihood function is, and I know what the actual probability of this example is, because I wrote a computer simulation with the actual probability embedded in it.

You are just avoiding the facts.

You know what probability is according to a Bayesian, and you think they are factually wrong. The rest of the problems stem from that. I'd suggest you at least focus your arguments on why you think they are objectively wrong. Instead, you inject your understanding of probability into their statements and conclude factually wrong things, like that they don't consider uncertainty when they do.

because I wrote a computer simulation with the actual probability embedded in it.

Then a Bayesian would be willing to answer the question of what that parameter you embedded in your simulation is, with answers like beta(51,51).

They don't. The probability that the next coin flip is going to land heads is the same: 0/0, 50/50, 5000/5000 is 0.5. It does not get updated.

Uff, I even told you how it's done. It's like I just pressed "new chat" on ChatGPT. Re-read or go Google "Bayesian inference coin flipping". It doesn't get more basic than that. I'm moving on, there's no progress to be made.

Uff, I even told you how it's done.

Show me step by step, I'll show you where you are wrong.

Say p ~ beta(1, 1). Got 50 heads out of 50 tosses? Apply Bayes' rule, get posterior p ~ beta(51, 1), so the probability of heads on the next toss went from 50% to 51/52 ≈ 98%.
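
A minimal Octave sketch of that conjugate Beta-Binomial update (the 50-heads-in-50-tosses run is just the hypothetical above):

a = 1; b = 1;                 % prior p ~ beta(1, 1), i.e. uniform over the bias
heads = 50; tails = 0;        % hypothetical data: 50 heads in 50 tosses
a = a + heads;                % conjugate update: posterior is beta(51, 1)
b = b + tails;
p_next_heads = a / (a + b)    % posterior predictive for the next toss: 51/52, about 0.98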

The answer for Bayesians is p=0.5, and they don't encode uncertainty at all.

This is false. Bayesian calculations are quite capable of differentiating between epistemic and aleatory uncertainty. See the first Google result for "Bayes theorem biased coin" for an example.

(edit to add: not a mathematically perfect example; the real calculations here treat the bias as a continuous probability space, where a Bayesian update turns into an integral equation, and discretizing into 101 bins instead so you can use basic algebra is in the grey area between "numerical analysis" and "cheating".)
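
Roughly what that example does, sketched in Octave (the grid size and the uniform prior are my assumptions about the linked write-up):

theta = linspace(0, 1, 101);                  % 101 candidate values for the coin's bias
prior = ones(1, 101) / 101;                   % uniform prior over those candidates
likelihood = theta;                           % P(heads | bias)
evidence = sum(likelihood .* prior);          % P(heads): a single number, about 0.5
posterior = likelihood .* prior / evidence;   % a whole distribution over the bias, carried into the next update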

See the first Google result for "Bayes theorem biased coin" for an example.

Did you actually read that? It clearly says:

P(“Heads”): The evidence term is the overall probability of getting heads and is the sum of all 101 (prior * likelihood) products.

It's a single value.

I looked at the code of the simulation:

evidence = sum(likelihood .* prior);

It's a single value.

I printed the variable at the end of the simulation:

p=0.498334

It's a single value.

I read, and I understood, and I also looked at the graphs discretized over hundreds of values, and I'm able to understand that when a probability distribution has a mean value in ℝ, that does not mean that the probability distribution itself lies in ℝ, no matter how many times you repeat "single value" after you finish downvoting me. You seem to believe that any space on which a functional can give a single value is a one-dimensional space? This is again false.

Let's try going through an example more slowly:

A uniform prior on an unknown coin bias θ∈[0,1] (p₀≡1) will marginalize to a probability of 1/2 (∫p₀(θ)θdθ) for the next coin flip being heads, for example, and that mean is the exact same "single value" as a delta function (pᴅ≡δ(θ-1/2)) at p=1/2 (∫δ(θ-1/2)θdθ), but the delta function will give p=1/4 for the next two flips in a row being heads (∫δ(θ-1/2)θ²dθ) while the uniform prior will give p=1/3 (∫p₀(θ)θ²dθ).

Do a Bayesian update on the uniform prior after one flip of heads (pʜ(θ) = θp₀(θ)/∫φp₀(φ)dφ = 2θ), and it'll now say that p(heads next) is 2/3 (∫pʜ(θ)θdθ) and p(2 heads in the next 2 flips) is 1/2 (∫pʜ(θ)θ²dθ); do the same update on the delta function prior and it'll say that the next p(heads) is still 1/2 and the next p(2 heads) is still 1/4, because updating the delta function to θpᴅ(θ)/∫φpᴅ(φ)dφ just gives back the exact same delta function. They started with the exact same "single value", but they updated in different ways because that single value was just an integral of an entire probability distribution function, and Bayesian analysis saves the whole distribution to bring to an update.

Do another Bayesian update after a flip of tails following the flip of heads, and the originally-uniform prior will be back to p(heads next)=1/2 (for a third time, the exact same single value!)... but it won't be back to uniform (pʜᴛ(θ) = (1-θ)pʜ(θ)/∫(1-φ)pʜ(φ)dφ = 6θ(1-θ)); ask the posterior here for p(2 heads on the 3rd and 4th flips) and it'll be at 3/10; it's now squeezing probability mass much closer to fair than the uniform prior did, though it's not the same output as the delta function either.
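
If it helps, the same worked example can be checked numerically in Octave (just trapezoid-rule integrals on a grid; the grid resolution is arbitrary):

theta = linspace(0, 1, 10001);                              % grid over the unknown bias
p0 = ones(size(theta));                                     % uniform prior
pH = theta .* p0 / trapz(theta, theta .* p0);               % posterior after one heads: 2*theta
pHT = (1 - theta) .* pH / trapz(theta, (1 - theta) .* pH);  % posterior after heads then tails: 6*theta*(1-theta)
trapz(theta, pH .* theta)       % p(heads next) after H: ~2/3
trapz(theta, pH .* theta.^2)    % p(2 heads next) after H: ~1/2
trapz(theta, pHT .* theta)      % p(heads next) after H,T: ~1/2 again
trapz(theta, pHT .* theta.^2)   % p(2 heads next) after H,T: ~3/10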

The difference between a probability distribution and a marginalization of a probability distribution is really important. There's actually a kernel of truth to complaining here: doing Bayes on "the entire universe" is intractable, so in practice we always marginalize away "anything we don't expect to matter" (notice even when representing a uniform prior over coin biases I still haven't accounted for "the coin might land on its edge" or "someone might grab it in midair", or...) when updating, and we have to watch out for accidentally ignoring too much. But just because you have to marginalize away a lot doesn't mean you're required to marginalize away everything!

You're the guy from "2+2≠4" who was having trouble with equivalence classes and Shannon information, right? I was feeling bad about how many people were just downvoting you and moving on, and at that point I wasn't one of them, but now I kind of get it: downvoting falsehoods and moving on is fast, correcting falsehoods is slow, and I fear sometimes dissuading further falsehoods is impossible. I'd love to find out that you can now compute a posterior pʜᴛʜ, apologize to everyone you've been rude to, and start asking questions rather than making false assertions when there's something you don't understand, but if not, well, I guess that is what the downvote button is for after all.
