not-guilty is not the same as innocent

felipec.substack.com

In many discussions I'm pulled back to the distinction between not-guilty and innocent as a way to demonstrate how the burden of proof works and what the true default position should be in any given argument. Many people have no problem seeing the distinction, but for some reason many intelligent people don't.

In this article I explain why the distinction exists and why it matters, in particular why it matters in real-life scenarios, especially when people try to shift the burden of proof.

Essentially, in my view the universe we are talking about is {uncertain, guilty, innocent}, therefore not-guilty is the complement of guilty, which is {uncertain, innocent}. Therefore innocent ⇒ not-guilty, but not-guilty ⇏ innocent.
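
To make the set logic concrete, here is a minimal sketch in Python (the string labels are just stand-ins for the three verdicts above):

```python
# The universe of possible verdicts.
verdicts = {"uncertain", "guilty", "innocent"}

# not-guilty is the complement of guilty within that universe.
not_guilty = verdicts - {"guilty"}  # {"uncertain", "innocent"}

# innocent ⇒ not-guilty:
assert "innocent" in not_guilty

# but not-guilty ⇏ innocent: "uncertain" is also a not-guilty verdict.
assert "uncertain" in not_guilty
```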

When O. J. Simpson was acquitted, that doesn't mean he was found innocent; it means the prosecution could not prove his guilt beyond reasonable doubt. He was found not-guilty, which is not the same as innocent. It very well could be that the jury found the truth of the matter uncertain.

This notion has implications in many real-life scenarios where people try to shift the burden of proof onto you for rejecting an unsubstantiated claim. They wrongly assume you are claiming their claim is false (equivalent to innocent), when in truth all you are doing is staying in the default position (uncertain).

Rejecting the claim that a god exists is not the same as claiming a god doesn't exist: it doesn't require a burden of proof because it's the default position. Agnosticism is the default position. The burden of proof is on the people making the claim.

A Bayesian would say that beliefs have continuous degrees, expressible on a scale from 0% to 100%.

I'm not overly familiar with the Bayesian way of thinking. I have seen it expressed very often in The Motte and similar circles, but I don't see why anyone would conclude that this is a valid way of reasoning, especially when it comes to beliefs. I do understand Bayes' theorem, and I understand the concept of updating a probability; what I don't understand is why anyone would jump to conclusions based on that probability.

Let's say that through a process of Bayesian updating I arrive at an 83% probability of success. Should I jump the gun? That to me is not nearly enough information.

Now let's say that if I "win" I get $100, and if I "lose" I pay $100. Now I have a bit more information, and I would say this bet is in my favor. But if we take the odds as 5/6 (≈ 83%) and adjust the numbers so that if I lose I pay $500, it turns out I don't gain anything by participating in this bet; the expected gain exactly cancels the expected loss: ((5/6) × 100) / ((1/6) × 500) = 1.
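
A quick check of that arithmetic (a minimal sketch, taking the 83% above as the die-roll odds 5/6):

```python
p_win = 5 / 6  # ≈ 83% probability of success

# Bet 1: win $100, lose $100.
ev1 = p_win * 100 - (1 - p_win) * 100
print(ev1)  # ≈ 66.67: the bet is in my favor

# Bet 2: win $100, lose $500.
ev2 = p_win * 100 - (1 - p_win) * 500
print(ev2)  # ≈ 0 (up to floating-point noise): I gain nothing by playing
```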

Even worse: let's say that if I win I get $100, but if I lose I get a bullet in my brain. I'm literally playing Russian roulette.

83% tells me absolutely nothing.

Real actions in real life are not percentages. They are: do you do it or not? And: how much are you willing to risk?

You can't say I'm 60% certain my wife is faithful, so I'm going to 40% divorce her. Either you believe something, or you don't. Period.

Even worse is the concept of the default position in Bayesian thinking, which as far as I understand is 50%.

Edit: I mean the probability that the next coin toss is going to land heads is 50%.

So starting off, if I don't know whether a coin is fair or not, I would assume it is. If I throw the coin 100 times and 50 of those times it lands heads, the final percentage is 50%. If I throw the coin 1,000,000 times and 500,000 of those times it lands heads, it's still 50%, so I have gained zero information. This does not map to the reality I live in at all.

My pants require at least two numbers to be measured properly; surely I can manage two numbers for a belief. So let's say before I have any evidence I believe a coin is fair 50%±50 (no idea); after throwing it a million times I would guess it's about 50%±0.01 (I'm pretty sure it's fair).
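
For what it's worth, this is close to what a Bayesian would track with a full posterior rather than a single number. A minimal sketch, assuming a uniform Beta(1, 1) prior (the prior is my choice for illustration, not something from the thread):

```python
import math

def beta_mean_std(heads, tails):
    """Mean and standard deviation of a Beta(1 + heads, 1 + tails) posterior."""
    a, b = 1 + heads, 1 + tails
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, math.sqrt(var)

print(beta_mean_std(0, 0))              # (0.5, ≈0.29): 50% ± 29, no idea
print(beta_mean_std(50, 50))            # (0.5, ≈0.05): 50% ± 5
print(beta_mean_std(500_000, 500_000))  # (0.5, ≈0.0005): 50% ± 0.05, pretty sure
```

The point estimate stays at 50%, but the spread collapses, so on this picture a million flips does gain information.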

So no, I'm not sold on this Bayesian idea of a continuous belief. I can't divorce my wife 40%, or blow my brains out 17%. In the real world I have to decide whether I roll the dice or not.

Real actions in real life are not percentages. They are: do you do it or not? And: how much are you willing to risk?

In economic terms, what you do is take your Bayesian beliefs and multiply each probability by the utility gained or lost in each state. Then choose whichever course of action gives the most utility in expected value.

Say a lottery gave you a 99% chance of gaining a dollar, but a 1% chance of losing a thousand dollars: that would be a bad bet. But one that gave you a thousand dollars at a 1% chance and lost you a dollar at a 99% chance would be a good bet.
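
Plugging in those numbers (a quick sketch of the expected-value calculation):

```python
# Lottery A: 99% chance of +$1, 1% chance of -$1000.
ev_a = 0.99 * 1 + 0.01 * (-1000)
print(ev_a)  # -9.01: bad bet

# Lottery B: 1% chance of +$1000, 99% chance of -$1.
ev_b = 0.01 * 1000 + 0.99 * (-1)
print(ev_b)  # 9.01: good bet
```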

Beliefs about the world and the actions we take on those beliefs are somewhat orthogonal. You need to multiply the probability by the expected benefits or losses. But those gains or losses don't change our underlying beliefs about what is likely true or not.

In economic terms, what you do is take your Bayesian beliefs and multiply each probability by the utility gained or lost in each state.

I know how expected value works. But this confirms what I said: a single percentage cannot tell me what I should believe.

Also, this still doesn't answer my scenario. Is the next toss of a coin going to land heads given that in previous instances there have been 50 heads / 50 tails? How about 0 heads / 0 tails?

I know there's a difference, but Bayesians assume they are the same.

I know how expected value works. But this confirms what I said: a single percentage cannot tell me what I should believe.

The single value is just the point estimate of your belief. That belief also has a distribution over possible states, with each state having its own percentage attached to it.

Also, this still doesn't answer my scenario. Is the next toss of a coin going to land heads given that in previous instances there have been 50 heads / 50 tails? How about 0 heads / 0 tails?

The more times you flip a coin the more concentrated your probability distribution becomes around that coin being actually fair.

You seem to believe Bayesians only care about the point estimate and not the whole probability distribution. I don't think you disagree with Bayesianism so much as misunderstand what it is.

The single value is just the point estimate of your belief.

There is no "point estimate" of my belief because I don't believe anything.

You are trying to pinpoint my belief on a continuum, or determine it with a probability function, but you can't, because I don't have any belief.

You seem to believe Bayesians only care about the point estimate and not the whole probability distribution.

Do you have any source for that? Do you have any source that explains the difference between a coin flip with 0/0 priors vs 50/50?

There are two separate questions being talked about here, each with its own probability:

  1. Is the coin fair?

  2. What is the likelihood the next flip will be heads/tails?

If a Bayesian starts with no reason for a prior belief that the coin is biased in a particular direction, then their prior probability for the next flip being heads will be 50% (given that any uncertainty that the coin is biased towards heads is equally balanced by the possibility that it is biased towards tails).

But their prior belief that the coin is fair may be at 90%.

If you flip 1000 times and it comes up 500 heads and 500 tails, then perhaps your belief the next flip is heads is still at 50%, but your belief the coin is fair has gone up to 99.9%.
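
A minimal sketch of both numbers at once; the 90% prior and the single biased alternative (p = 0.6) are assumptions chosen only to make the example computable:

```python
import math

heads, tails = 500, 500

# Two hypotheses: the coin is fair (p = 0.5) or biased towards heads (p = 0.6),
# with prior beliefs of 90% and 10% respectively.
log_fair = math.log(0.90) + heads * math.log(0.5) + tails * math.log(0.5)
log_biased = math.log(0.10) + heads * math.log(0.6) + tails * math.log(0.4)

# Posterior probability the coin is fair (normalised in log space for stability).
m = max(log_fair, log_biased)
p_fair = math.exp(log_fair - m) / (math.exp(log_fair - m) + math.exp(log_biased - m))
print(p_fair)  # ≈ 0.9999999998: belief in fairness shoots up

# The predictive probability that the next flip is heads barely moves.
p_next_heads = p_fair * 0.5 + (1 - p_fair) * 0.6
print(p_next_heads)  # ≈ 0.5
```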

If a Bayesian starts with no reason for a prior belief that the coin is biased in a particular direction, then their prior probability for the next flip being heads will be 50%

If you flip 1000 times and it comes up 500 heads and 500 tails, then perhaps your belief the next flip is heads is still at 50%

That is precisely what I am saying.