
not-guilty is not the same as innocent

felipec.substack.com

In many discussions I'm pulled back to the distinction between not-guilty and innocent as a way to demonstrate how the burden of proof works and what the true default position should be in any given argument. Many people have no trouble seeing the distinction, yet many otherwise intelligent people somehow don't.

In this article I explain why the distinction exists and why it matters, in particular in real-life scenarios where people try to shift the burden of proof.

Essentially, in my view the universe we are talking about is {uncertain, guilty, innocent}, so not-guilty is guilty′, the complement of guilty, which is {uncertain, innocent}. Therefore innocent ⇒ not-guilty, but not-guilty ⇏ innocent.
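To make the set logic concrete, here is a minimal sketch in Python (the variable names are mine, purely for illustration):

```python
# Model the three-state universe as plain Python sets.
UNIVERSE = {"uncertain", "guilty", "innocent"}

guilty = {"guilty"}
not_guilty = UNIVERSE - guilty   # the complement: {"uncertain", "innocent"}
innocent = {"innocent"}

# innocent ⇒ not-guilty: innocent is a subset of not-guilty
assert innocent <= not_guilty

# not-guilty ⇏ innocent: "uncertain" is not-guilty but not innocent
assert not (not_guilty <= innocent)
```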

When O. J. Simpson was acquitted, it didn't mean he was found innocent; it meant the prosecution could not prove his guilt beyond a reasonable doubt. He was found not-guilty, which is not the same as innocent. It may well be that the jury found the truth of the matter uncertain.

This notion has implications in many real-life scenarios where people try to shift the burden of proof onto you for rejecting an unsubstantiated claim. They wrongly assume you are claiming their claim is false (the equivalent of innocent), when in truth all you are doing is staying in the default position (uncertain).

Rejecting the claim that a god exists is not the same as claiming a god doesn't exist: it doesn't carry a burden of proof because it's the default position. Agnosticism is the default position. The burden of proof is on the people making the claim.


Real actions in real life are not percentages. They come down to: do you do it or not? And how much are you willing to risk?

In economic terms what you do is take your Bayesian beliefs and multiply each probability by the utility gained or lost in each state. Then choose whichever course of action gives the most utility in expected value.

So a lottery that gave you a 99% chance of gaining a dollar but a 1% chance of losing a thousand dollars would be a bad bet, while one that gave you a thousand dollars at a 1% chance and lost you a dollar at a 99% chance would be a good bet.
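To spell out the arithmetic (a quick sketch; the helper function is just illustrative):

```python
def expected_value(outcomes):
    """outcomes is a list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

bad_bet = [(0.99, 1), (0.01, -1000)]    # EV = 0.99 - 10 = -9.01
good_bet = [(0.01, 1000), (0.99, -1)]   # EV = 10 - 0.99 = +9.01

print(expected_value(bad_bet))   # ≈ -9.01: lose ~$9 per play on average
print(expected_value(good_bet))  # ≈ +9.01: gain ~$9 per play on average
```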

Beliefs about the world and the actions we take on those beliefs are somewhat orthogonal. You need to multiply the probabilities by the benefits or losses, but those gains or losses don't change our underlying beliefs about what is likely true or not.

Expected value is not everything. For example, suppose you play the following game: you choose a value n, then with probability 1/n you win n^2 dollars, and with probability 1 − 1/n you pay the other player n/2 dollars. Your expected gain is (n + 1)/2, approximately n/2 when n is large. You are playing against a billionaire. Is it really more rational to choose n = 10000 than, say, n = 1000 or n = 2? More generally, does it make sense to choose n = 2^256 even though the other player can afford to pay if you win?
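A quick Monte Carlo sketch of this game (illustrative code, not part of the original comment) shows the tension: the expected value grows with n, but so does the fraction of rounds you simply lose, and for large n the sample mean becomes wildly unstable:

```python
import random

def play(n):
    """One round: win n**2 dollars with probability 1/n, else pay n/2."""
    return n**2 if random.random() < 1 / n else -n / 2

def simulate(n, rounds=100_000):
    results = [play(n) for _ in range(rounds)]
    mean = sum(results) / rounds
    losing = sum(r < 0 for r in results) / rounds
    return mean, losing

for n in (2, 1000, 10_000):
    mean, losing = simulate(n)
    print(f"n={n}: sample mean {mean:+.1f} "
          f"(theory {(n + 1) / 2:+.1f}), losing rounds {losing:.1%}")
```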

In economic terms what you do is take your Bayesian beliefs and multiply each probability by the utility gained or lost in each state.

I know how expected value works. But this confirms what I said: a single percentage cannot tell me what I should believe.

Also, this still doesn't answer my scenario. Is the next toss of a coin going to land heads given that in previous instances there have been 50 heads / 50 tails? How about 0 heads / 0 tails?

I know there's a difference, but Bayesians assume they are the same.

I know how expected value works. But this confirms what I said: a single percentage cannot tell me what I should believe.

The single value is just the point estimate of your belief. That belief also has a distribution over possible states, with each state having its own percentage attached to it.

Also, this still doesn't answer my scenario. Is the next toss of a coin going to land heads given that in previous instances there have been 50 heads / 50 tails? How about 0 heads / 0 tails?

The more times you flip a coin, the more concentrated your probability distribution becomes around that coin actually being fair.
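One way to picture this (a sketch assuming a uniform Beta(1, 1) prior over the coin's heads-probability, so h heads and t tails give a Beta(1 + h, 1 + t) posterior):

```python
from scipy import stats

prior = stats.beta(1, 1)                  # 0 heads / 0 tails: flat belief
posterior = stats.beta(1 + 50, 1 + 50)    # after 50 heads / 50 tails

# Both give the same point estimate for the next flip...
print(prior.mean(), posterior.mean())     # 0.5  0.5
# ...but the spread of belief is very different:
print(prior.std(), posterior.std())       # ~0.289  ~0.049
```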

You seem to believe Bayesians only care about the point estimate and not the whole probability distribution. I don't think you disagree with Bayesianism so much as misunderstand what it is.

The single value is just the point estimate of your belief.

There is no "point estimate" of my belief because I don't believe anything.

You are trying to pinpoint my belief on a continuum, or determine it with a probability function, but you can't, because I don't have any belief.

You seem to believe Bayesians only care about the point estimate and not the whole probability distribution.

Do you have any source for that? Do you have any source that explains the difference between a coin flip with 0/0 priors vs 50/50?

There are two separate questions being talked about here, each with its own probability:

  1. Is the coin fair?

  2. What is the likelihood the next flip will be heads/tails?

If a Bayesian starts with no prior reason to believe the coin is biased in a particular direction, then their prior probability for the next flip being heads will be 50% (since any possibility that the coin is biased toward heads is balanced by the equal possibility that it is biased toward tails).

But their prior belief the coin is fair may be at 90%.

If you flip it 1000 times and it comes up 500 heads and 500 tails, then perhaps your belief that the next flip is heads is still at 50%, but your belief that the coin is fair has gone up to 99.9%.
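Here is a toy version of that calculation (the 0.9 prior on fairness and the uniform prior over biased coins are illustrative assumptions, not the only possible choices):

```python
from math import exp, lgamma, log

def analyze(heads, tails, prior_fair=0.9):
    n = heads + tails
    # Question 1: is the coin fair?
    # P(data | fair) = 0.5^n; P(data | biased, p ~ Uniform) = Beta(h+1, t+1)
    log_like_fair = n * log(0.5)
    log_like_biased = lgamma(heads + 1) + lgamma(tails + 1) - lgamma(n + 2)
    odds_fair = (prior_fair / (1 - prior_fair)) * exp(log_like_fair - log_like_biased)
    p_fair = odds_fair / (1 + odds_fair)
    # Question 2: will the next flip be heads?
    p_heads = p_fair * 0.5 + (1 - p_fair) * (heads + 1) / (n + 2)
    return p_fair, p_heads

print(analyze(0, 0))      # (0.9, 0.5): just the priors
print(analyze(500, 500))  # (~0.996, 0.5): same prediction, far more confidence
```

Note the two numbers move independently: the prediction for the next flip stays at 50% while the confidence the coin is fair climbs with the evidence.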

If a Bayesian starts with no prior reason to believe the coin is biased in a particular direction, then their prior probability for the next flip being heads will be 50%

If you flip it 1000 times and it comes up 500 heads and 500 tails, then perhaps your belief that the next flip is heads is still at 50%

That is precisely what I am saying.