
not-guilty is not the same as innocent

felipec.substack.com

In many discussions I find myself pulled back to the distinction between not-guilty and innocent as a way to demonstrate how the burden of proof works and what the true default position should be in any given argument. Many people have no problem seeing the distinction, but for some reason many intelligent people don't see it.

In this article I explain why the distinction exists and why it matters, particularly in real-life scenarios where people try to shift the burden of proof.

Essentially, in my view the universe we are talking about is {uncertain, guilty, innocent}, therefore not-guilty is the complement guilty′, which is {uncertain, innocent}. Therefore innocent ⇒ not-guilty, but not-guilty ⇏ innocent.
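To make the set logic concrete, here is a minimal sketch in Python (the string labels are just illustrative; any three distinct states would do):

    # The universe of verdict states described above.
    UNIVERSE = {"uncertain", "guilty", "innocent"}

    guilty = {"guilty"}
    not_guilty = UNIVERSE - guilty   # the complement guilty′ = {uncertain, innocent}
    innocent = {"innocent"}

    # innocent ⇒ not-guilty: innocent is a subset of not-guilty.
    assert innocent <= not_guilty

    # not-guilty ⇏ innocent: not-guilty also contains "uncertain".
    assert not (not_guilty <= innocent)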

When O. J. Simpson was acquitted, that doesn't mean he was found innocent; it means the prosecution could not prove his guilt beyond a reasonable doubt. He was found not-guilty, which is not the same as innocent. It very well could be that the jury found the truth of the matter uncertain.

This notion has implications in many real-life scenarios where people try to shift the burden of proof onto you for rejecting an unsubstantiated claim. They wrongly assume you are claiming their claim is false (equivalent to innocent), when in truth all you are doing is staying in the default position (uncertain).

Rejecting the claim that a god exists is not the same as claiming a god doesn't exist: it doesn't require a burden of proof because it's the default position. Agnosticism is the default position. The burden of proof is on the people making the claim.


Me: They do learn. They use many numbers to compute.

They don't. The probability that the next coin flip is going to land heads is the same whether the record is 0/0, 50/50, or 5000/5000: it's 0.5. It does not get updated.

Me: They do, the probability is the uncertainty of the event.

No. It's not. p=0.5 is not the uncertainty.

You: 50%±20% is analogous to saying "blue" whereas saying 50% is analogous to saying "sky blue".

I didn't say that.

Me: Not if probability means uncertainty.

Which is not the case.

Me: It depends on the question.

There is no other question. I am asking a specific question, and the answer is p=0.5, there's no "it depends".

p=0.5 is the probability the next coin flip is going to land heads. Period.

I'm going to calculate the 95% confidence intervals for n heads and n tails so there's no confusion about "the question":

  • 0/0: 0.5±∞

  • 5/5: 0.5±0.310

  • 50/50: 0.5±0.098

  • 5000/5000: 0.5±0.010

It should be clear now that there's no ambiguity about "the question". The answer for Bayesians is p=0.5, and that answer doesn't encode the uncertainty at all.
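For reference, a minimal sketch of how those half-widths can be computed, assuming the normal-approximation (Wald) interval; other interval methods give similar but not identical numbers, and the 0/0 case is skipped because with no data the interval is undefined:

    import math

    def ci_halfwidth(heads: int, tails: int, z: float = 1.96) -> float:
        """95% normal-approximation half-width for the observed proportion."""
        n = heads + tails
        p = heads / n
        return z * math.sqrt(p * (1 - p) / n)

    for k in (5, 50, 5000):
        print(f"{k}/{k}: 0.5±{ci_halfwidth(k, k):.3f}")
    # 5/5: 0.5±0.310
    # 50/50: 0.5±0.098
    # 5000/5000: 0.5±0.010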

They don't. The probability that the next coin flip is going to land heads is the same: 0/0, 50/50, 5000/5000 is 0.5. It does not get updated.

Uff, I even told you how it's done. It's like I just pressed "new chat" on ChatGPT. Re-read, or go Google "Bayesian inference coin flipping". It doesn't get more basic than that. I'm moving on, there's no progress to be made.

Uff, I even told you how it's done.

Show me step by step, I'll show you where you are wrong.

Say p∽beta(1, 1). Got 50/50 heads? Apply bayes rule, get posterior p∽beta(51,1), so next toss prob of heads went from 50% to 51/52 ~ 98%

Apply bayes rule, get posterior p∽beta(51,1)

Wrong. It's beta(51,51). It's beta(heads+1,tails+1).

I understood 50/50 to mean 50 heads out of 50 attempts.
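For reference, a minimal sketch of the conjugate beta–binomial update being described, assuming the uniform beta(1, 1) prior above (the function names are just illustrative):

    def posterior(heads: int, tails: int, a: float = 1.0, b: float = 1.0):
        # Conjugate update: a beta(a, b) prior becomes beta(a + heads, b + tails).
        return a + heads, b + tails

    def prob_next_heads(a: float, b: float) -> float:
        # Predictive probability of heads on the next toss: the mean of beta(a, b).
        return a / (a + b)

    # 50 heads and 50 tails out of 100 tosses: beta(51, 51), next-toss p = 0.5
    print(prob_next_heads(*posterior(50, 50)))   # 0.5

    # 50 heads out of 50 tosses: beta(51, 1), next-toss p = 51/52 ≈ 0.98
    print(prob_next_heads(*posterior(50, 0)))    # 0.9807...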

You said: "it's not just about the answer given, it's about how the answer is encoded in your brain."

Good. If it's about their brains, it went from beta(1, 1) → ... → beta(51, 51). They learned.

If it were just about the answer (it's not), then even your improbable hypothetical of exactly 50 heads out of 100 tosses fails, since after every odd number of tosses the answer is not 50%. But hey, you can always cherry-pick further and stipulate that they clone the coin and throw it 100 times at once. And you'll have shown that they are able to not learn, for a weird definition of learning that only cares about changes in the answer to a specific set of different-but-similar questions (1st-toss outcome vs. 100th-toss outcome).

Good. If it's about their brains, it went from beta(1, 1) → ... → beta(51, 51). They learned.

No. A Bayesian doesn't answer beta(51, 51); he answers 0.5.

If it's about their brains and not just about the answer given...

I already explained how the encoding of the answer matters. If in 2021 they arrived at an answer of p=0.5, by 2023 it won't matter what state their brains were in when they arrived at that answer, because they've already forgotten. Brain states are not permanent.
