
not-guilty is not the same as innocent

felipec.substack.com

In many discussions I'm pulled back to the distinction between not-guilty and innocent as a way to demonstrate how the burden of proof works and what the true default position should be in any given argument. Many people have no problem seeing the distinction, but for some reason many intelligent people don't.

In this article I explain why the distinction exists and why it matters, particularly in real-life scenarios where people try to shift the burden of proof.

Essentially, in my view the universe we are talking about is {uncertain, guilty, innocent}, therefore not-guilty is the complement guilty′, which is {uncertain, innocent}. Therefore innocent ⇒ not-guilty, but not-guilty ⇏ innocent.
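
To make the set relationship concrete, here is a minimal sketch in Python (purely illustrative, using sets for the three outcomes):

```python
# Minimal sketch of the verdict logic (illustrative only).
universe = {"uncertain", "guilty", "innocent"}

guilty = {"guilty"}
not_guilty = universe - guilty   # the complement of guilty: {"uncertain", "innocent"}
innocent = {"innocent"}

# innocent ⇒ not-guilty: every innocent outcome is also not-guilty.
assert innocent <= not_guilty

# not-guilty ⇏ innocent: "uncertain" is not-guilty without being innocent.
assert not (not_guilty <= innocent)
```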

When O. J. Simpson was acquitted, that doesn't mean he was found innocent; it means the prosecution could not prove his guilt beyond reasonable doubt. He was found not-guilty, which is not the same as innocent. It very well could be that the jury found the truth of the matter uncertain.

This notion has implications in many real-life scenarios where people try to shift the burden of proof onto you for rejecting an unsubstantiated claim. They wrongly assume you are claiming their claim is false (the equivalent of innocent), when in truth all you are doing is staying in the default position (uncertain).

Rejecting the claim that a god exists is not the same as claiming that a god doesn't exist: it doesn't require a burden of proof because it's the default position. Agnosticism is the default position. The burden of proof is on the people making the claim.


Say p ∼ beta(1, 1). Got 50/50 heads? Apply Bayes' rule, get posterior p ∼ beta(51, 1), so the next-toss probability of heads went from 50% to 51/52 ≈ 98%.

Apply Bayes' rule, get posterior p ∼ beta(51, 1)

Wrong. It's beta(51,51). It's beta(heads+1,tails+1).

I understood 50/50 to mean 50 heads out of 50 attempts.
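
Both readings of "50/50" follow from the same Beta-Binomial update; here is a small sketch in Python (the helper function is mine, purely illustrative):

```python
# Sketch of the two readings of "50/50" under a beta(1, 1) prior (illustrative).
# With a beta(a, b) prior and h heads / t tails observed, the posterior is
# beta(a + h, b + t), and the predictive probability of heads is its mean.

def predictive_heads(h, t, a=1, b=1):
    """Predictive P(heads) after observing h heads and t tails."""
    return (a + h) / (a + h + b + t)

# Reading 1: 50 heads out of 50 tosses -> posterior beta(51, 1)
print(predictive_heads(50, 0))    # 51/52 ~ 0.98

# Reading 2: 50 heads and 50 tails   -> posterior beta(51, 51)
print(predictive_heads(50, 50))   # 51/102 = 0.5
```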

You said: "it's not just about the answer that is given, it's about how the answer is encoded in your brain."

Good. If it's about their brains, it went from beta(1, 1) → ... → beta(51, 51). They learned.

If it were just about the answer (it's not), then even your improbable hypothetical of 50 heads out of 100 tosses fails, since after every odd number of tosses the answer is not 50%. But hey, you can always cherry-pick it further and stipulate that they clone the coin and throw it 100 times at once. And you'll have shown that they are able to not learn, for a weird definition of learning that only cares about changes in the answer to a specific set of different but similar questions (1st-toss outcome vs. 100th-toss outcome).
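
The "odd number of tosses" point follows from the same update rule; a small sketch (the alternating sequence is just one illustrative example):

```python
# Sketch of the "odd number of tosses" point (illustrative). With a beta(1, 1) prior,
# the predictive P(heads) after h heads and t tails is (1 + h) / (2 + h + t),
# which equals 0.5 only when h == t -- impossible after an odd number of tosses.
tosses = "HTHTHT"                  # one example sequence ending at 3 heads, 3 tails
h = t = 0
for n, toss in enumerate(tosses, start=1):
    h += toss == "H"
    t += toss == "T"
    print(n, (1 + h) / (2 + h + t))
# After tosses 1, 3, 5 the answer is 2/3, 3/5, 4/7 -- never exactly 0.5.
```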

Good. If it's about their brains, it went from beta(1, 1) → ... → beta(51, 51). They learned.

No. A Bayesian doesn't answer beta(51, 51); he answers 0.5.
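
As a point of arithmetic, 0.5 is the mean of both beta(1, 1) and beta(51, 51); the two distributions differ mainly in spread. A small sketch (illustrative; the helper function is mine):

```python
# Sketch comparing beta(1, 1) and beta(51, 51) (illustrative): same mean, very
# different spread, so the point answer 0.5 is unchanged even though the
# distribution over p has narrowed considerably.

def beta_mean_var(a, b):
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

print(beta_mean_var(1, 1))     # (0.5, ~0.083)  -- p could be almost anywhere in [0, 1]
print(beta_mean_var(51, 51))   # (0.5, ~0.0024) -- p tightly concentrated near 0.5
```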

If it's about their brains and not just about the answer given...

I already explained how the encoding of the answer matters. If in 2021 they arrived at an answer of p = 0.5, by 2023 it won't matter what state their brains were in when they arrived at that answer, because they have already forgotten. Brain states are not permanent.

Brain states are not permanent.

Yeah, wait long enough and worms start eating it.