
felipec

unbelief

1 follower   follows 0 users  
joined 2022 November 04 19:55:17 UTC
Verified Email


User ID: 1796


Turns out the USA did blow up Nord Stream: How America Took Out The Nord Stream Pipeline.

It was obvious to anyone paying attention, but now it's pretty much confirmed.

Of course I already see the people married to the opposite conclusion trying to discredit the journalist (one of the most decorated and impactful journalists of all time) and his sources (anonymous, as if established publications didn't use anonymous sources).

  • -22

A Bayesian would say that beliefs have continuous degrees, expressible on a scale from 0% to 100%.

I'm not overly familiar with the Bayesian way of thinking. I have seen it expressed very often in The Motte and similar circles, but I don't see why anyone would conclude that this is a valid way of reasoning, especially when it comes to beliefs. I do understand Bayes' theorem, and I understand the concept of updating a probability; what I don't understand is why anyone would jump to conclusions based on that probability.

Let's say that through a process of Bayesian updating I arrive at an 83% probability of success. Should I jump the gun? That, to me, is not nearly enough information.

Now let's say that if I "win" I get $100, and if I "lose" I pay $100. Now I have a bit more information, and I would say this bet is in my favor. But if we adjust the numbers so that I pay $500 when I lose, it turns out I gain nothing by participating in this bet: the expected gain exactly cancels the expected loss, ((5 / 6) * 100) / ((1 / 6) * 500) = 1.
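The point that the same probability licenses different decisions depending on the stakes can be sketched with a quick expected-value calculation (treating the 83% as exactly 5/6, matching the odds above; the function name is mine):

```python
def expected_value(p, win_payoff, loss_payoff):
    """Average gain per play: p * win_payoff + (1 - p) * loss_payoff."""
    return p * win_payoff + (1 - p) * loss_payoff

p = 5 / 6  # the 83% success probability, taken as exact odds

print(expected_value(p, 100, -100))  # about +66.67 per play: favorable bet
print(expected_value(p, 100, -500))  # about 0: on average I gain nothing
```

Same probability, opposite conclusions: the payoffs, not the percentage alone, decide whether the bet is worth taking.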

Even worse: let's say that if I win I get $100, but if I lose I get a bullet in my brain. I'm literally playing Russian roulette.

83% tells me absolutely nothing.

Real actions in real life are not percentages, they are: do you do it or not? and: how much are you willing to risk?

You can't say "I'm 60% certain my wife is faithful, so I'm going to 40% divorce her". Either you believe something, or you don't. Period.

Even worse is the concept of the default position in Bayesian thinking, which, as far as I understand, is 50%.

Edit: I mean the probability that the next coin toss is going to land heads is 50%.

So starting off, if I don't know whether a coin is fair or not, I would assume it is. If I throw the coin 100 times and it lands heads 50 of those times, the final percentage is 50%. If I throw the coin 1,000,000 times and it lands heads 500,000 of those times, it's still 50%, so I have gained zero information. This does not map to the reality I live in at all.

My pants require at least two numbers to be measured properly; surely I can manage two numbers for a belief. So let's say before I have any evidence I believe a coin is fair with 50%±50% (no idea), and after throwing it a million times I would guess it's about 50%±0.01 (I'm pretty sure it's fair).
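That two-number summary is easy to compute; here is a minimal sketch using the binomial standard error (the exact width convention, e.g. one standard error versus a 95% interval, is my own choice, and the function name is hypothetical):

```python
import math

def estimate_bias(heads, flips):
    """Point estimate and standard error for a coin's heads probability."""
    if flips == 0:
        return 0.5, 0.5  # no data yet: 50% +/- 50%, i.e. no idea
    p = heads / flips
    se = math.sqrt(p * (1 - p) / flips)  # binomial standard error
    return p, se

print(estimate_bias(0, 0))                # (0.5, 0.5)
print(estimate_bias(50, 100))             # (0.5, 0.05)
print(estimate_bias(500_000, 1_000_000))  # (0.5, 0.0005)
```

The first number never moves, but the second shrinks with evidence, which is exactly the information a lone percentage throws away.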

So no, I'm not sold on this Bayesian idea of continuous belief. I can't 40% divorce my wife, or 17% blow my brains out. In the real world I have to decide whether I roll the dice or not.

"2+2 = 4" is still actually true in Z4.

But not in 𝐙/4𝐙 (integers modulo 4).

OK. I'm not a mathematician, I'm a programmer, but from what I can see the set {0,1,2,3} is isomorphic to ℤ/4ℤ, which means one can be mapped to the other and vice versa. The first element of ℤ/4ℤ corresponds to 0, but it is not 0; it's a coset. But the multiplicative group of integers modulo 4, (ℤ/4ℤ)*, is this isomorphic set, so it is {0,1,2,3} with integers as the members of the set. Correct?

Either way 2+2=0 can be true.

Only because 4=0.

So 2+2=4=0="not what you think". Therefore the claim of my post is true.
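For the record, the arithmetic being argued over is easy to check in code; a minimal sketch of addition modulo 4 (the names are mine):

```python
MOD = 4

def add_mod(a, b, m=MOD):
    """Addition in the integers modulo m: keep only the remainder."""
    return (a + b) % m

print(add_mod(2, 2))  # 0: in Z/4Z, 2 + 2 lands on the zero element
print(4 % MOD)        # 0: "4" and "0" name the same residue class mod 4
```

So in this arithmetic "2+2=4" and "2+2=0" pick out the same element, which is exactly what is in dispute above.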

But 0 is what we think, because 0 is 4.

Nobody thinks that 0 is 4.

"You" are less than 0.0001% of the population, so virtually nobody.

If so, well, a Bayesian wouldn't use just one number here either.

Do you have any source? Everyone I've debated says it's a single number: 50%.

This article in the Stanford Encyclopedia of Philosophy goes to great lengths to explain why the standard view of degrees of belief is limited, and proposes alternative views using imprecise probabilities: Imprecise Probabilities. It seems to confirm my belief that Bayesians consider only a single probability.

Did you just claim less than 0.0001% of people think 2+2=4?

No. That less than 0.0001% of people think 2+2=0?

Then as coins are flipped and you get results, this distribution of P gets updated and sooner or later it gets narrower around the real coin bias, just like you said it should happen.

Are you sure about that? Maybe you consider the distribution, and maybe some Bayesians do consider the distribution, but I've debated Scott Alexander, and I'm pretty sure he used a single number to arrive at the conclusion that doing something was rational.

I've been writing about uncertainty on my Substack, and I've received a substantial amount of pushback regarding established concepts such as: the burden of proof, not-guilty not being the same as innocent, and the null hypothesis implying uncertainty. Even ChatGPT seems to be confused about this.

I'm pretty certain that most people--even rationalists--do not factor in uncertainty by default, which is why I don't think Bayesians thoroughly consider the difference between 0/0, 50/50, or 500/500.
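One standard Bayesian treatment (not necessarily what anyone I've debated actually used) does distinguish those three cases: under a uniform Beta(1, 1) prior, heads/tails records of 0/0, 50/50, and 500/500 all give a posterior mean of 50%, but very different spreads. A sketch:

```python
import math

def beta_summary(heads, tails):
    """Posterior mean and standard deviation of the heads probability
    under a uniform Beta(1, 1) prior."""
    a, b = heads + 1, tails + 1
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, math.sqrt(var)

for h, t in [(0, 0), (50, 50), (500, 500)]:
    mean, sd = beta_summary(h, t)
    print(f"{h}/{t}: mean = {mean:.3f}, sd = {sd:.3f}")
```

All three means are 0.500, but the standard deviation drops from about 0.289 to about 0.016, so the full posterior carries the information that a lone "50%" discards.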

My point being that in Z/4Z, 2+2=0 and 2+2=4 are the same statement.

Which most people do not understand. Most people don't know what Z/4Z is, and most people don't know that more than one arithmetic exists.

You are ignoring my point that most people don't know that integers modulo n exist.

But the coin bias in reality is not true or false.

I'm not asking if the coin is biased, I'm asking if the next coin flip will land heads. It's a yes-or-no question that Bayesians would use a single number to answer.

So, when there's a binary event, like the Russia nuke question, a Bayesian says 6% probability, but a "burden-of-proofer" may say "I think the people that claim Russia will throw a nuke have the burden of proof"

No, I say "I don't know" (uncertain), which cannot be represented with a single probability number.

I don't need to be thinking about modular arithmetic to doubt 2+2=4, I could do it without having a good reason to doubt.

And I explained in the article that Bertrand Russell doubted something much more fundamental, 1+1=2, wrote extensively about it, and it's considered serious and important work on the foundations of mathematics.

Do you think Bertrand Russell was "dishonest" for asking people to suspend their belief?

Most people consider the notation of integer arithmetic to be unambiguous in a general context

But that is the point: most people make assumptions. In this particular case it's easy to see what assumption is made for people who do understand modular arithmetic, but that excludes the vast majority of people who don't.

The whole point of the article is to raise doubt about more complicated subjects which are not so easy to mathematically prove.

It merely makes 2+2=0 another representation of the same statement.

Do you believe that (2+2=4) and (2+2=0 (mod 4)) is "the same statement"?

I suppose your larger point is true, but not particularly meaningful.

Are you 100% certain of that?

So a statement that seems easy and clear to interpret can actually be misleading when your interlocutor is deliberately trying to confuse and deceive you by omitting key information?

This is a loaded-language claim, a rhetorical trick. You are intentionally adding the word "misleading" to prompt an emotional response.

Consider this exchange:

  1. If you don't denounce Russia's illegal war of aggression, that makes you a Putin apologist, that's as unequivocally true as 2+2=4

  2. Actually, 2+2=4 is not unequivocally true

My claim (2) is not "misleading", and I'm not "deliberately trying to confuse and deceive" anyone; it's the other person who made a false claim (1). My sole objective in bringing up this abstract algebra notion is to increase doubt about the original claim regarding Russia. The factoid 2+2=4 is not used by me as an end; it's used by somebody else as a means to an end. 2+2=4 is often used as a tool to demonstrate 100% certainty, and it can be dismantled.

Your loaded-language claim doesn't apply in this example. We can get rid of the loaded language and make a much fairer, more generous, and neutral claim:

"A statement that seems easy, clear to interpret, and is obviously 100% certain to be true can actually be not necessarily true when an unrealized assumption is present."

How is this more generous claim not correct?

  • -10

Can anybody who voted explain to me how the winning entry is superior to mine?

From what I can see this is what it said about intuition:

  • Grady Little may have made a decision based on intuition, Joe Maddon didn't

  • To improve intuition one must train

  • LBJ was intuitive, Obama wasn't

That's basically it.

This is what it didn't say:

  • What is intuition

  • What is the opposite of intuition

  • When is intuition helpful

  • When is intuition unhelpful

  • How complex intuition is

  • What intuition is comprised of

My essay at least attempted to answer these.

To me this is clear evidence of bias in this community.

And because Mottizens are very prone to committing converse error fallacies, I shall point out that this is not something specific to my essay: I also don't see how the winner is superior to this entry: Intuition in a Scientific Age, which also does attempt to answer some of the important questions, such as: what is intuition? I would also be interested in hearing why somebody who voted for the winner considered it superior to that one.

  • -12

If it doesn't feel redundant, add another layer until it does :P

But they don't do that, they give a single number. Whatever uncertainty they had at the beginning is encoded in the number 0.5.

Later on, when their decision turns out to be wrong, they claim it wasn't wrong, because they arrived at that number rationally; nobody would have arrived at a better number.

Still, I think I see your point in part. There is clearly some relevant information that's not being given in the answer if the answer to "will this fair coin land heads?

It's not just about how the answer is given; it's about how the answer is encoded in your brain.

If the answer to some question is "blue", it may not be entirely incorrect, but later on, when you are asked to recall a color, you might very well pick any blue color. On the other hand, if your answer was "sky blue", then you might pick a more accurate color.

I claim the correct answer should be 50%±50%, but Bayesians give a single answer: 50%, in which case my answer, "uncertain", is way better.

Do you believe people here pretend to be smarter than they are?

I've seen many people in The Motte claim something along the lines of "that's basic" as if only high-brow discussions were interesting, or as if they were the arbiters of what's "basic" and what's "advanced", or even as if they completely understood the "basic" notion.

It's almost as if the opposite of bike-shedding is sought: everyone claims they want to discuss the plans for a nuclear power plant (very complex), not the bicycle shed's materials, which are way too simple.

So everyone who aims to discuss the nuclear power plant plans is rewarded (even if nobody really understands them), and everyone who wants to talk about something everyone can understand is punished (nobody wants to talk about what they can easily understand).

(2+2=4 (mod 4)) and (2+2=0 (mod 4)) is the same statement.

That is not what I asked.

Finally, and most importantly, law in general and international law in particular is much less clearly defined and broadly agreed upon than simple arithmetic over the natural numbers.

This supports my argument. If I demonstrate that a rational agent should doubt something very "clearly defined" such as 2+2=4, then it logically follows that something much less clearly defined should be doubted as well.

if I say “Waffles are better than pancakes, that's as clear as the sky is blue”, would you start arguing that the sky isn't always blue?

Yes. I start with the claims that are easier to dismantle, because I know that people virtually never doubt their beliefs in real time. It would be very hard for me to convince that person that waffles are not necessarily better than pancakes, but it would be easy to dismantle the auxiliary claim.

This person may attempt to find another more unequivocally true auxiliary claim, but I would easily dismantle that too. And sooner or later this person would be forced to realize that it's not easy to find an unequivocally true claim. And if it's not easy to find an unequivocally true claim, perhaps the unequivocally true claim that waffles are better than pancakes is not so unequivocally true.

If a person says "Bob is as racist as Alice", and I show that Alice is not racist, then they say "OK, Bob is as racist as Mary", and I show Mary is not racist, then "OK, Bob is as racist as Linda", and Linda isn't racist either. Wouldn't it make sense to doubt whether or not Bob is actually racist?

Using metaphors to tackle deep philosophical problems isn't even fringe. The notion of a black swan is nowadays commonly used to explain that the fact that something has never happened before is not a valid reason to think it will never happen in the future. It tackles the deep philosophical problem of induction.

Instead of saying "as clear as the sky is blue", people in the past used to say "as impossible as a black swan". To say "actually, the fact that we haven't seen a black swan doesn't necessarily mean black swans don't exist" is not pedantry; it's in fact valid reasoning, a deep philosophical notion (the problem of induction), and something that should have made people doubt their 100% certainty about "impossible" events.

In Bayesianese, "50%+-50%"

But that's not Bayesian. That's the whole point. And you accepted they use a single number to answer.

For your coin example, your prior belief the coin is fair is most likely not 50%.

No, the probability that the next toss of a coin is going to land heads is 50%, regardless of whether the results have been 0/0, 50/50, or 500000/500000.

As for whether or not you divorce your wife, well that’s not Bayes Theorem that’s just how you choose to apply the beliefs you have.

My beliefs are binary: either I believe something or I don't. I believe everyone's beliefs are like that. But people who follow Bayesian thinking confuse certainty with belief.

It's a yes-or-no question:

Do you believe that (2+2=4) and (2+2=0 (mod 4)) are "the same statement"?

Conversely, if they say "Bob is as racist as Alice, because he's the author of the bobracial supremacy manifesto", pointing out Alice isn't racist just distracts from the point at hand. Yes, it's a bad metaphor, but the point stands.

Yes, but the premise of this line of thought is precisely the opposite: it's not easy to prove Bob isn't racist; on the other hand, it's extremely easy to prove Alice isn't racist.

I have refuted your argument that 2+2=4 is not unequivocally true, but I'm still willing to discuss the point you were trying to make without forcing you to come up with a new example.

But discussing is not accepting. You are arguing that Bob is a racist, but you are nowhere near accepting the possibility that he might not be.

You are not willing to accept that Alice might not be a racist, and Bob even less. Which proves my point.