2+2 = not what you think

felipec.substack.com

Changing someone's mind is very difficult; that's why I like puzzles most people get wrong: to try to open their mind. Challenging the claim that 2+2 is unequivocally 4 is one of my favorites to get people to reconsider what they think is true with 100% certainty.


There is a difference between “you should have lingering doubt in the face of certainty that you know exactly what is trying to be communicated” and “you should have doubt about things you absolutely know are true”.

“I purposefully did not mention I was thinking about modulo math and let you assume I meant the common notion of + “ doesn’t really convince me of anything except that people disagree about what it means to be dishonest.
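(For concreteness, here is one way the "modulo math" reading could look, sketched in Lean; the modulus 3 and the snippet itself are my own illustration, not anything stated in the article.)

    -- Ordinary natural-number arithmetic: 2 + 2 is 4.
    example : 2 + 2 = 4 := by decide

    -- Read modulo 3 (an arbitrary illustrative modulus), the same "2 + 2" comes out as 1.
    example : (2 + 2) % 3 = 1 := by decide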

I don't need to be thinking about modular arithmetic to doubt 2+2=4; I could doubt it without having a good reason to.

And as I explained in the article, Bertrand Russell doubted something much more fundamental, 1+1=2, wrote extensively about it, and what he wrote is considered serious and important work on the foundations of mathematics.

Do you think Bertrand Russell was "dishonest" for asking people to suspend their belief?

I think you have a fundamental misunderstanding of what Bertrand Russell was doing when he proved 1+1=2. From an earlier work of his, which effectively turned into a preface to the Principia Mathematica:

The present work has two main objects. One of these, the proof that all pure mathematics deals exclusively with concepts definable in terms of a very small number of fundamental concepts, and that all its propositions are deducible from a very small number of fundamental logical principles, is undertaken in Parts II–VII of this work, and will be established by strict symbolic reasoning in Volume II.

The proof was not to dispel doubt about the statement 1+1=2, but to dispel doubt about the system of formal logic and axioms that he was using while constructing that proof. "1+1=2" was not a conundrum or a question to be answered, but a medal or trophy to hang on the mantle of mathematical logicism; much in the same way that the point of "coding Hello World in assembly" is not "coding Hello World in assembly" but "coding Hello World in assembly."

Russell was showing that you could lower the "basement" of mathematics and consider it as starting from another foundation deeper down, from which you could construct all mathematical knowledge, and to do that he had to build towards mathematics where it already stood.

(Then Kurt Gödel came along and said "Nice logical system you've built there, seems very complete, shame if someone were to build a paradox in it...")

I think you have a fundamental misunderstanding of what Bertrand Russell was doing when he proved 1+1=2

No, I don't. In mathematics the word "proof" has a very precise meaning, and anything without a "proof" is held as tentative (i.e. not necessarily true), for example a conjecture.

This entirely depends on the set of axioms you choose as a foundation, and you certainly could choose 1+1=2 as one of those axioms, in which case it's an assumption that doesn't need to be substantiated. But if you get rid of that axiom, then 1+1=2 is held as tentative and thus lacking proof.

much in the same way that the point of "coding Hello World in assembly" is not "coding Hello World in assembly" but "coding Hello World in assembly."

You are making a very obvious assumption there.

Russell was showing that you could lower the "basement" of mathematics and consider it as starting from another foundation deeper down, from which you could construct all mathematical knowledge, and to do that he had to build towards mathematics where it already stood.

I know.

Another way to think about it is that he tried to refactor the 1+1=2 axiom into more fundamental axioms. But this work presupposes that an axiomatic system that doesn't have 1+1=2 as an axiom is tenable. If such a system exists (which I think Bertrand Russell pretty much proved), that means that 1+1=2 does not need to be assumed to be true; it can be inferred.
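To make that concrete, here is a toy sketch in Lean of what "inferring rather than assuming" looks like; the names N, add, one, and two are made up for illustration, and this is of course nothing like Russell's actual construction:

    -- Toy definitions: the natural numbers and addition built from scratch, with no arithmetic axioms.
    inductive N where
      | zero : N
      | succ : N → N

    def add : N → N → N
      | n, N.zero   => n
      | n, N.succ m => N.succ (add n m)

    def one : N := N.succ N.zero
    def two : N := N.succ one

    -- 1 + 1 = 2 is now a theorem rather than an axiom: it holds by unfolding the definitions.
    theorem one_plus_one : add one one = two := rfl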

I call "not assume" "doubt", but it doesn't matter what you call it, the fact is that to write Principia Mathematica Bertrand Russell had to not assume 1+1=2.

I call "not assume" "doubt", but it doesn't matter what you call it, the fact is that to write Principia Mathematica Bertrand Russell had to not assume 1+1=2.

It does matter what you call it, especially if you haven't explicitly defined what you mean when you use the term you're calling it by, because people will generally interpret you as using the most common meaning of that term. And we can see the communication issues that causes right here, because there are two relevant meanings of the word "assume" in this conversation and the word "doubt" is only a good antonym for one of them, so it looks like you're conflating those meanings, unintentionally or otherwise.

To assume(everyday) something means approximately to act as if that something were true, without feeling the need to personally verify it for oneself.

To assume(logic) something means to accept it as an axiom of your system (although potentially a provisional one) such that it can be used to construct further statements and the idea of "verifying" it doesn't make much sense.

Doubt is a reasonable word for "not assume(everyday)," though it's usually used in a stronger sense, but it's a much poorer fit for "not assume(logic)." The technique of proof by contradiction is entirely based on assuming(logic) something that one is showing to be false, i.e. that one does not assume(everyday).
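For example (a minimal Lean sketch of my own, not something anyone in this exchange wrote), to prove that 1 = 0 is false you take it on as a hypothesis, in the purely logical sense, precisely in order to refute it:

    example : ¬ (1 = 0) := by
      intro h   -- assume(logic) the statement 1 = 0 as hypothesis h, without assuming(everyday) it
      omega     -- linear arithmetic over the naturals derives False from h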

Russell himself is a good example of the inequivalence going the other direction. What would he have done if he had managed to prove 1+1=3 with his logical system? I can't be completely certain, but I don't think he'd have published it as a revolution in mathematical philosophy. More likely, he'd have gone over the proof looking for errors, and if he couldn't find any he'd start tinkering with the axioms themselves or the way in which they were identified with arithmetical statements to get them to a form which proved 1+1=2 instead, and if that failed he'd give them up as a foundation for mathematics, either with a grumbling "well I bet there's some other way it's possible even if I wasn't able to show it myself" or in an outright admission that primitive logic doesn't make a good model for math.

In other words, even though he didn't assume(logic) that 1+1=2, his assumption(everyday) that 1+1=2 would be so strong as to reverse all the logical implications he had been working on; a "proof" that 1+1 != 2 would instead be taken as a proof that the method he used to reach that conclusion was flawed. This is not a state of mind I would refer to as "doubt."

much in the same way that the point of "coding Hello World in assembly" is not "coding Hello World in assembly" but "coding Hello World in assembly."

You are making a very obvious assumption there.

Yes. I assumed that you have enough in common with me culturally to know what "Hello World" and "assembly" are in the context of coding, why "Hello World" is a nearly useless program in the vast majority of contexts, and that people frequently practice new programming languages by writing programs in them with little regard for the practical use of those programs; that you are intelligent enough to comprehend those kinds of comparative asides and familiar enough with conversational English to understand that loading them with caveats would draw too much focus away from the point they are supporting; and that you are here to have a constructive conversation instead of deliberately wasting people's time. If I'm wrong about any of those I will be happy to be corrected.

It does matter what you call it

I did not say it doesn't matter what I call it, I said it doesn't matter what you call it.

And it seems pretty clear to me you are being intentionally obtuse. The purpose of me communicating to you is that you understand what I mean; it doesn't matter how. For any given idea I have there's a set of words I could use to transmit that idea, and any word I use has multiple meanings, but as long as you pick the meaning that correctly matches the idea I want to transmit, we are communicating effectively. The "most common meaning" is completely irrelevant. The most common meaning of the word "get" is "to gain possession of", but if I say "do you get what I mean", I'm not using the most common meaning, and I don't have to.

I used multiple words, terms, and an explanation for you to understand what I meant, and if you understand it, I don't personally care what word you use to name that idea.

To assume(everyday) something means approximately to act as if that something were true, without feeling the need to personally verify it for oneself.

To assume(logic) something means to accept it as an axiom of your system (although potentially a provisional one) such that it can be used to construct further statements and the idea of "verifying" it doesn't make much sense.

I don't see any difference. If you "assume X" it means you hold X as true without any justification, evidence, verification, or inference.

In other words, even though he didn't assume(logic) that 1+1=2, his assumption(everyday) that 1+1=2 would be so strong as to reverse all the logical implications he had been working on

I disagree. Every day he saw evidence that 1+1=2, so it would be reasonable to believe (not assume) that this was always true. Additionally he saw no way it could not be true, but he was rational enough to know this was not a good reason to assume it was true, as this would have been an argument from incredulity fallacy.

Maybe he did assume that 1+1=2 in everyday life, but you cannot know that unless you could read his mind.

This is a converse error fallacy. If I assume a chair is sound, I will sit down without checking it; but if I sit down without checking it, that doesn't necessarily mean I assumed it was sound.
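The shape of that fallacy (affirming the consequent) can be written out explicitly; the following is a small Lean sketch of my own, just to make the structure visible: from "P implies Q" and "Q" you cannot conclude "P".

    -- The schema (P → Q) → Q → P is not valid: P := False, Q := True is a counterexample.
    example : ¬ (∀ (P Q : Prop), (P → Q) → Q → P) :=
      fun h => h False True (fun contra => contra.elim) True.intro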

In general rationalists try to not assume anything.


If I'm wrong about any of those I will be happy to be corrected.

I know Hello World is a nearly useless program in the vast majority of contexts, but not all, and I know that people frequently practice new programming languages by writing programs in them with little regard for the practical use of those programs, but not all people who write Hello World programs are practicing new programming languages.

You are assuming the general case. I can easily imagine somebody in the 1970s developing for a new hardware architecture, for which there are no other compilers available, trying to test that any software runs at all; in fact, I can even imagine somebody today doing that for a new hardware architecture like RISC-V.

And once again, the point of these examples is not "deliberately wasting people's time"; it's to show people they are making assumptions even if they can't possibly see how they could be making an assumption.

Every time I tell somebody that they are making an assumption they disagree, and every time I point out to them the assumption they were making they come up with a rationalization, like "you tricked me", or "that's very unlikely", or "everyone would have assumed the same". It's never "actually you are right, I didn't think about that".

I don't see any difference. If you "assume X" it means you hold X as true without any justification, evidence, verification, or inference.

As I've seen the term used outside of logic, it only requires a lack of effort towards verification. You can have justification, evidence, or inference, as long as they are simple enough and easily-enough available. For example, I would find nothing unusual in a drive-by reply to this line consisting of the following sentence: I assume you didn't read the post very thoroughly, then, because the paragraph immediately below where your quote ends contains a distinguishing case.


You are assuming the general case.

Ah! I see the false assumption was "that you are intelligent enough to comprehend those kinds of comparative asides and familiar enough with conversational English to understand that loading them with caveats would draw too much focus away from the point they are supporting." Asides of that type are implicitly restricted to the general case, because they are intended to quickly illustrate a point by way of rough analogy, rather than present a rigorous isomorphism.

I assume you didn't read the post very thoroughly, then, because the paragraph immediately below where your quote ends contains a distinguishing case.

This is an equivocation fallacy. You are using a different definition of "assume", in particular using it to mean exactly "suppose". In my view assuming and supposing are two different things, even in the colloquial sense.

I see the false assumption was "that you are intelligent enough to comprehend those kinds of comparative asides and familiar enough with conversational English to understand that loading them with caveats would draw too much focus away from the point they are supporting."

Wrong. I can comprehend the notion without accepting it. This is a converse error fallacy.

Asides of that type are implicitly restricted to the general case, because they are intended to quickly illustrate a point by way of rough analogy, rather than present a rigorous isomorphism.

This is obviously a cop-out. If you were aware that your claim applied only to the general case, but you merely did not make it explicit, then the moment I mentioned there was an assumption you would immediately have known what assumption I was talking about, because you were fully aware.

But you didn't know what assumption I was talking about, because you were not aware of the restriction. Now you want to pretend you always knew you were making an assumption; you merely didn't say so when I pointed it out, for some reason.

This is precisely what everyone does. Before, they say they aren't making an assumption, and after I point it out, suddenly they always knew. You did exactly what I said people do, and you went for one of the options I listed: "everyone would have assumed the same".

It seems like you don't, actually, understand what that comparative aside was doing, so let me restate it at more length, in different words, with the reasoning behind the various parts made more explicit.

I described a situation where a person generated object A by means of process B, but due to their circumstances the important part of their activity was process B, and object A was important mostly insofar as it allowed the engagement of process B. Since I judged this sort of process-driven dynamic may seem counterintuitive, I also decided to give an example that is clearly caused by similar considerations. Writing Hello World in a new language is a nearly prototypical instance of trivial output being used to verify that a process is being applied successfully. The choice of assembly further increased the relevance of "moderately experienced programmer checking that their build pipeline works and their understanding of fundamentals is correct".

In this context, the existence of the general case - and the fact that it is the typical example brought to mind by the description, as indicated by the name you selected - suffices to serve the purpose of the aside. I did not claim and did not need to claim anything about all instances of building Hello World in assembly; the idea that I was trying to is an assumption that you made.


Do you think Bertrand Russell was "dishonest" for asking people to suspend their belief?

No, merely exceedingly rigorous. He set out to prove 1+1=2, and after a lot of tedious work he indeed proved that 1+1=2 is in fact true, settling the debate, or rather confirming what everyone already knew. He didn't actually doubt it; he merely wanted to put it on a formal foundation, and he did.

He wasn't an engineer who was worried bridges would fall if everyone computed 1+1 incorrectly; he wasn't a politician who got challenged on his fiscal plan and needed to double-check his assumptions. He was a nerd who wanted clarity for its own sake, operating at the intersection between pure math and philosophy. That's the field where you would doubt 1+1=2, not because you actually doubt it, but because you expect insight from dispelling that doubt. It's the same level of abstraction as wondering whether you're actually a brain in a vat. In politics or engineering, you can't do that.

That's the field where you would doubt 1+1=2, not because you actually doubt it, but because you expect insight from dispelling that doubt.

It doesn't matter if Bertrand Russell personally doubted it or not; he acted as if it was rational to not believe with 100% certainty something which had not been proven yet, and it was.

The reason he attempted to dispel that doubt is that, absent that proof, it was reasonable to doubt.

It's the same level of abstraction as wondering whether you're actually a brain in a vat.

Which is a valid doubt in philosophy.

In politics or engineering, you can't do that.

You have to doubt in engineering, for the same reason you have to doubt in every field. Bridges have fallen because engineers did not doubt enough.

Not really. I can guarantee you that Russell used 1+1=2 when calculating his daily expenses even before he formally proved it. Had he failed at his attempt to prove it, he would have gone on believing and using it. I can guarantee you he didn't scold any colleagues for using 1+1=2 without proof.

He wanted a formal proof for its own sake, not because one was needed.

I can guarantee you that Russell used 1+1=2 when calculating his daily expenses even before he formally proved it.

I literally said "it doesn't matter if Bertrand Russell personally doubted it or not".

If I'm not 100% certain a particular chair is not broken, but I sit on it anyway, and you conclude that therefore I believe with 100% certainty that it wasn't broken, you are committing a converse error fallacy.

You cannot read minds, you cannot know why I sat on that chair, and assuming that you do know is an error in logic.

Even worse is to assume you do know why I checked the chair before sitting on it, and to assume it had nothing to do with my potential doubt.

Doubt is essential in all fields. 100% certainty is extremely dangerous. And I don't see you addressing this at all.

I literally said "it doesn't matter if Bertrand Russell personally doubted it or not".

No one doubted it, because it wasn't actually reasonable to doubt it. Russell wanted to formalize a foundation; he wanted to prove that arithmetic derived from logic, not that arithmetic was true.

Doubt is essential in all fields.

Not doubt about math or fundamental logic. That is only reasonable in philosophy. An engineer who doubts 1+1=2 will never build any bridges, and no bridges will collapse because an engineer assumed 1+1=2.

If you doubt the fundamentals, you're doing philosophy. If you want to get anything done, you need to stop doing philosophy. You need to choose some axioms, build a knowledge base and then get to work on questions that are actually in doubt.

100% certainty is extremely dangerous. And I don't see you addressing this at all.

Because right now a fallacious argument is being made for too little certainty, not too much. I'm addressing the bad arguments that are actually on the table.

Not doubt about math or fundamental logic.

No? So nobody in mathematics doubts the Zermelo–Fraenkel set theory axiomatic system?

An engineer who doubts 1+1=2 will never build any bridges

Who said an engineer should doubt 1+1=2?

No? So nobody in mathematics doubts the Zermelo–Fraenkel set theory axiomatic system?

Doubt about axioms is basically mathematical philosophy.

Who said an engineer should doubt 1+1=2?

So you agree doubt about everything is not reasonable in every field?


Based on his reputation and without reading what he wrote, no, I don’t think he was being dishonest. I assume he was doing some weird philosophy and never at any moment entertained the possibility that meat-and-potatoes real-life counting was in jeopardy. I don’t think he was going around saying “you should worry about reality and trusting your lying eyes because of some fancy math that you probably don’t need and that doesn’t apply to counting physical objects”.

Based on his reputation and without reading what he wrote, no, I don’t think he was being dishonest.

That's literally an argument from authority fallacy.

Plenty of philosophers have doubted even the most fundamental concepts of everything, including reality itself. Solipsism is a serious philosophical concept, which includes doubting that 1+1 is necessarily 2, and Bertrand Russell entertained that possibility.

It’s not a fallacy because I assume the content of his argument is not the content of your argument. I’m unable to comment on what he wrote. I have told you what I take issue with in your argument, and it isn’t the part where you say Bertrand Russell said it too.

It’s not a fallacy because I assume the content of his argument is not the content of your argument.

But you are assuming his argument is valid merely on the basis of his credentials.

And you are assuming my argument is invalid merely on the basis of my credentials.

That's a fallacy.

No, this is a bizarre reading of the situation. Goodbye