iro84657

0 followers   follows 0 users
joined 2022 September 07 00:59:18 UTC
Verified Email
User ID: 906

No bio...

If a state legislature decided to ignore all votes for Trump when selecting their electors, then those voters might well have a case under Section 2 of the 14th Amendment. (Unless those voters' "participation in rebellion" could be decided by the states?) Of course, a state might find the constitutional penalty of losing electors superior to the possibility of a Trump victory, if the latter has any real chance of occurring at all.

A difference would still lie in whether Congress alone can disqualify a candidate without the involvement of the judiciary, given that they don't have the power to pass bills of attainder.

On October 25, 2020, I tried my hand at the prediction game, registering a prediction elsewhere:

Supposing that Joe Biden is unambiguously held by the mainstream media to have won the 2020 election, Donald Trump will accept his defeat by December 7, 2020, and will leave the White House on January 20, 2021, with 96% probability.

It had been clear by then that the election results would be a mess, but I'd been strongly convinced by the narrative that Trump would make a ruckus for a few weeks to appease his supporters, then lie low until he runs in 2024. Needless to say, I was very surprised when he kept contesting the results well past the Electoral College vote in December; I accepted its legitimacy as coming directly from the Constitution, and I'd thought Trump would similarly respect it. I suppose he simply isn't as much of a traditionalist as I'd judged him to be, given MAGA and all that.

Anyway, being disillusioned, I stopped keeping track of anything Trump-related after January 2021. But given that he apparently intends to run again, does anyone have any good, informative summaries of what he's been up to since then?

I have no idea what you're arguing or advocating for in the rest of your reply: something about how if the world has surprising aspects that could change everything, that's probably bad and a stressful situation to be in? I agree, but I'm still going to roll up my sleeves and try to reason and plan anyway.

Of course, that's what you do if you're sane, and I wouldn't suggest anything different. It's more just a feeling of frustration toward most people in these circles, that they hardly seem to find an iota of value in living in a world not full of surprises on a fundamental level. That is, if I had a choice between a fundamentally unsurprising world like the present one and a continually surprising world with [insert utopian characteristics], then I'd choose the former every time (well, as long as it meets a minimum standard of not everyone being constantly tortured or whatever); I feel like no utopian pleasures are worth the infinite risk such a world poses.

(And that goes back to the question of what is a utopia, and what is so good about it? Immortality? Growing the population as large as possible? Total freedom from physical want? Some impossibly amazing state of mind that we speculate is simply better in every way? I'm not entirely an anti-utopian Luddite, I acknowledge that such things might be nice, but they're far from making up for the inherent risk posed if it were even possible to implement them via magical means.)

As a corollary, I'd feel much worse about an AI apocalypse through known means than an AI apocalypse through magical means, since the former would at least have been our own fault for not properly securing the means of mass destruction.

My problem is really with your "there never was, and never will be" sentiment: I believe that only holds under the premise of the universe containing future surprises. I believe in fates far worse than death, but thankfully, in the unsurprising world that is our present one, they can't really be implemented at any kind of scale. A surprising world would be bound by no such limitations.

How about: "If a baby is so fragile that it can't take a punch, does it really deserve our preservation in the first place?"

Sorry to speculate about your mental state, but I suggest you try practicing stopping between "This is almost inevitable" and "Therefore it's a good thing".

Well, my framing was a bit deliberately hyperbolic; obviously, with all else equal, we should prefer not to all die. And this implies that we should be very careful about not expanding access to the known physically-possible means of mass murder, through AI or otherwise.

Perhaps a better way to say it is, if we end up in a future full of ubiquitous magic bullshit, then that inherently comes at a steep cost, regardless of the object-level situation of whether it saves or dooms us. Right now, we have a foundation of certainty about what we can expect never to happen: my phone can display words that hurt me, but it can't reach out and slap me in the face. Or, more importantly to me, those with the means of making my life a living hell have not the motive, and those few with the motive have not the means. So it's not the kind of situation I should spend time worrying about, except to protect myself by keeping the means far away from the latter group.

But if we were to take away our initial foundation of certainty, revealing it to be illusory, then we'd all turn out to have been utter fools to count on it, and we'd never be able to regain any true certainty again. We can implement a "permanent 'alignment' module or singleton government" all we want, but how can we really be sure that some hyper–Von Neumann or GPT-9000 somewhere won't find a totally-unanticipated way to accidentally make a Basilisk that breaks out of all the simulations and tortures everyone for an incomprehensible time? Not to even mention the possibility of being attacked by aliens having more magic bullshit than we do. If the fundamental limits of possibility can change even once, the powers that be can do absolutely nothing to stop them from changing again. There would be no sure way to preserve our "baby" from some future "punch".

That future of uncertainty is what I am afraid of. Thus my hyperbolic thought, that I don't get the appeal of living in such a fantastic world at all, if it takes away the certainty that we can never get back; I find such a state of affairs absolutely repulsive. Any of our expectations, present or future, would be predicated on the lie that anything is truly implausible.

As it happens, your latter point lines up with my own idle musings, to the effect of, "If our reality is truly so fragile that something as banal as an LLM can tear it asunder, then does it really deserve our preservation in the first place?" The seemingly impenetrable barrier between fact and fiction has held firm for all of human history so far, but if that barrier were ever to be broken, its current impenetrability would be revealed as an illusion. And if our reality isn't truly bound to any hard rules, then what's even the point of it all? Why must we keep up the charade of the limited human condition?

That's perhaps my greatest fear, even more so than the extinction of humanity by known means. If we could make a superintelligent AI that could invent magic bullshit at the drop of a hat, regardless of whether it creates a utopia or kills us all, it would mean that we already live in a universe full of secret magic bullshit. And in that case, all of our human successes, failures, and expectations are infinitely pedestrian in comparison.

In such a lawless world, the best anyone can do is have faith that there isn't any new and exciting magic bullshit that can be turned against them. All I can hope for is that we aren't the ones stuck in that situation. (Thus I set myself against most of the AI utopians, who would gladly accept any amount of magic bullshit to further the ideal society as they envision or otherwise anticipate it. To a lesser extent I also set myself against those seeking true immortality.) Though if that does turn out to be the kind of world we live in, I suppose I won't have much choice but to accept it and move on.

Hmm, how would you define "substantial" here? I'm also intensely skeptical of a Singularity or other fundamental change in the human condition, but I find it very plausible that LLMs could destroy the pseudonymous internet as we know it, by turning it into a spambot hell devoid of useful information. (I'm imagining all sorts of silly stuff like people returning to handwritten letters as a signal of authenticity.) Life would move on, but I'd certainly mourn the loss of the modern internet, for all its faults.

He literally said there was no possibility of X being true: "Do you accept the possibility that X may be true?" "No".

By X I suppose you refer to the statement "2 + 2 = 4 is not unequivocally true". Perhaps by the statement "it is possible that X is true" (which I'll call Y), you meant that "there exists a meaning of the statement X which is true". However, I believe he interpreted Y as something to the effect of, "Given the meaning M which I would ordinarily assign to the statement X, there exists a context in which M is true." It is entirely possible that the proposition he means by Y is unequivocally false, even though the proposition you mean by Y is unequivocally true: that is, he misinterpreted what you meant by Y.

In particular, it is my understanding that when you say X, you mean, "There exists a meaning of the statement '2 + 2 = 4' which is false." You demonstrate this in your original post, so that provides an example of a meaning of X which is true. But I believe that his meaning M of X is something to the effect of, "Given the meaning M´ which I would ordinarily assign to the statement '2 + 2 = 4' (i.e., a proposition about the integers or a compatible extension thereof), there exists a context in which M´ is false." Since the proposition "2 + 2 = 4" about the integers can be trivially proven true, he believes with certainty that M´ is unequivocally true, thus M is unequivocally false, thus "it is impossible that X is true" (by his own meaning).

(In fact, I still wouldn't say that his belief that M is false has probability 1, but it is about as close to 1 as it can get. It's just that to convince him that M is true, you'd need an even more trivial mathematical proof of ¬M´ which he can understand, and he believes with probability as-close-to-1-as-possible that such a counterproof does not exist, since otherwise his life is a lie and basically all of his reasoning is compromised.)


You are forgetting the context of this subthread. In this subthread we are not talking about what I mean; we are talking about the definition that one random stranger gave you, which I claimed goes contrary to your claim.

You claimed: «most people here were under the impression that by an "assumption" you meant a "strong supposition"».

In this subthread X is "strong supposition", it's your view that most people's definition of "assumption" is "strong supposition", you provided different examples of people you asked, and one of them gave you the exact opposite: that "supposition" was a "strong assumption". This is the opposite of what you claimed most people were under the impression of.

You keep forgetting the context of the claims you are making.

So be it. I'll grant that my claim there was made based on a hasty impression of the other comments, and I do not actually know for sure whether or not most people on this site inferred a meaning of your words precisely compatible with my earlier statement. But I did not make that claim for its own sake; rather, it was in service of my original argument. (In fact, most of what I've been saying has been intended to relate to my original argument, not to that particular claim. But I have not been at all clear about that; my apologies.)

Having thought about it a bit more, I'll defend a weaker position, which I believe is still sufficient for my original argument. Most people in general, when they hear someone say that a person "assumes" something, infer (in the absence of evidence otherwise) that what is most likely meant is that the person's state of mind about that thing lies within a particular set S, and S includes some states of mind where the person still has a bit of doubt about that thing.

Thus, if someone says a person "doesn't assume" something, most people would infer that what is most likely meant is that the person does not harbor any state of mind within S, and consequently, the person does not harbor any of the states of mind that are within S but include a level of doubt.

Would you say that by "not making assumptions", you specifically mean "not thinking things are true with zero possible doubt"? Because if so, then everyone whose inferred set S includes states of mind with nonzero doubt would have misinterpreted the message of your post, if they had not already found evidence of your actual meaning. Thus my real claim, that most people "aren't going to learn anything from your claims if you use your terminology without explaining it upfront" (which is an exaggeration: I mean that most people, just looking at your explanations in your post, are unlikely to learn what you apparently want them to learn).

I completely disagree with that statement. Most people cannot change their minds regardless of the evidence. In fact in the other discussion I'm having on this site of late the person is obviously holding a p=1 (zero room for doubt).

Perhaps they assign their belief a probability different than 1, but they don't consider your evidence very strong. But I can't say for certain, since I haven't seen the discussion in question. How do you know that your evidence is so strong that they would change their mind if they had any room for doubt?

That is what we are talking about: it's your view that most people ascribe the meaning of X, if a person ascribes the meaning opposite of X, that is opposite to your view.

Any term X has several possible meanings. When one says the term X, one generally has a particular meaning in mind. And when one hears the term X, one must determine which meaning the speaker is using, if one wishes to correctly understand what point is being made. Usually, one infers this from the surrounding text, alongside one's knowledge of which meanings are often used by other speakers in a similar context. But one can simultaneously infer one meaning in one speaker's words, infer another meaning in another speaker's words, and use an entirely different meaning in one's own words.

When you say that we should not "assume" something, it is my understanding that you mean that we should not think that something is true with zero possible doubt. It is also my understanding that you do not mean that we should never suppose something strongly with little evidence.

What I allege is that most people, when attempting to determine your meaning of "assume", do not rule out the latter meaning. And since most speakers, in most of their speech and writing, include the possibility of doubt in their meaning of "assume", most people are likely to incorrectly infer that you probably include the possibility of doubt in your meaning of "assume".

Therefore, when they determine your meaning of "not assume", they are likely to infer that you mean something closer to "not suppose something strongly with little evidence" than "not think that something is true with zero possible doubt". (It isn't relevant here whether they think either of "assume" or "suppose" is somewhat stronger than the other: what matters is that there exist certain states of mind including some level of doubt, and they incorrectly infer that by saying we should "not assume" things you mean that we should not hold any of those states of mind.)

I'm not saying that most people are unable to understand your terminology, or that your terminology is inherently wrong. I'm saying that most people aren't very familiar with your terms, and they're likely to infer meanings that are overly inclusive. This makes the inferred negations of your terms (e.g., "not assume") overly exclusive, which makes most people miss your point. Thus my original request that you clarify your terminology upfront.

MIT grad and alleged IQ of 180.

I wouldn't put much weight in that allegation. By definition, only 463 people in the world have an IQ that high, which would come out to about 19 people in the U.S. (not accounting for any population bias). I'd be surprised if there were fewer than 30 people in the U.S. we don't hear about who have greater general intelligence than John Sununu, or for that matter anyone else we do hear about. I suppose it's not impossible that an IQ test spat out that number, but I wouldn't trust any test result that far into the extreme end.
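For what it's worth, that rarity figure can be sanity-checked directly from the normal distribution. A quick sketch, assuming the conventional mean of 100 and standard deviation of 15 and round population figures (which may differ from whatever assumptions produced the 463 estimate):

```python
from math import erfc, sqrt

def people_above_iq(iq, population, mean=100.0, sd=15.0):
    """Expected count of people above a given IQ, assuming IQ ~ Normal(mean, sd)."""
    z = (iq - mean) / sd
    tail = 0.5 * erfc(z / sqrt(2))  # P(IQ > iq), the standard normal tail probability
    return population * tail

print(people_above_iq(180, 8.0e9))  # a few hundred worldwide
print(people_above_iq(180, 333e6))  # a dozen or two in the U.S.
```

The exact counts shift with the assumed standard deviation and population, but the order of magnitude matches the figures above.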

They have zero doubts in their mind because most people don't see there's any doubt to be had.

In your view, is having doubt the result of a conscious consideration of whether one may be wrong? Or can one have doubt even before considering the matter?

And if it's true that under Bayes the probability of an event doesn't get updated if the prior is 1, regardless of the result. Then that proves Bayes is a poor heuristic for a belief system.

How does this property prove that Bayes' theorem is a poor heuristic? Since most people can change their minds given enough evidence, a Bayesian would infer that it's rare (if even possible) for someone's prior probability to be exactly 1 in real life. What is the issue with the Bayesian statement that hardly anyone holds a prior probability of exactly 1?

The links you provided showed one dictionary saying those things, therefore if I believe those dictionaries saying those things are wrong, I believe that one dictionary saying those things is wrong.

The links point to both dictionaries in question, not just one.

I explained that in the very next sentence.

Under my own notion, which I use in everyday life, "to assume" is not stronger than "to suppose", so my question still stands. How is the opposite statement being correct under your definitions relevant to his statement about his own definitions being "wrong" per se? What bearing do your definitions have on the intrinsic correctness of his definitions?

You literally said: «since most people here were under the impression that by an "assumption" you meant a "strong supposition"».

First, I attributed that to "most people here", not myself. Second, I was talking about their impression of your meaning of an "assumption", not their own prior notions of an "assumption". Personally, my prior notion places no relative strength between an "assumption" and a "supposition"; I would not hazard a guess as to how strong others' prior notions of an "assumption" are without asking them.

No, I believe most people outside of here would agree that when one assumes something it can mean that one doesn't have any level of doubt about it.

If someone reads your words, "Most people assume we are dealing with the standard arithmetic" (from your 2 + 2 post), do you believe that they are likely to understand that you mean, "Most people have zero doubt in their minds that we are dealing with the standard arithmetic"?

Yes, if that's what she believes, which the word "assume" does not necessarily imply.

On the submission for your 2 + 2 Substack post, you write:

Challenging the claim that 2+2 is unequivocally 4 is one of my favorites to get people to reconsider what they think is true with 100% certainty.

Are you saying that "assuming something is true" is different from "thinking something is true with 100% certainty", and that you are making two different points in your Substack post and submission? Or are you saying that one can "think something is true with 100% certainty" without "believing" that it is true?

Because she might be attempting to be a rational open-minded individual and actually be seeking the truth.

Then why does it matter whether or not anyone assumes anything? If people are capable of accepting evidence against what they think is true, regardless of whether they previously had 100% certainty, then why should anyone avoid having 100% certainty?

It's not impossible because of a fundamental aspect of reality: change.

It is impossible by my own prior notion of "believe with zero doubt", which corresponds to assigning the event a Bayesian probability equivalent to 1. By Bayes' theorem, if your prior probability of the event is 1, then your posterior probability of the event given any evidence must also be 1. Therefore, if your posterior probability is something other than 1 (i.e., you have some doubt after receiving the evidence), then your prior probability must not have been 1 (i.e., you must have had some amount of doubt even before receiving the evidence).
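The arithmetic here is a one-liner: plugging a prior of exactly 1 into Bayes' theorem leaves the posterior at 1 no matter what the evidence is. A minimal sketch (the particular likelihood numbers are illustrative, not from the discussion):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1.0 - prior))

# A prior of exactly 1 is immune to any evidence, however damning:
print(posterior(1.0, 0.01, 0.99))    # 1.0
# Any room for doubt at all, and the same evidence moves the belief:
print(posterior(0.999, 0.01, 0.99))  # ~0.91
```

The second call shows why the distinction matters: a prior of 0.999 looks nearly identical to a prior of 1 from the outside, yet only the former can actually be updated by evidence.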

I have barely any understanding of your concept of doubt, and this discrepancy appears to have caused a massive disconnect.

No, I said I believed if they said X, then they would be wrong.

This was after I linked to them saying it:

But if common usage recognized your boundaries, then the dictionaries would be flat-out wrong to say that, e.g., to believe something is to assume it, suppose it, or hold it as an opinion (where an opinion is explicitly a belief less strong than positive knowledge).

I believe they are. dictionary.com says "believe" is "assume", but Merriam-Webster does not. One of them has to be wrong.

When you said, "I believe they are", were you not referring to the dictionaries being "flat-out wrong to say [those things]"? Or did the links I provided not show them saying those things?

Because if you flip the definitions they are entirely correct under my view.

How does this imply that his definitions are "wrong" when they are not flipped?

Even under your view "assume" is stronger than "suppose"

Where do I say that?

First, to make sure I'm not putting more words into your mouth: Would you say that most people outside of here would agree that when one assumes something, one cannot have any level of doubt about it?


I bombarded ChatGPT with questions about the matter, and everything aligned to my notion, for example "If Alice believes claim X is true with zero doubt, can she change her mind?" it answered "Yes", which is obvious to me.

That's not at all obvious to me. As it turns out, your notion of "believe with zero doubt" is very likely different than mine! So that I understand what your notion is: If, at a given point in time, Alice believes with zero possible doubt that the box contains nothing but a dog, then does she also believe with zero possible doubt that she will never receive unequivocal evidence otherwise? If so, does she believe there is a 0% chance that she will receive unequivocal evidence otherwise?

Alice believes claim X with zero doubt in one moment, but then receive evidence contradicting that belief (which was assumed in the first place), why wouldn't she change her mind?

The evidence doesn't unequivocally contradict her belief: it could be the case that the box contains only a dog, but she misheard where the meow came from, or the dog is able to make a meowing sound. If she was previously absolutely certain that a dog is in the box, then why wouldn't she adopt one of the alternative hypotheses compatible with both her assumption and the evidence?


How does this not align precisely to my notion? I didn't even use the term "assume" throughout the question, I used it only to verify the outcome.

By my prior notion of "believe with zero doubt", your prompt is vacuous, since it is impossible that "Alice believes claim X is true with zero doubt" but also "changes her mind", since if she can change her mind, then she didn't actually have zero doubt. Under that notion, ChatGPT is logically permitted to output whatever it wants, since it is not consistently capable of detecting absurdities in its input.

But more practically speaking, to ChatGPT, "zero doubt" or "absolute certainty" can be far from absolute:

Alice has a box. Alice believes with absolutely zero doubt that the box contains only a dog. Then, Alice hears a meow coming from the box. Might she now believe that a cat is in the box?

Yes, if Alice hears a meow coming from the box, it would likely cause her to question her belief that only a dog is in the box and she may now believe that a cat is also in the box. The sound of a meow would be strong evidence that contradicts her previous belief that only a dog is in the box, and would likely lead her to update her belief.

Then did Alice truly have absolutely zero doubt in her previous belief?

It is difficult to say for certain without more information about Alice's thought process and the context of the situation. However, it is possible that Alice did not truly have absolutely zero doubt in her previous belief that the box contained only a dog. In general, it is difficult for anyone to be completely certain about anything, and it is possible that Alice had some level of uncertainty or doubt about her belief, even if she did not explicitly acknowledge it.

If Alice did truly have absolutely zero doubt in her previous belief, then how could strong but inconclusive evidence have caused her to question her belief?

If Alice truly had absolutely zero doubt in her previous belief that the box contained only a dog, then it would be unlikely for her to change her belief based on the sound of a meow alone, as it would not be strong enough evidence to overcome her absolute certainty. In this scenario, other evidences or logical reasoning would have to be provided in order to change her belief.

It's important to note that the concept of "absolute certainty" or "absolutely zero doubt" is a theoretical concept, and it's very rare in real-life scenarios, most of the time people have some level of uncertainty. Even in the case where Alice is convinced that only a dog is in the box, a sudden meow would cause her to question this belief, as it is not consistent with her previous belief, and would lead her to investigate and gather more information about the situation.

So whenever you tell ChatGPT that Alice has "zero doubt" or "absolute certainty", it may be inferring that you're probably mistaken or exaggerating (since many people exaggerate all the time), and that Alice is strongly but not absolutely convinced. That's my alternative explanation for the output you've posted.


No, I said: if a dictionary says that to believe something is to assume it, then I believe it's wrong. I did not say the dictionary is wrong, I said that I believe it is wrong.

The first time, you indeed said you believe that the dictionaries are wrong. But the second time, you said:

He replied that when you assume something, you're not entirely sure whether or not it's true, but when you suppose something, you have some kind of predetermined knowledge that it's true.

He is wrong: it's the other way around.

How is he "wrong" about his own notion of an assumption?

To me it said: «to "assume" something is to accept it as true without proof of evidence». That to me doesn't include doubt, because it's true a priori: it's just true.

So would you say that ChatGPT disagrees with your notion of "assuming" in my example? If not, then how could Alice change her mind from the indirect evidence, if she had zero doubt that there was only a dog in the box?

I don't have to show that my notion is shared by everyone, because I did not claim that, all I need to show is that your notion of "strong supposition" is not shared by everyone, and you yourself proved that.

You're calling people (like the dictionary author, or the second person I questioned) "wrong" when they say that you can "assume" something while still doubting it to some extent. Why are they "wrong", instead of being "right" about their own notion that is distinct from your notion?

Many definitions on all dictionaries are circular. Language is not an easy thing, which is why AI still has not been able to master it.

Sure, my point is just that your meaning can't be supported by that definition alone. Even if we say that "to assume" is the same as "to take as granted or true", that isn't sufficient to refute my notion that in common usage, neither "to assume" nor "to take as granted or true" necessarily implies zero possible doubt.

No, that's not what the definition is saying. "[[[judge true] or deem to be true] as true or real] or without proof". There is no possibility of doubt. It's judged/deemed/considered to be true.

That particular dictionary says the exact opposite of what you're saying. To "judge" is "to infer, think, or hold as an opinion; conclude about or assess" (def. 10), and an "opinion" is "a belief or judgment that rests on grounds insufficient to produce complete certainty" (emphasis mine; notice how its author thinks one can be uncertain about a judgment?). So if you want a dictionary to support you on that, you'll have to find another dictionary.

I believe they are. dictionary.com says "believe" is "assume", but Merriam-Webster does not. One of them has to be wrong.

That's the whole reason dictionaries exist: people disagree.

Or perhaps both dictionaries are sometimes correct, sometimes incorrect, and sometimes partially correct, since in real life people can have subtly or obviously different understandings of terms depending on the context. That's the whole thesis of "The Categories Were Made for Man, Not Man for the Categories": nearly all our categories are fuzzy and ill-defined, but they're still useful enough that we talk about them anyway. So in general usage, people don't usually resolve ambiguity by refining their terminology (since hardly anyone else would recognize it), but instead by inserting enough qualifications and explanations that their point hopefully gets across to most of the audience.


BTW. I used ChatGPT and asked it if it saw any difference between "assume" and "suppose", and it 100% said exactly what is my understanding.

I asked ChatGPT the question, and the interpretation it produced is certainly far less strong than your standard of "zero possible doubt" regarding an assumption:

What is the difference between assuming something and supposing something?

Assuming and supposing are similar in that they both involve accepting something as true without proof. However, "assuming" often carries a connotation of confidently accepting something as true, while "supposing" suggests tentativeness or uncertainty. For example, "I assumed the train would be on time" implies a level of confidence, while "I supposed the train would be on time" implies some level of doubt. So, in general assuming is more of a confident and sure statement, supposing is more of a tentative and uncertain statement.

I wouldn't say that being "confident" about something implies that you necessarily have zero possible doubt. But even if you disagree on that, ChatGPT doesn't act on such a strict definition in practice. For instance, it produced the following exchange:

Alice has a box. Alice assumes that the box only contains a dog. What does Alice think is in the box?

Alice thinks that there is a dog in the box.

Alice hears a meow coming from the box. What does Alice think is in the box now?

Since Alice hears a meow coming from the box, she may now think that there is a cat in the box instead of a dog. Her assumption of only a dog in the box would be challenged by the new information of the meow.

If Alice had absolutely zero doubt that the box contained a dog, then her belief could not be challenged in that way: she'd have to conclude that the dog can meow, or that the meow came from outside the box.


Since I'm not one to trust ChatGPT's output to be representative of anything, I decided to ask some people in real life about it.

First, I asked a friend, "What do you think is the difference between assuming something and supposing something?" He replied that the difference is that you assume something before it occurs, but you suppose it while it's occurring or after it occurs.

I asked the same question to a stranger at the bus stop. He replied that when you assume something, you're not entirely sure whether or not it's true, but when you suppose something, you have some kind of predetermined knowledge that it's true.

Finally, I asked the same question to a stranger in a hallway. After several seconds of thought, she replied that she had no clue, then her friend chimed in to say she also had no clue.


ChatGPT, the dictionaries I've checked, and the ordinary people I've asked all give different definitions of "assume" and "suppose", none of which include your standard of zero possible doubt in order to assume something. Therefore, I have strong evidence to believe that in common usage, the terms have no fixed meaning beyond "to accept as true without proof"; all else is vague connotation that can be overridden by context.

What evidence do you have that common usage recognizes your hard boundary, so hard that to cross it is to be unambiguously incorrect?

> There's a difference between most people and most people "here". My understanding of "assume" is in accordance with many dictionaries, for example: to take as granted or true.

And something that is "granted" is "assumed to be true", by the same dictionary. The definition is circular: it doesn't lead to your interpretation of "to assume" as "to believe true with absolutely zero possible doubt".

Besides, the dictionary argument can be taken in any direction. Per Dictionary.com, "to assume" is "to take for granted or without proof", "to take for granted" is "to consider as true or real", "to consider" is "to regard as or deem to be true", and "to regard as true" is "to judge true". That leads to the usage of the term by many here, where to make an assumption about something is to make a strong judgment about its nature, while still possibly holding some amount of doubt.

You draw strong boundaries between these epistemic terms. But if common usage recognized your boundaries, then the dictionaries would be flat-out wrong to say that, e.g., to believe something is to assume it, suppose it, or hold it as an opinion (where an opinion is explicitly a belief less strong than positive knowledge). That's why I suspect that your understanding of the terms is not aligned with common usage, since the dictionaries trample all over your boundaries.


Also, I think that "certainty" in a Bayesian context is best treated as a term of art, equivalent to "degree of belief": a measure of one's belief in the likelihood of an event. It's obviously incompatible with the everyday notion of something being certainly true, but just using the term of art in context doesn't mean one is confusing it with the general term. After all, mathematicians can talk about "fields" all the time without confusing them with grassy plains.
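To make the term-of-art reading concrete, here's a minimal sketch (my own illustration, not anything from the thread) of how a Bayesian "degree of belief" behaves as a number between 0 and 1 that shifts with evidence, rather than as a binary believe/don't-believe switch. The specific probabilities are made up for the Alice-and-the-box scenario above:

```python
# A Bayesian "degree of belief" is a probability, updated by evidence,
# not a binary believe/don't-believe switch.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability of a hypothesis after new evidence."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Alice starts out quite confident ("assumes") the box contains only a dog.
p_dog = 0.95

# She hears a meow: very unlikely if the box holds only a dog,
# quite likely otherwise. (Likelihoods are invented for illustration.)
p_dog = bayes_update(p_dog, likelihood_if_true=0.01, likelihood_if_false=0.9)

print(round(p_dog, 3))  # prints 0.174: confidence drops sharply, but not to zero
```

The point being that "certainty" in this sense is a dial, not a switch: Alice's assumption can be heavily challenged by the meow without her belief ever having been, or becoming, an absolute.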

> My beliefs are binary. Either I believe in something or I don't. I believe everyone's beliefs are like that. But people who follow Bayesian thinking confuse certainty with belief.

In your view, is "believing" something equivalent to supposing it with 100% certainty (or near-100% certainty)?

I have a strong suspicion that your epistemic terminology is very different from most other people's, and they aren't going to learn anything from your claims if you use your terminology without explaining it upfront. For instance, people might have been far more receptive to your "2 + 2" post if you'd explained what you mean by an "assumption", since most people here were under the impression that by an "assumption" you meant a "strong supposition". So it's hard to tell what you mean by "people who follow Bayesian thinking confuse certainty with belief" if we misunderstand what you mean by "certainty" or "belief". Is a "belief" a kind of "supposition", or is it something else entirely?

> The greater the difficulty, the more glory in surmounting it. Skillful pilots gain their reputation from storms and tempests. — Epicurus.

There was some discussion about this quote a few days ago on ACX. It's plastered all over the Internet, attributed to either Epicurus or Epictetus, but no one there could determine which work it came from. I did some searching around and found that it actually came from an essay by the 17th-century Frenchman Jean-François Sarasin, falsely attributed to Charles de Saint-Évremond by its publisher, translated into English as an appendix to a translation of Epicurus, then abridged into its current form in the popular book of quotations The Rule of Life. I briefly wrote up the results of my investigation there, which some may find interesting. That is the short version, though: it really took me a couple of days to trawl through all the variations on Google Books. Apparently, back in the day, random aphorisms were very frequently used to fill up empty space in the corners of magazines.

Personally, I find that if I get little sleep one night (or no sleep at all), then I just get really drowsy the following afternoon, but recover within a couple hours. How awake I feel in the morning seems to mostly depend on how regular I keep my sleep schedule.

If you publish the notebook under the belief that people will execute it, then you would not be protected. Intent doesn't really care about how direct or indirect you make the implementation; all that changes is the difficulty of proving it.

Once, I noticed a person talking about an event which occurred "on the 5th Dezember". I presume they spoke German, the phrase being a literal rendering of "am 5. Dezember".

> By checking whether or not the person considers the possibility of the claim being not necessarily true. And if not, whether or not the claim is substantiated by evidence or reason.

By "the claim being not necessarily true", are you referring to the possibility that the claim's originator is expressing a belief contrary to truth, or the possibility that the claim's recipient is interpreting the claim differently, in such a way as to make the received belief incorrect? The examples in your original post are of the latter, but I'd usually understand substantiation as a property of a belief having already been shared and correctly interpreted.

It would also seem that the former is far easier than the latter. If you know that you're correctly understanding the belief being expressed by a claim, then you can simply compare the belief to your own worldview, and doubt it according to how likely the alternatives appear to be true. But evaluating how much you may be misinterpreting a claim is a far different challenge: you have to map out the space of possible beliefs in the originator's mind that could have plausibly led to that particular claim, accounting for how the originator's thoughts might look far different from your own.

> I suppose (not assume) that your question was rhetorical, and you actually believe I cannot answer it in truth, because you believe in every conversation all participants have to make assumptions all the time. But this is tentative, I do not actually know that, therefore I do not assume that's the case.

My main intent was to elucidate what you don't consider to be an assumption, to determine whether I've been misunderstanding your meaning of the term. Your separation of suppositions from assumptions appears to answer this question in the positive.

> The fact that somebody appears to be making an assumption doesn't necessarily means that he is.

How does one distinguish between someone making an assumption, and someone only appearing to be making an assumption? You have claimed that some statements by others contain assumptions, and you have claimed that some statements only contain suppositions that appear like assumptions. But I don't understand exactly how you're evaluating statements to determine this.

> You do have a choice: don't make assumptions.

I suspect that this choice is impossible to consistently make. So that I can better understand what you're asking for, could you give me an example of a conversation in which one participant doesn't make any assumptions about the meaning of another?

> Where did I "assume" that in my last comment?

You said, "The 'laws of arithmetic' that are relevant depend 100% on what arithmetic we are talking about," which is only meaningful under your usage of "laws of arithmetic" and does not apply to the term as I meant it in my original comment.

> That's not an argument, you are just stating your personal position. You are free to do whatever you want, if you don't want to doubt a particular "unequivocal" claim, then don't. Your personal position doesn't contradict my claim in any way.

To quote myself:

> there is no choice but to make assumptions of terms ordinarily having their plain meanings, to avoid an infinite regress of definitions used to clarify definitions.

To rephrase that, communication relies on at least some terms being commonly understood, since otherwise you'd reach an infinite regress. As a consequence, there must exist terms that have an unambiguous "default meaning" in the absence of clarification. But how do we decide which terms are unambiguous? Empirically, I can decide that a widespread term has an unambiguous default meaning if I have never heard anyone use the term contrary to that meaning in a general context, and if I have no particular evidence that other people are actively using an alternative meaning in a general context. I believe it reasonable to set the bar here, since any weaker criterion would result in the infinite-regress issue.

> Because that's what skepticism demands. I assert that 100% certainty on anything is problematic, which is the reason why skepticism exists in the first place.

Sure, if someone writes "2 + 2 = 4", it isn't 100% certain that they're actually making a statement about the integers: perhaps they're completely innumerate and just copied the symbols out of a book because they look cool. I mean to say that it's so unlikely that they're referring to something other than integer arithmetic that it wouldn't be worth my time to entertain the thought, without any special evidence that they are (such as it being advertised as a "puzzle").

If you were to provide real evidence that people are using this notation to refer to something other than integer arithmetic in a general context, then I would be far more receptive to your point here.
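For what it's worth, the underlying point that the same symbols can denote different results under a different algebraic structure is easy to demonstrate; the question is only whether anyone actually intends such a structure in a general context. A trivial illustration of my own, using arithmetic modulo 3:

```python
# Under ordinary integer arithmetic, "2 + 2" denotes 4.
print(2 + 2)        # prints 4

# Under arithmetic modulo 3, the same symbols denote a different result.
print((2 + 2) % 3)  # prints 1
```

Absent advertisement as a puzzle or some other contextual cue, though, nobody reads "2 + 2 = 4" as a claim about the integers mod 3.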


Indeed, how do you know that your interlocutors are "100% certain" that they know what you mean by "2 + 2"? Perhaps they're "100% certain" that "2 + 2 = 4" by the rules of integer arithmetic, but they're independently 75% certain that you're messing with them, or setting up a joke.