The premises are logically independent from each other: only the conclusions are derived from the premises. If you reject any of the premises, then the entire argument is moot. The point of the argument is to show that rejecting the ultimate conclusion requires one to reject one or more of the premises (or to show that the inference does not hold).

I don't follow him that closely so maybe he has, but I haven't seen Marinos himself make anywhere near so strong a claim as "covering up hundreds of thousands of deaths, using bogus statistical analysis to fool everyone".

Reading this post, it would appear that Marinos is trying to endorse this viewpoint. He uncritically refers to Gøtzsche "explaining how prescription drugs are the third leading cause of death", which would add up to hundreds of thousands of deaths annually when applied to mainstream leading-cause-of-death tables. Marinos doesn't really add much additional analysis in this post, likely because it was adapted from a Twitter thread. Also, Marinos quotes an author who blames "evidence-based medicine" practitioners for propagating lies that line the pharmaceutical industry's pockets, and he himself blames government agencies for making policy decisions based on "evidence-based medicine" during the COVID-19 pandemic; I'd assume that the pharmaceutical companies (and those colluding with them) are to be interpreted as the ultimate liars. Marinos only seems to back off slightly from the accusations in his conclusion.

Even though the modern progressive "blame Whiteness" position is full of holes, there's still plenty of room for "cultural improvement" positions (which I am somewhat partial to myself) before going for the full HBD explanation. In the American context, positions in that direction have been espoused by both black conservatives and classical Marxists. Naturally, the big difference is in their prescriptions: the former call for the black population to adopt diligence and responsibility to lift itself up, while the latter consider the original prejudice, the current top-down progressive overtures, and the calls for "rugged individualism" to all be tricks to distract the oppressed from rising up against their real oppressors (i.e., the stupidpol position, although I've heard similar things independently from a vocal Marxist friend).

There's no need to "refute" the existence of such a society, because it does not exist, by observation.

My apologies, I misworded that. I meant to express the possibility of such a society.

This model seems to be multiplying entities unnecessarily.

Occam's razor is a principle, not a universal law, especially in the social sciences with their confounders upon confounders. The simplest possible strawman HBD model of "higher IQ invariably implies greater relative success" can easily be refuted by the various pre-industrial empires that rose and fell due to environmental factors: ancient Egypt, which could repeatedly re-form around the Nile valley even when the government collapsed; dynastic China, which couldn't survive contact with the industrialized West; or the Central and South American empires, which couldn't prove themselves one way or another before being decimated by smallpox.

I'll admit that there haven't been so many clear counterexamples to the "naive HBD" model following the Industrial Revolution in Europe, although it would predict that China and/or Japan will ultimately prevail over the West. The cultural model would attribute the Industrial Revolution to the combination of an environment demanding industrial solutions and a society stable enough to develop them, where the societal stability came from historical and cultural happenstance rather than being predetermined by HBD factors.

Not only do the two well-known justifications you just mentioned argue against each other, they also fail to conform with the observable outcomes. We know that some groups have bad outcomes whether being actively discriminated against or "helped". We know that other groups have bad outcomes when actively discriminated against and do much better when they no longer are.

The two justifications can be aligned pretty easily with a basic path-dependence model: when one cultural group is threatened by another, it either fails to defend itself and becomes persistently unsuccessful, or defends itself and becomes persistently successful, and this initial failure or success can be attributed to temporary environmental, military, or political conditions. Under this model, even if an unsuccessful group receives political or economic "help", it cannot become inherently successful unless its culture changes. (Thus leading to the old debate over whether and how culture can be intentionally changed.)

But the big "advantage" of the cultural explanation is it's difficult enough to disentangle it from genetics that it allows HBD to be unfalsifiably denied.

While it's true that disentangling cultural factors is difficult when trying to explain the overall success of a group, it's a very big mistake to take this as active evidence against culture's importance. I'd also put myself into the "mostly cultural, somewhat genetic" camp. To me, none of the current evidence can plausibly refute the possibility (edit; originally "existence") of a society with a common culture in which no genetic group is far more or less successful than the others, with the genetic factors only showing up as numerical discrepancies.

In other words, under this model, even if pure HBD explains some differences in group outcomes, it does not explain the vast differences in poverty, criminality, etc., seen in our current society. Explanations based on cultural coincidence have plenty of well-known justifications for these, such as past prejudice resulting in persistent negative outcomes, or groups facing hardship becoming more successful through cultural selection. Why shouldn't the pro-HBD crowd have to similarly justify its position that a higher-IQ population (either on average or in the upper tail) will almost invariably result in a far more successful culture?

Indeed. I suppose that the next step of the defense would be that society persistently undervalues art-as-expression: if the general public were aware of its full value, they would pay for art-as-expression, but structural factors and the lack of quantifiable benefits make that awareness implausible in the near future. (Compare this to the animal-welfare activist who fights against factory farmers' greed and consumers' apathy: they believe that if the public were aware of the full value of animal welfare, then animal-protection laws would be passed in a heartbeat.)

In this scenario, the best outcome, short of formal subsidies for artists, would perhaps be a large-scale donation model, much like the one many orchestras and museums rely on today. But this is still much less accessible to artists than the pre-AI status quo, where art-as-expression maintained a safe existence as a byproduct of art-as-a-product. So it would still make sense for those who value art-as-expression to lament this change beyond the effects on their own lifestyles, given that this particular Pandora's Box isn't getting closed any time soon.

Using Google Books, I found two English-language usages of the term from the 1850s; all of the earlier usages appear to be OCR errors. However, there appear to be earlier usages of "Reification" in German and "réification" in French, so I plan to keep looking for those.

"The Principle of the Grecian Mythology: Or, How the Greeks Made Their Gods." Fraser's Magazine for Town and Country, vol. 49, no. 289, Jan. 1854, pp. 69-79. Internet Archive, archive.org/details/sim_frasers-magazine_1854-01_49_289/page/69.

In short, although the process by which the Greeks selected the objects of their Pantheon may very well, in the sense in which we are now viewing the subject, be regarded as a process of deification, the actual march of the Greek mind in its intercourse with nature was not a process of deification, or the conscious conversion of impersonal substances into gods, but the very reverse—a process of what may be called reification, or the conscious conversion of what had hitherto been regarded as living beings into impersonal substances. ("The Principle" 74-75)

This is the earliest English-language usage I could find.

Review of A History of Rome, from the Earliest Times to the Establishment of the Empire, by Henry G. Liddell. The Athenæum, no. 1467, 8 Dec. 1855, pp. 1425-1427. Internet Archive, archive.org/details/sim_athenaeum-uk_1855-12-08_1467/page/1425.

Primeval men began with a world all vitality, and instead of having any room or occasion to employ themselves in what we call deification or the conversion of things into personages, their whole intellectual procedure necessarily consisted in exactly the opposite—in a gradual and difficult effort of reification, or the conversion of personages into things. (Review of A History 1425)

The reviewer here appears to be repeating the argument from Fraser's Magazine, contra Liddell.

Many definitions on all dictionaries are circular. Language is not an easy thing, which is why AI still has not been able to master it.

Sure, my point is just that your meaning can't be supported by that definition alone. Even if we say that "to assume" is the same as "to take as granted or true", that isn't sufficient to refute my notion that in common usage, neither "to assume" nor "to take as granted or true" necessarily implies zero possible doubt.

No, that's not what the definition is saying. "[[[judge true] or deem to be true] as true or real] or without proof". There is no possibility of doubt. It's judged/deemed/considered to be true.

That particular dictionary says the exact opposite of what you're saying. To "judge" is "to infer, think, or hold as an opinion; conclude about or assess" (def. 10), and an "opinion" is "a belief or judgment that rests on grounds insufficient to produce complete certainty" (emphasis mine; notice how its author thinks one can be uncertain about a judgment?). So if you want a dictionary to support you on that, you'll have to find another dictionary.

I believe they are. dictionary.com says "believe" is "assume", but Merriam-Webster does not. One of them has to be wrong.

That's the whole reason dictionaries exist: people disagree.

Or perhaps both dictionaries are sometimes correct, sometimes incorrect, and sometimes partially correct, since in real life people can have subtly or obviously different understandings of terms depending on the context. That's the whole thesis of "The Categories Were Made for Man, Not Man for the Categories": nearly all our categories are fuzzy and ill-defined, but they're still useful enough that we talk about them anyway. So in general usage, people don't usually resolve ambiguity by refining their terminology (since hardly anyone else would recognize it), but instead by inserting enough qualifications and explanations that their point hopefully gets across to most of the audience.


BTW. I used ChatGPT and asked it if it saw any difference between "assume" and "suppose", and it 100% said exactly what is my understanding.

I asked ChatGPT the question, and the interpretation it produced is certainly far less strong than your standard of "zero possible doubt" regarding an assumption:

What is the difference between assuming something and supposing something?

Assuming and supposing are similar in that they both involve accepting something as true without proof. However, "assuming" often carries a connotation of confidently accepting something as true, while "supposing" suggests tentativeness or uncertainty. For example, "I assumed the train would be on time" implies a level of confidence, while "I supposed the train would be on time" implies some level of doubt. So, in general assuming is more of a confident and sure statement, supposing is more of a tentative and uncertain statement.

I wouldn't say that being "confident" about something implies that you necessarily have zero possible doubt. But even if you disagree on that, ChatGPT doesn't act on such a strict definition in practice. For instance, it produced the following exchange:

Alice has a box. Alice assumes that the box only contains a dog. What does Alice think is in the box?

Alice thinks that there is a dog in the box.

Alice hears a meow coming from the box. What does Alice think is in the box now?

Since Alice hears a meow coming from the box, she may now think that there is a cat in the box instead of a dog. Her assumption of only a dog in the box would be challenged by the new information of the meow.

If Alice had absolutely zero doubt that the box contained a dog, then her belief could not be challenged in that way: she'd have to conclude that the dog can meow, or that the meow came from outside the box.


Since I'm not one to trust ChatGPT's output to be representative of anything, I decided to ask some people in real life about it.

First, I asked a friend, "What do you think is the difference between assuming something and supposing something?" He replied that the difference is that you assume something before it occurs, but you suppose it while it's occurring or after it occurs.

I asked the same question to a stranger at the bus stop. He replied that when you assume something, you're not entirely sure whether or not it's true, but when you suppose something, you have some kind of predetermined knowledge that it's true.

Finally, I asked the same question to a stranger in a hallway. After several seconds of thought, she replied that she had no clue, then her friend chimed in to say she also had no clue.


ChatGPT, the dictionaries I've checked, and the ordinary people I've asked all give different definitions of "assume" and "suppose", none of which include your standard of zero possible doubt in order to assume something. Therefore, I have strong evidence to believe that in common usage, the terms have no fixed meaning beyond "to accept as true without proof"; all else is vague connotation that can be overridden by context.

What evidence do you have that common usage recognizes your hard boundary, so hard that to cross it is to be unambiguously incorrect?

Personally, I find that if I get little sleep one night (or no sleep at all), then I just get really drowsy the following afternoon, but recover within a couple hours. How awake I feel in the morning seems to mostly depend on how regular I keep my sleep schedule.

If you publish the notebook under the belief that people will execute it, then you would not be protected. Intent doesn't really care about how direct or indirect you make the implementation; all that changes is the difficulty of proving it.

That scenario also makes sense. It fits with the general concept that a superintelligent hostile AGI (if one is possible) would use current or near-future technology at the outset for security, instead of jumping straight to sci-fi weaponry that we aren't even close to inventing yet. Of course, all of this depends on the initial breach being detectable; if the AGI could secretly act in the outside world for an extended time, then it could perform all the R&D it needs. How easy it would be to shut down if detected would probably depend on how quickly it could decentralize its functions.

The claims of hidden deaths in particular seem to come entirely from Gøtzsche. The rest of the sources mainly discuss the replication crisis in medical efficacy, alongside their various preferred solutions. Marinos blames the authorities and medical profession for making decisions based on flawed research to further their own ends, against the interest of the public. Personally, I think that Marinos takes his claims of conspiracy much farther than the evidence would justify; if a reader holds Scott's evaluation of orthodox medical information as generally trustworthy (modulo regulatory friction preventing effective drugs from being sold and preventing promising drugs from being tested, and new drugs' efficacy relative to their predecessors being oversold), this post in particular isn't going to change their mind, since beyond the standard replication-crisis stuff it's mostly an appeal to heterodox authorities such as Gøtzsche and Charlton.

I currently hold a similar position wrt. efficacy vs. active harm. The claims of drugs being actively harmful to the population seem like they mostly come from Gøtzsche's work. I do not know whether or by how much he may have exaggerated these claims. In the meantime, here are all the references on harm I could find in this post:

On BIA 10-2474:

Butler, D., & Callaway, E. (2016, January 21). Scientists in the dark after French clinical trial proves fatal. Nature, 529(7586), 263–264. https://doi.org/10.1038/nature.2016.19189

On fialuridine:

Honkoop, P., Scholte, H. R., de Man, R. A., & Schalm, S. W. (1997). Mitochondrial injury: Lessons from the fialuridine trial. Drug Safety, 17(1), 1–7. https://doi.org/10.2165/00002018-199717010-00001

On TGN1412:

Attarwala, H. (2010). TGN1412: From discovery to disaster. Journal of Young Pharmacists, 2(3), 332–336. https://doi.org/10.4103/0975-1483.66810

Wadman, M. (2006, March 23). London's disastrous drug trial has serious side effects for research. Nature, 440(7083), 388–389. https://doi.org/10.1038/440388a

The bulk of Peter C. Gøtzsche's claims (which probably contain several more references):

Gøtzsche, P. C. (2013). Deadly medicines and organized crime: How big pharma has corrupted healthcare. CRC Press. https://doi.org/10.1201/9780429084034

As it happens, I found that Grote usage a couple hours after my initial message. Note that the version you linked to is the 1851 3rd edition; the only 1st-edition scan I could find on IA is missing the title page but otherwise seems intact.

He literally said there was no possibility of X being true: "Do you accept the possibility that X may be true?" "No".

By X I suppose you refer to the statement "2 + 2 = 4 is not unequivocally true". Perhaps by the statement "it is possible that X is true" (which I'll call Y), you meant that "there exists a meaning of the statement X which is true". However, I believe he interpreted Y as something to the effect of, "Given the meaning M which I would ordinarily assign to the statement X, there exists a context in which M is true." It is entirely possible that the proposition he means by Y is unequivocally false, even though the proposition you mean by Y is unequivocally true: that is, he misinterpreted what you meant by Y.

In particular, it is my understanding that when you say X, you mean, "There exists a meaning of the statement '2 + 2 = 4' which is false." You demonstrate this in your original post, so that provides an example of a meaning of X which is true. But I believe that his meaning M of X is something to the effect of, "Given the meaning M´ which I would ordinarily assign to the statement '2 + 2 = 4' (i.e., a proposition about the integers or a compatible extension thereof), there exists a context in which M´ is false." Since the proposition "2 + 2 = 4" about the integers can be trivially proven true, he believes with certainty that M´ is unequivocally true, thus M is unequivocally false, thus "it is impossible that X is true" (by his own meaning).

(In fact, I still wouldn't say that his belief that M is false has probability 1, but it is about as close to 1 as it can get. It's just that to convince him that M is true, you'd need an even more trivial mathematical proof of ¬M´ which he can understand, and he believes with probability as-close-to-1-as-possible that such a counterproof does not exist, since otherwise his life is a lie and basically all of his reasoning is compromised.)


You are forgetting the context of this subthread. In this subthread we are not talking about what I mean, we are talking about the definition that one random stranger gave you, which I claimed goes contrary to your claim.

You claimed: «most people here were under the impression that by an "assumption" you meant a "strong supposition"».

In this subthread X is "strong supposition", it's your view that most people's definition of "assumption" is "strong supposition", you provided different examples of people you asked, and one of them gave you the exact opposite: that "supposition" was a "strong assumption". This is the opposite of what you claimed most people were under the impression of.

You keep forgetting the context of the claims you are making.

So be it. I'll grant that my claim there was made based on a hasty impression of the other comments, and I do not actually know for sure whether or not most people on this site inferred a meaning of your words precisely compatible with my earlier statement. But I did not make that claim for its own sake; I made it in service of my original argument. (In fact, most of what I've been saying has been intended to relate to my original argument, not to that particular claim. But I have not been at all clear about that; my apologies.)

Having thought about it a bit more, I'll defend a weaker position, which I believe is still sufficient for my original argument. Most people in general, when they hear someone say that a person "assumes" something, infer (in the absence of evidence otherwise) that what is most likely meant is that the person's state of mind about that thing lies within a particular set S, and S includes some states of mind where the person still has a bit of doubt about that thing.

Thus, if someone says a person "doesn't assume" something, most people would infer that they most likely mean that the person does not harbor any state of mind within S, and consequently does not harbor any of the states of mind that are within S but include a level of doubt.

Would you say that by "not making assumptions", you specifically mean "not thinking things are true with zero possible doubt"? Because if so, then everyone whose inferred set S includes states of mind with nonzero doubt would have misinterpreted the message of your post, if they had not already found evidence of your actual meaning. Thus my real claim, that most people "aren't going to learn anything from your claims if you use your terminology without explaining it upfront" (which is an exaggeration: I mean that most people, just looking at your explanations in your post, are unlikely to learn what you apparently want them to learn).

I completely disagree with that statement. Most people cannot change their minds regardless of the evidence. In fact in the other discussion I'm having on this site of late the person is obviously holding a p=1 (zero room for doubt).

Perhaps they assign their belief a probability different than 1, but they don't consider your evidence very strong. But I can't say for certain, since I haven't seen the discussion in question. How do you know that your evidence is so strong that they would change their mind if they had any room for doubt?

That is what we are talking about: it's your view that most people ascribe the meaning of X, if a person ascribes the meaning opposite of X, that is opposite to your view.

Any term X has several possible meanings. When one says the term X, one generally has a particular meaning in mind. And when one hears the term X, one must determine which meaning the speaker is using, if one wishes to correctly understand what point is being made. Usually, one infers this from the surrounding text, alongside one's knowledge of which meanings are often used by other speakers in a similar context. But one can simultaneously infer one meaning in one speaker's words, infer another meaning in another speaker's words, and use an entirely different meaning in one's own words.

When you say that we should not "assume" something, it is my understanding that you mean that we should not think that something is true with zero possible doubt. It is also my understanding that you do not mean that we should never suppose something strongly with little evidence.

What I allege is that most people, when attempting to determine your meaning of "assume", do not rule out the latter meaning. And since most speakers, in most of their speech and writing, include the possibility of doubt in their meaning of "assume", most people are likely to incorrectly infer that you probably include the possibility of doubt in your meaning of "assume".

Therefore, when they determine your meaning of "not assume", they are likely to infer that you mean something closer to "not suppose something strongly with little evidence" than "not think that something is true with zero possible doubt". (It isn't relevant here whether they think either of "assume" or "suppose" is somewhat stronger than the other: what matters is that there exist certain states of mind including some level of doubt, and they incorrectly infer that by saying we should "not assume" things you mean that we should not hold any of those states of mind.)

I'm not saying that most people are unable to understand your terminology, or that your terminology is inherently wrong. I'm saying that most people aren't very familiar with your terms, and they're likely to infer meanings that are overly inclusive. This makes the inferred negations of your terms (e.g., "not assume") overly exclusive, which makes most people miss your point. Thus my original request that you clarify your terminology upfront.

They have zero doubts in their mind because most people don't see there's any doubt to be had.

In your view, is having doubt the result of a conscious consideration of whether one may be wrong? Or can one have doubt even before considering the matter?

And if it's true that under Bayes the probability of an event doesn't get updated if the prior is 1, regardless of the result. Then that proves Bayes is a poor heuristic for a belief system.

How does this property prove that Bayes' theorem is a poor heuristic? Since most people can change their minds given enough evidence, a Bayesian would infer that it's rare (if even possible) for someone's prior probability to be exactly 1 in real life. What is the issue with the Bayesian statement that hardly anyone holds a prior probability of exactly 1?

The links you provided showed one dictionary saying those things, therefore if I believe those dictionaries saying those things are wrong, I believe that one dictionary saying those things is wrong.

The links point to both dictionaries in question, not just one.

I explained that in the very next sentence.

Under my own notion, which I use in everyday life, "to assume" is not stronger than "to suppose", so my question still stands. How is the opposite statement being correct under your definitions relevant to his statement about his own definitions being "wrong" per se? What bearing do your definitions have on the intrinsic correctness of his definitions?

You literally said: «since most people here were under the impression that by an "assumption" you meant a "strong supposition"».

First, I attributed that to "most people here", not myself. Second, I was talking about their impression of your meaning of an "assumption", not their own prior notions of an "assumption". Personally, my prior notion places no relative strength between an "assumption" and a "supposition"; I would not hazard to guess how strong others' prior notions of an "assumption" are without asking them.

No, I believe most people outside of here would agree that when one assumes something it can mean that one doesn't have any level of doubt about it.

If someone reads your words, "Most people assume we are dealing with the standard arithmetic" (from your 2 + 2 post), do you believe that they are likely to understand that you mean, "Most people have zero doubt in their minds that we are dealing with the standard arithmetic"?

Yes, if that's what she believes, which the word "assume" does not necessarily imply.

On the submission for your 2 + 2 Substack post, you write:

Challenging the claim that 2+2 is unequivocally 4 is one of my favorites to get people to reconsider what they think is true with 100% certainty.

Are you saying that "assuming something is true" is different from "thinking something is true with 100% certainty", and that you are making two different points in your Substack post and submission? Or are you saying that one can "think something is true with 100% certainty" without "believing" that it is true?

Because she might be attempting to be a rational open-minded individual and actually be seeking the truth.

Then why does it matter whether or not anyone assumes anything? If people are capable of accepting evidence against what they think is true, regardless of whether they previously had 100% certainty, then why should anyone avoid having 100% certainty?

It's not impossible because of a fundamental aspect of reality: change.

It is impossible by my own prior notion of "believe with zero doubt", which corresponds to assigning the event a Bayesian probability of exactly 1. By Bayes' theorem, if your prior probability of the event is 1, then your posterior probability of the event given any evidence must also be 1. Therefore, if your posterior probability is something other than 1 (i.e., you have some doubt after receiving the evidence), then your prior probability must not have been 1 (i.e., you must have had some amount of doubt even before receiving the evidence).
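To spell out the arithmetic behind that step (this is just the standard Bayes'-theorem calculation, writing H for the event and E for the evidence):

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}$$

If $P(H) = 1$, then $P(\neg H) = 0$: the second term in the denominator vanishes, and $P(H \mid E) = P(E \mid H)/P(E \mid H) = 1$ for any evidence E of nonzero probability. A prior of exactly 1 survives every possible observation unchanged.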

I have barely any understanding of your concept of doubt, and this discrepancy appears to have caused a massive disconnect.

No, I said I believed if they said X, then they would be wrong.

This was after I linked to them saying it:

But if common usage recognized your boundaries, then the dictionaries would be flat-out wrong to say that, e.g., to believe something is to assume it, suppose it, or hold it as an opinion (where an opinion is explicitly a belief less strong than positive knowledge).

I believe they are. dictionary.com says "believe" is "assume", but Merriam-Webster does not. One of them has to be wrong.

When you said, "I believe they are", were you not referring to the dictionaries being "flat-out wrong to say [those things]"? Or did the links I provided not show them saying those things?

Because if you flip the definitions they are entirely correct under my view.

How does this imply that his definitions are "wrong" when they are not flipped?

Even under your view "assume" is stronger than "suppose"

Where do I say that?

First, to make sure I'm not putting more words into your mouth: Would you say that most people outside of here would agree that when one assumes something, one cannot have any level of doubt about it?


I bombarded ChatGPT with questions about the matter, and everything aligned to my notion, for example "If Alice believes claim X is true with zero doubt, can she change her mind?" it answered "Yes", which is obvious to me.

That's not at all obvious to me. As it turns out, your notion of "believe with zero doubt" is very likely different than mine! So that I understand what your notion is: If, at a given point in time, Alice believes with zero possible doubt that the box contains nothing but a dog, then does she also believe with zero possible doubt that she will never receive unequivocal evidence otherwise? If so, does she believe there is a 0% chance that she will receive unequivocal evidence otherwise?

Alice believes claim X with zero doubt in one moment, but then receive evidence contradicting that belief (which was assumed in the first place), why wouldn't she change her mind?

The evidence doesn't unequivocally contradict her belief: it could be the case that the box contains only a dog, but she misheard where the meow came from, or the dog is able to make a meowing sound. If she was previously absolutely certain that a dog is in the box, then why wouldn't she adopt one of the alternative hypotheses compatible with both her assumption and the evidence?


How does this not align precisely to my notion? I didn't even use the term "assume" throughout the question, I used it only to verify the outcome.

By my prior notion of "believe with zero doubt", your prompt is vacuous: it is impossible that "Alice believes claim X is true with zero doubt" but also "changes her mind", because if she can change her mind, then she didn't actually have zero doubt. Under that notion, ChatGPT is logically permitted to output whatever it wants, since it is not consistently capable of detecting absurdities in its input.

But more practically speaking, to ChatGPT, "zero doubt" or "absolute certainty" can be far from absolute:

Alice has a box. Alice believes with absolutely zero doubt that the box contains only a dog. Then, Alice hears a meow coming from the box. Might she now believe that a cat is in the box?

Yes, if Alice hears a meow coming from the box, it would likely cause her to question her belief that only a dog is in the box and she may now believe that a cat is also in the box. The sound of a meow would be strong evidence that contradicts her previous belief that only a dog is in the box, and would likely lead her to update her belief.

Then did Alice truly have absolutely zero doubt in her previous belief?

It is difficult to say for certain without more information about Alice's thought process and the context of the situation. However, it is possible that Alice did not truly have absolutely zero doubt in her previous belief that the box contained only a dog. In general, it is difficult for anyone to be completely certain about anything, and it is possible that Alice had some level of uncertainty or doubt about her belief, even if she did not explicitly acknowledge it.

If Alice did truly have absolutely zero doubt in her previous belief, then how could strong but inconclusive evidence have caused her to question her belief?

If Alice truly had absolutely zero doubt in her previous belief that the box contained only a dog, then it would be unlikely for her to change her belief based on the sound of a meow alone, as it would not be strong enough evidence to overcome her absolute certainty. In this scenario, other evidences or logical reasoning would have to be provided in order to change her belief.

It's important to note that the concept of "absolute certainty" or "absolutely zero doubt" is a theoretical concept, and it's very rare in real-life scenarios, most of the time people have some level of uncertainty. Even in the case where Alice is convinced that only a dog is in the box, a sudden meow would cause her to question this belief, as it is not consistent with her previous belief, and would lead her to investigate and gather more information about the situation.

So whenever you tell ChatGPT that Alice has "zero doubt" or "absolute certainty", it may be inferring that you're probably mistaken or exaggerating (since many people exaggerate all the time), and that Alice is strongly but not absolutely convinced. That's my alternative explanation for the output you've posted.


No, I said: if a dictionary says that to believe something is to assume it, then I believe it's wrong. I did not say the dictionary is wrong, I said that I believe it is wrong.

The first time, you indeed said you believe that the dictionaries are wrong. But the second time, you said:

He replied that when you assume something, you're not entirely sure whether or not it's true, but when you suppose something, you have some kind of predetermined knowledge that it's true.

He is wrong: it's the other way around.

How is he "wrong" about his own notion of an assumption?

To me it said: «to "assume" something is to accept it as true without proof of evidence». That to me doesn't include doubt, because it's true a priori: it's just true.

So would you say that ChatGPT disagrees with your notion of "assuming" in my example? If not, then how could Alice change her mind from the indirect evidence, if she had zero doubt that there was only a dog in the box?

I don't have to show that my notion is shared by everyone, because I did not claim that, all I need to show is that your notion of "strong supposition" is not shared by everyone, and you yourself proved that.

You're calling people (like the dictionary author, or the second person I questioned) "wrong" when they say that you can "assume" something while still doubting it to some extent. Why are they "wrong", instead of being "right" about their own notion that is distinct from your notion?

There's a difference between most people and most people "here". My understanding of "assume" is in accordance with many dictionaries, for example: to take as granted or true.

And something that is "granted" is "assumed to be true", by the same dictionary. The definition is circular: it doesn't lead to your interpretation of "to assume" as "to believe true with absolutely zero possible doubt".

Besides, the dictionary argument can be taken in any direction. Per Dictionary.com, "to assume" is "to take for granted or without proof", "to take for granted" is "to consider as true or real", "to consider" is "to regard as or deem to be true", and "to regard as true" is "to judge true". That leads to the usage of the term by many here, where to make an assumption about something is to make a strong judgment about its nature, while still possibly holding some amount of doubt.

You draw strong boundaries between these epistemic terms. But if common usage recognized your boundaries, then the dictionaries would be flat-out wrong to say that, e.g., to believe something is to assume it, suppose it, or hold it as an opinion (where an opinion is explicitly a belief less strong than positive knowledge). That's why I suspect that your understanding of the terms is not aligned with common usage, since the dictionaries trample all over your boundaries.


Also, I think that "certainty" in a Bayesian context is best treated as a term of art, equivalent to "degree of belief": a measure of one's belief in the likelihood of an event. It's obviously incompatible with the everyday notion of something being certainly true, but just using the term of art in context doesn't mean one is confusing it with the general term. After all, mathematicians can talk about "fields" all the time without confusing them with grassy plains.

By checking whether or not the person considers the possibility of the claim being not necessarily true. And if not, whether or not the claim is substantiated by evidence or reason.

By "the claim being not necessarily true", are you referring to the possibility that the claim's originator is expressing a belief contrary to truth, or the possibility that the claim's recipient is interpreting the claim differently in such a way as to make it the received belief incorrect? The examples in your original post are of the latter, but I'd usually understand substantiation as a property of a belief having already been shared and correctly interpreted.

It would also seem that the former is far easier than the latter. If you know that you're correctly understanding the belief being expressed by a claim, then you can simply compare the belief to your own worldview, and doubt it according to how likely the alternatives appear to be true. But evaluating how much you may be misinterpreting a claim is a far different challenge: you have to map out the space of possible beliefs in the originator's mind that could have plausibly led to that particular claim, accounting for how the originator's thoughts might look far different from your own.

I suppose (not assume) that your question was rhetorical, and you actually believe I cannot answer it in truth, because you believe in every conversation all participants have to make assumptions all the time. But this is tentative, I do not actually know that, therefore I do not assume that's the case.

My main intent was to elucidate what you don't consider to be an assumption, to determine whether I've been misunderstanding your meaning of the term. Your separation of suppositions from assumptions appears to answer this question in the positive.

The fact that somebody appears to be making an assumption doesn't necessarily mean that he is.

How does one distinguish between someone making an assumption, and someone only appearing to be making an assumption? You have claimed that some statements by others contain assumptions, and you have claimed that some statements only contain suppositions that appear like assumptions. But I don't understand exactly how you're evaluating statements to determine this.

You do have a choice: don't make assumptions.

I suspect that this choice is impossible to consistently make. So that I can better understand what you're asking for, could you give me an example of a conversation in which one participant doesn't make any assumptions about the meaning of another?

Where did I "assume" that in my last comment?

You said, "The 'laws of arithmetic' that are relevant depend 100% on what arithmetic we are talking about," which is only meaningful under your usage of "laws of arithmetic" and does not apply to the term as I meant it in my original comment.

That's not an argument, you are just stating your personal position. You are free to do whatever you want, if you don't want to doubt a particular "unequivocal" claim, then don't. Your personal position doesn't contradict my claim in any way.

To quote myself:

there is no choice but to make assumptions of terms ordinarily having their plain meanings, to avoid an infinite regress of definitions used to clarify definitions.

To rephrase that, communication relies on at least some terms being commonly understood, since otherwise you'd reach an infinite regress. As a consequence, there must exist terms that have an unambiguous "default meaning" in the absence of clarification. But how do we decide which terms are unambiguous? Empirically, I can decide that a widespread term has an unambiguous default meaning if I have never heard anyone use the term contrary to that meaning in a general context, and if I have no particular evidence that other people are actively using an alternative meaning in a general context. I believe it reasonable to set the bar here, since any weaker criterion would result in the infinite-regress issue.

Because that's what skepticism demands. I assert that 100% certainty on anything is problematic, which is the reason why skepticism exists in the first place.

Sure, if someone writes "2 + 2 = 4", it isn't 100% certain that they're actually making a statement about the integers: perhaps they're completely innumerate and just copied the symbols out of a book because they look cool. I mean to say that it's so unlikely that they're referring to something other than integer arithmetic that it wouldn't be worth my time to entertain the thought, without any special evidence that they are (such as it being advertised as a "puzzle").

If you were to provide real evidence that people are using this notation to refer to something other than integer arithmetic in a general context, then I would be far more receptive to your point here.


Indeed, how do you know that your interlocutors are "100% certain" that they know what you mean by "2 + 2"? Perhaps they're "100% certain" that "2 + 2 = 4" by the rules of integer arithmetic, but they're independently 75% certain that you're messing with them, or setting up a joke.