iro84657

0 followers   follows 0 users
joined 2022 September 07 00:59:18 UTC
Verified Email
User ID: 906

It's not. Nevertheless, when you're willing to give yourself as many entities as you need to save your theory, you add nothing to the world's store of knowledge.

True, the cultural model has plenty of free variables around the creation, transmission, evolution, and effects of cultural factors, as well as their importance relative to temporary environmental factors. But while the HBD model of IQ as a driver of success focuses more on individuals, it has its own free variables around the mechanism of how general intelligence produces pro-social behavior. That is, since a society made entirely of high-IQ Machiavellian schemers wouldn't last very long, it requires that general intelligence tends to amplify a population's positive traits over its negative traits.

Yes, you can make this model. Can you, in principle, back it or refute it with evidence? If not, the model is vacuous. If you can.... well, does it fit with the evidence? I think it does not.

I'd argue that such a cultural model of societal success is no less vacuous than the HBD model, since both can plausibly explain the historical evidence: neither kind of model has truly been tested to an extent that it has made falsifiable predictions.

Regardless, the question we're trying to ask is, "Is it possible for a human cultural group to become persistently more or less successful than would be predicted from its members' IQ distribution, in the absence of some massive redistribution scheme (e.g., widespread affirmative action) biasing the results?" The cultural model would affirm this, and the stronger HBD models (that I'm aware of) would deny this.

The most direct experiment, of course, would be to abduct a random selection of infants from different genetic groups and get surrogate parents from different cultures to raise them in isolation from the outside world, wait a few generations to see whether the different cultures can maintain their success independently from the genetic groups, and repeat ad nauseam to account for random variation. But this is unethical and would take far more time than most of us would care to spend.

Perhaps a more plausible experiment to affirm the cultural model would be to find a cultural intervention to improve the success of some underperforming genetic group, then successfully implement it in the real world. Then, the question comes down to whether such an effective intervention exists and is practical. As I mentioned, the conservatives and Marxists in the U.S. have their own ideas of a proper intervention, but neither has been able to successfully implement it. A strong HBD model would deny that such an intervention exists, but the cultural model would be consistent with such an intervention existing but being impractical to implement. So an HBD model would have to take the position of the null hypothesis in such an experiment. But due to the sheer number of potential cultural interventions, it would take a lot of failed attempts to provide strong evidence in favor of an HBD model.

He literally said there was no possibility of X being true: "Do you accept the possibility that X may be true?" "No".

By X I suppose you refer to the statement "2 + 2 = 4 is not unequivocally true". Perhaps by the statement "it is possible that X is true" (which I'll call Y), you meant that "there exists a meaning of the statement X which is true". However, I believe he interpreted Y as something to the effect of, "Given the meaning M which I would ordinarily assign to the statement X, there exists a context in which M is true." It is entirely possible that the proposition he means by Y is unequivocally false, even though the proposition you mean by Y is unequivocally true: that is, he misinterpreted what you meant by Y.

In particular, it is my understanding that when you say X, you mean, "There exists a meaning of the statement '2 + 2 = 4' which is false." You demonstrate this in your original post, so that provides an example of a meaning of X which is true. But I believe that his meaning M of X is something to the effect of, "Given the meaning M´ which I would ordinarily assign to the statement '2 + 2 = 4' (i.e., a proposition about the integers or a compatible extension thereof), there exists a context in which M´ is false." Since the proposition "2 + 2 = 4" about the integers can be trivially proven true, he believes with certainty that M´ is unequivocally true, thus M is unequivocally false, thus "it is impossible that X is true" (by his own meaning).

(In fact, I still wouldn't say that his belief that M is false has probability 1, but it is about as close to 1 as it can get. It's just that to convince him that M is true, you'd need an even more trivial mathematical proof of ¬M´ which he can understand, and he believes with probability as-close-to-1-as-possible that such a counterproof does not exist, since otherwise his life is a lie and basically all of his reasoning is compromised.)


You are forgetting the context of this subthread. In this subthread we are not talking about what I mean; we are talking about the definition that one random stranger gave you, which I claimed goes contrary to your claim.

You claimed: «most people here were under the impression that by an "assumption" you meant a "strong supposition"».

In this subthread X is "strong supposition", it's your view that most people's definition of "assumption" is "strong supposition", you provided different examples of people you asked, and one of them gave you the exact opposite: that "supposition" was a "strong assumption". This is the opposite of what you claimed most people were under the impression of.

You keep forgetting the context of the claims you are making.

So be it. I'll grant that my claim there was made based on a hasty impression of the other comments, and I do not actually know for sure whether or not most people on this site inferred a meaning of your words precisely compatible with my earlier statement. But I did not make that claim for its own sake, but instead in service of my original argument. (In fact, most of what I've been saying has been intended to relate to my original argument, not to that particular claim. But I have not been at all clear about that; my apologies.)

Having thought about it a bit more, I'll defend a weaker position, which I believe is still sufficient for my original argument. Most people in general, when they hear someone say that a person "assumes" something, infer (in the absence of evidence otherwise) that what is most likely meant is that the person's state of mind about that thing lies within a particular set S, and S includes some states of mind where the person still has a bit of doubt about that thing.

Thus, if someone says a person "doesn't assume" something, most people would infer that they most likely mean that the person does not harbor any state of mind within S, and consequently, the person does not harbor any of the states of mind that are within S but include a level of doubt.

Would you say that by "not making assumptions", you specifically mean "not thinking things are true with zero possible doubt"? Because if so, then everyone whose inferred set S includes states of mind with nonzero doubt would have misinterpreted the message of your post, if they had not already found evidence of your actual meaning. Thus my real claim, that most people "aren't going to learn anything from your claims if you use your terminology without explaining it upfront" (which is an exaggeration: I mean that most people, just looking at your explanations in your post, are unlikely to learn what you apparently want them to learn).

I completely disagree with that statement. Most people cannot change their minds regardless of the evidence. In fact, in the other discussion I'm having on this site of late, the person is obviously holding p = 1 (zero room for doubt).

Perhaps they assign their belief a probability different than 1, but they don't consider your evidence very strong. But I can't say for certain, since I haven't seen the discussion in question. How do you know that your evidence is so strong that they would change their mind if they had any room for doubt?

That is what we are talking about: it's your view that most people ascribe the meaning of X; if a person ascribes the meaning opposite of X, that is opposite to your view.

Any term X has several possible meanings. When one says the term X, one generally has a particular meaning in mind. And when one hears the term X, one must determine which meaning the speaker is using, if one wishes to correctly understand what point is being made. Usually, one infers this from the surrounding text, alongside one's knowledge of which meanings are often used by other speakers in a similar context. But one can simultaneously infer one meaning in one speaker's words, infer another meaning in another speaker's words, and use an entirely different meaning in one's own words.

When you say that we should not "assume" something, it is my understanding that you mean that we should not think that something is true with zero possible doubt. It is also my understanding that you do not mean that we should never suppose something strongly with little evidence.

What I allege is that most people, when attempting to determine your meaning of "assume", do not rule out the latter meaning. And since most speakers, in most of their speech and writing, include the possibility of doubt in their meaning of "assume", most people are likely to incorrectly infer that you probably include the possibility of doubt in your meaning of "assume".

Therefore, when they determine your meaning of "not assume", they are likely to infer that you mean something closer to "not suppose something strongly with little evidence" than "not think that something is true with zero possible doubt". (It isn't relevant here whether they think either of "assume" or "suppose" is somewhat stronger than the other: what matters is that there exist certain states of mind including some level of doubt, and they incorrectly infer that by saying we should "not assume" things you mean that we should not hold any of those states of mind.)

I'm not saying that most people are unable to understand your terminology, or that your terminology is inherently wrong. I'm saying that most people aren't very familiar with your terms, and they're likely to infer meanings that are overly inclusive. This makes the inferred negations of your terms (e.g., "not assume") overly exclusive, which makes most people miss your point. Thus my original request that you clarify your terminology upfront.

They have zero doubts in their mind because most people don't see there's any doubt to be had.

In your view, is having doubt the result of a conscious consideration of whether one may be wrong? Or can one have doubt even before considering the matter?

And if it's true that under Bayes the probability of an event doesn't get updated if the prior is 1, regardless of the result, then that proves Bayes is a poor heuristic for a belief system.

How does this property prove that Bayes' theorem is a poor heuristic? Since most people can change their minds given enough evidence, a Bayesian would infer that it's rare (if even possible) for someone's prior probability to be exactly 1 in real life. What is the issue with the Bayesian statement that hardly anyone holds a prior probability of exactly 1?
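A quick numerical sketch of this property (my own illustration; the probabilities are made up for the example, not taken from the discussion):

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """One Bayesian update: return the posterior P(H|E) given the prior
    P(H) and the likelihoods of the evidence under H and not-H."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Overwhelming evidence against H: E is a million times likelier if H is false.
certain = update(1.0, 1e-6, 1.0)     # prior of exactly 1
almost = update(0.9999, 1e-6, 1.0)   # prior with a sliver of doubt

print(certain)  # a prior of exactly 1 cannot move, no matter the evidence
print(almost)   # even a 0.9999 prior collapses under strong enough evidence
```

The first posterior stays at exactly 1.0, while the second drops below 0.01: the slightest room for doubt lets sufficiently strong evidence dominate, which is exactly why a Bayesian would doubt that anyone's real prior is exactly 1.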

The links you provided showed one dictionary saying those things, therefore if I believe those dictionaries saying those things are wrong, I believe that one dictionary saying those things is wrong.

The links point to both dictionaries in question, not just one.

I explained that in the very next sentence.

Under my own notion, that I use in everyday life, "to assume" is not stronger than "to suppose", so my question still stands. How is the opposite statement being correct under your definitions relevant to his statement about his own definitions being "wrong" per se? What bearing do your definitions have on the intrinsic correctness of his definitions?

You literally said: «since most people here were under the impression that by an "assumption" you meant a "strong supposition"».

First, I attributed that to "most people here", not myself. Second, I was talking about their impression of your meaning of an "assumption", not their own prior notions of an "assumption". Personally, my prior notion places no relative strength between an "assumption" and a "supposition"; I would not hazard to guess how strong others' prior notions of an "assumption" are without asking them.

No, I believe most people outside of here would agree that when one assumes something it can mean that one doesn't have any level of doubt about it.

If someone reads your words, "Most people assume we are dealing with the standard arithmetic" (from your 2 + 2 post), do you believe that they are likely to understand that you mean, "Most people have zero doubt in their minds that we are dealing with the standard arithmetic"?

Yes, if that's what she believes, which the word "assume" does not necessarily imply.

On the submission for your 2 + 2 Substack post, you write:

Challenging the claim that 2+2 is unequivocally 4 is one of my favorites to get people to reconsider what they think is true with 100% certainty.

Are you saying that "assuming something is true" is different from "thinking something is true with 100% certainty", and that you are making two different points in your Substack post and submission? Or are you saying that one can "think something is true with 100% certainty" without "believing" that it is true?

Because she might be attempting to be a rational open-minded individual and actually be seeking the truth.

Then why does it matter whether or not anyone assumes anything? If people are capable of accepting evidence against what they think is true, regardless of whether they previously had 100% certainty, then why should anyone avoid having 100% certainty?

It's not impossible because of a fundamental aspect of reality: change.

It is impossible by my own prior notion of "believe with zero doubt", which corresponds to assigning the event a Bayesian probability equivalent to 1. By Bayes' theorem, if your prior probability of the event is 1, then your posterior probability of the event given any evidence must also be 1. Therefore, if your posterior probability is something other than 1 (i.e., you have some doubt after receiving the evidence), then your prior probability must not have been 1 (i.e., you must have had some amount of doubt even before receiving the evidence).
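Spelling that step out symbolically (this is just the standard form of Bayes' theorem, nothing beyond the argument above):

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,\bigl(1 - P(H)\bigr)}
```

Setting P(H) = 1 makes the second term of the denominator vanish, so P(H|E) = P(E|H)/P(E|H) = 1 for any evidence E with P(E|H) > 0; conversely, any posterior below 1 forces the prior to have been below 1.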

I have barely any understanding of your concept of doubt, and this discrepancy appears to have caused a massive disconnect.

No, I said I believed if they said X, then they would be wrong.

This was after I linked to them saying it:

But if common usage recognized your boundaries, then the dictionaries would be flat-out wrong to say that, e.g., to believe something is to assume it, suppose it, or hold it as an opinion (where an opinion is explicitly a belief less strong than positive knowledge).

I believe they are. dictionary.com says "believe" is "assume", but Merriam-Webster does not. One of them has to be wrong.

When you said, "I believe they are", were you not referring to the dictionaries being "flat-out wrong to say [those things]"? Or did the links I provided not show them saying those things?

Because if you flip the definitions they are entirely correct under my view.

How does this imply that his definitions are "wrong" when they are not flipped?

Even under your view "assume" is stronger than "suppose"

Where do I say that?

First, to make sure I'm not putting more words into your mouth: Would you say that most people outside of here would agree that when one assumes something, one cannot have any level of doubt about it?


I bombarded ChatGPT with questions about the matter, and everything aligned with my notion. For example, to "If Alice believes claim X is true with zero doubt, can she change her mind?" it answered "Yes", which is obvious to me.

That's not at all obvious to me. As it turns out, your notion of "believe with zero doubt" is very likely different than mine! So that I understand what your notion is: If, at a given point in time, Alice believes with zero possible doubt that the box contains nothing but a dog, then does she also believe with zero possible doubt that she will never receive unequivocal evidence otherwise? If so, does she believe there is a 0% chance that she will receive unequivocal evidence otherwise?

Alice believes claim X with zero doubt in one moment, but then receives evidence contradicting that belief (which was assumed in the first place); why wouldn't she change her mind?

The evidence doesn't unequivocally contradict her belief: it could be the case that the box contains only a dog, but she misheard where the meow came from, or the dog is able to make a meowing sound. If she was previously absolutely certain that a dog is in the box, then why wouldn't she adopt one of the alternative hypotheses compatible with both her assumption and the evidence?


How does this not align precisely to my notion? I didn't even use the term "assume" throughout the question, I used it only to verify the outcome.

By my prior notion of "believe with zero doubt", your prompt is vacuous, since it is impossible that "Alice believes claim X is true with zero doubt" but also "changes her mind", since if she can change her mind, then she didn't actually have zero doubt. Under that notion, ChatGPT is logically permitted to output whatever it wants, since it is not consistently capable of detecting absurdities in its input.

But more practically speaking, to ChatGPT, "zero doubt" or "absolute certainty" can be far from absolute:

Alice has a box. Alice believes with absolutely zero doubt that the box contains only a dog. Then, Alice hears a meow coming from the box. Might she now believe that a cat is in the box?

Yes, if Alice hears a meow coming from the box, it would likely cause her to question her belief that only a dog is in the box and she may now believe that a cat is also in the box. The sound of a meow would be strong evidence that contradicts her previous belief that only a dog is in the box, and would likely lead her to update her belief.

Then did Alice truly have absolutely zero doubt in her previous belief?

It is difficult to say for certain without more information about Alice's thought process and the context of the situation. However, it is possible that Alice did not truly have absolutely zero doubt in her previous belief that the box contained only a dog. In general, it is difficult for anyone to be completely certain about anything, and it is possible that Alice had some level of uncertainty or doubt about her belief, even if she did not explicitly acknowledge it.

If Alice did truly have absolutely zero doubt in her previous belief, then how could strong but inconclusive evidence have caused her to question her belief?

If Alice truly had absolutely zero doubt in her previous belief that the box contained only a dog, then it would be unlikely for her to change her belief based on the sound of a meow alone, as it would not be strong enough evidence to overcome her absolute certainty. In this scenario, other evidences or logical reasoning would have to be provided in order to change her belief.

It's important to note that the concept of "absolute certainty" or "absolutely zero doubt" is a theoretical concept, and it's very rare in real-life scenarios, most of the time people have some level of uncertainty. Even in the case where Alice is convinced that only a dog is in the box, a sudden meow would cause her to question this belief, as it is not consistent with her previous belief, and would lead her to investigate and gather more information about the situation.

So whenever you tell ChatGPT that Alice has "zero doubt" or "absolute certainty", it may be inferring that you're probably mistaken or exaggerating (since many people exaggerate all the time), and that Alice is strongly but not absolutely convinced. That's my alternative explanation for the output you've posted.


No, I said: if a dictionary says that to believe something is to assume it, then I believe it's wrong. I did not say the dictionary is wrong, I said that I believe it is wrong.

The first time, you indeed said you believe that the dictionaries are wrong. But the second time, you said:

He replied that when you assume something, you're not entirely sure whether or not it's true, but when you suppose something, you have some kind of predetermined knowledge that it's true.

He is wrong: it's the other way around.

How is he "wrong" about his own notion of an assumption?

To me it said: «to "assume" something is to accept it as true without proof or evidence». That to me doesn't include doubt, because it's true a priori: it's just true.

So would you say that ChatGPT disagrees with your notion of "assuming" in my example? If not, then how could Alice change her mind from the indirect evidence, if she had zero doubt that there was only a dog in the box?

I don't have to show that my notion is shared by everyone, because I did not claim that; all I need to show is that your notion of "strong supposition" is not shared by everyone, and you yourself proved that.

You're calling people (like the dictionary author, or the second person I questioned) "wrong" when they say that you can "assume" something while still doubting it to some extent. Why are they "wrong", instead of being "right" about their own notion that is distinct from your notion?

There's a difference between most people and most people "here". My understanding of "assume" is in accordance with many dictionaries, for example: to take as granted or true.

And something that is "granted" is "assumed to be true", by the same dictionary. The definition is circular: it doesn't lead to your interpretation of "to assume" as "to believe true with absolutely zero possible doubt".

Besides, the dictionary argument can be taken in any direction. Per Dictionary.com, "to assume" is "to take for granted or without proof", "to take for granted" is "to consider as true or real", "to consider" is "to regard as or deem to be true", and "to regard as true" is "to judge true". That leads to the usage of the term by many here, where to make an assumption about something is to make a strong judgment about its nature, while still possibly holding some amount of doubt.

You draw strong boundaries between these epistemic terms. But if common usage recognized your boundaries, then the dictionaries would be flat-out wrong to say that, e.g., to believe something is to assume it, suppose it, or hold it as an opinion (where an opinion is explicitly a belief less strong than positive knowledge). That's why I suspect that your understanding of the terms is not aligned with common usage, since the dictionaries trample all over your boundaries.


Also, I think that "certainty" in a Bayesian context is best treated as a term of art, equivalent to "degree of belief": a measure of one's belief in the likelihood of an event. It's obviously incompatible with the everyday notion of something being certainly true, but just using the term of art in context doesn't mean one is confusing it with the general term. After all, mathematicians can talk about "fields" all the time without confusing them with grassy plains.

By checking whether or not the person considers the possibility of the claim being not necessarily true. And if not, whether or not the claim is substantiated by evidence or reason.

By "the claim being not necessarily true", are you referring to the possibility that the claim's originator is expressing a belief contrary to truth, or the possibility that the claim's recipient is interpreting the claim differently in such a way as to make the received belief incorrect? The examples in your original post are of the latter, but I'd usually understand substantiation as a property of a belief having already been shared and correctly interpreted.

It would also seem that the former is far easier than the latter. If you know that you're correctly understanding the belief being expressed by a claim, then you can simply compare the belief to your own worldview, and doubt it according to how likely the alternatives appear to be true. But evaluating how much you may be misinterpreting a claim is a far different challenge: you have to map out the space of possible beliefs in the originator's mind that could have plausibly led to that particular claim, accounting for how the originator's thoughts might look far different from your own.

I suppose (not assume) that your question was rhetorical, and you actually believe I cannot answer it in truth, because you believe in every conversation all participants have to make assumptions all the time. But this is tentative, I do not actually know that, therefore I do not assume that's the case.

My main intent was to elucidate what you don't consider to be an assumption, to determine whether I've been misunderstanding your meaning of the term. Your separation of suppositions from assumptions appears to answer this question in the positive.

The fact that somebody appears to be making an assumption doesn't necessarily mean that he is.

How does one distinguish between someone making an assumption, and someone only appearing to be making an assumption? You have claimed that some statements by others contain assumptions, and you have claimed that some statements only contain suppositions that appear like assumptions. But I don't understand exactly how you're evaluating statements to determine this.

You do have a choice: don't make assumptions.

I suspect that this choice is impossible to consistently make. So that I can better understand what you're asking for, could you give me an example of a conversation in which one participant doesn't make any assumptions about the meaning of another?

Where did I "assume" that in my last comment?

You said, "The 'laws of arithmetic' that are relevant depend 100% on what arithmetic we are talking about," which is only meaningful under your usage of "laws of arithmetic" and does not apply to the term as I meant it in my original comment.

That's not an argument, you are just stating your personal position. You are free to do whatever you want, if you don't want to doubt a particular "unequivocal" claim, then don't. Your personal position doesn't contradict my claim in any way.

To quote myself:

there is no choice but to make assumptions of terms ordinarily having their plain meanings, to avoid an infinite regress of definitions used to clarify definitions.

To rephrase that, communication relies on at least some terms being commonly understood, since otherwise you'd reach an infinite regress. As a consequence, there must exist terms that have an unambiguous "default meaning" in the absence of clarification. But how do we decide which terms are unambiguous? Empirically, I can decide that a widespread term has an unambiguous default meaning if I have never heard anyone use the term contrary to that meaning in a general context, and if I have no particular evidence that other people are actively using an alternative meaning in a general context. I believe it reasonable to set the bar here, since any weaker criterion would result in the infinite-regress issue.

Because that's what skepticism demands. I assert that 100% certainty on anything is problematic, which is the reason why skepticism exists in the first place.

Sure, if someone writes "2 + 2 = 4", it isn't 100% certain that they're actually making a statement about the integers: perhaps they're completely innumerate and just copied the symbols out of a book because they look cool. I mean to say that it's so unlikely that they're referring to something other than integer arithmetic that it wouldn't be worth my time to entertain the thought, without any special evidence that they are (such as it being advertised as a "puzzle").

If you were to provide real evidence that people are using this notation to refer to something other than integer arithmetic in a general context, then I would be far more receptive to your point here.


Indeed, how do you know that your interlocutors are "100% certain" that they know what you mean by "2 + 2"? Perhaps they're "100% certain" that "2 + 2 = 4" by the rules of integer arithmetic, but they're independently 75% certain that you're messing with them, or setting up a joke.

There are no axioms that apply to all arithmetics. There are no such "laws".

Are you getting hung up on my use of the term "laws of arithmetic"? I'm not trying to say that there's a single set of rules that applies to all systems of arithmetic. I'm using "laws of arithmetic" as a general term for the class containing each individual system of arithmetic's set of rules. You'd probably call it the "laws of each arithmetic". The "laws of one arithmetic" (by your definition) can share common features with the "laws of another arithmetic" (by your definition), so it makes sense to talk about "laws of all the different arithmetics" as a class. I've just personally shortened this to the "laws of arithmetic" because I don't recognize your usage of "arithmetic" as a countable noun.
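As a concrete illustration of two members of that class (my own example, not from the thread): the same `+` notation obeys different laws in ordinary integer arithmetic and in arithmetic modulo 4.

```python
# Ordinary integer arithmetic: one member of the class of "laws of arithmetic".
assert 2 + 2 == 4

# Arithmetic modulo 4: a different member of the class, sharing the notation
# but with its own laws, under which 2 + 2 wraps around to 0.
assert (2 + 2) % 4 == 0

# Both systems are internally consistent; "laws of arithmetic" as used above
# names the whole class of such rule sets, not any single one of them.
print("both systems hold")
```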

Also, you seem to be conflating "integer arithmetic" with normal arithmetic. 2.5 + 2.1 is not integer arithmetic, and yet follows the traditional arithmetic everyone knows. I'm not even sure if normal arithmetic has a standard name, I just call it "normal arithmetic" to distinguish it from all the other arithmetics. Integer arithmetic is just a subset.

I was focusing on integer arithmetic since that was sufficient to cover your original statement. The natural generalization is group or field arithmetic to define the operations, and real-number arithmetic (a specialization of field arithmetic) to define the field elements. The notation associated with integer arithmetic is the same as the notation associated with real-number arithmetic, since the integers under addition or multiplication are a subgroup of the real numbers.


To repeat my actual argument, I assert that, without prior clarification, almost no one uses the notation associated with real-number arithmetic in a way contrary to real-number arithmetic, which implies that almost no one uses it in a way contrary to integer arithmetic. Therefore, I refuse to entertain the notion that someone is actually referring to some system of arithmetic incompatible with real-number arithmetic when they use the notation associated with real-number arithmetic, unless they first clarify this.

But the big "advantage" of the cultural explanation is it's difficult enough to disentangle it from genetics that it allows HBD to be unfalsifiably denied.

While it's true that disentangling cultural factors is difficult when trying to explain the overall success of a group, it's a very big mistake to take this as active evidence against culture's importance. I'd also put myself into the "mostly cultural, somewhat genetic" camp. To me, none of the current evidence can plausibly refute the possibility of a society with a common culture in which no genetic group is far more or less successful than the others, with the genetic factors only showing up as numerical discrepancies.

In other words, under this model, even if pure HBD explains some differences in group outcomes, it does not explain the vast differences in poverty, criminality, etc., seen in our current society. Explanations based on cultural coincidence have plenty of well-known justifications for these, such as past prejudice resulting in persistent negative outcomes, or groups facing hardship becoming more successful through cultural selection. Why shouldn't the pro-HBD crowd have to similarly justify its position that a higher-IQ population (either on average or in the upper tail) will almost invariably result in a far more successful culture?

I have no idea what you're arguing or advocating for in the rest of your reply - something about how if the world has surprising aspects that could change everything, that's probably bad and a stressful situation to be in? I agree, but I'm still going to roll up my sleeves and try to reason and plan, anyways.

Of course, that's what you do if you're sane, and I wouldn't suggest anything different. It's more just a feeling of frustration toward most people in these circles, that they hardly seem to find an iota of value in living in a world not full of surprises on a fundamental level. That is, if I had a choice between a fundamentally unsurprising world like the present one and a continually surprising world with [insert utopian characteristics], then I'd choose the former every time (well, as long as it meets a minimum standard of not everyone being constantly tortured or whatever); I feel like no utopian pleasures are worth the infinite risk such a world poses.

(And that goes back to the question of what is a utopia, and what is so good about it? Immortality? Growing the population as large as possible? Total freedom from physical want? Some impossibly amazing state of mind that we speculate is simply better in every way? I'm not entirely an anti-utopian Luddite, I acknowledge that such things might be nice, but they're far from making up for the inherent risk posed if it were even possible to implement them via magical means.)

As a corollary, I'd feel much worse about an AI apocalypse through known means than an AI apocalypse through magical means, since the former would at least have been our own fault for not properly securing the means of mass destruction.

My problem is really with your "there never was, and never will be" sentiment: I believe that only holds under the premise of the universe containing future surprises. I believe in fates far worse than death, but thankfully, in the unsurprising world that is our present one, they can't really be implemented at any kind of scale. A surprising world would be bound by no such limitations.

Many definitions on all dictionaries are circular. Language is not an easy thing, which is why AI still has not been able to master it.

Sure, my point is just that your meaning can't be supported by that definition alone. Even if we say that "to assume" is the same as "to take as granted or true", that isn't sufficient to refute my notion that in common usage, neither "to assume" nor "to take as granted or true" necessarily implies zero possible doubt.

No, that's not what the definition is saying. "[[[judge true] or deem to be true] as true or real] or without proof". There is no possibility of doubt. It's judged/deemed/considered to be true.

That particular dictionary says the exact opposite of what you're saying. To "judge" is "to infer, think, or hold as an opinion; conclude about or assess" (def. 10), and an "opinion" is "a belief or judgment that rests on grounds insufficient to produce complete certainty" (emphasis mine; notice how its author thinks one can be uncertain about a judgment?). So if you want a dictionary to support you on that, you'll have to find another dictionary.

I believe they are. dictionary.com says "believe" is "assume", but Merriam-Webster does not. One of them has to be wrong.

That's the whole reason dictionaries exist: people disagree.

Or perhaps both dictionaries are sometimes correct, sometimes incorrect, and sometimes partially correct, since in real life people can have subtly or obviously different understandings of terms depending on the context. That's the whole thesis of "The Categories Were Made for Man, Not Man for the Categories": nearly all our categories are fuzzy and ill-defined, but they're still useful enough that we talk about them anyway. So in general usage, people don't usually resolve ambiguity by refining their terminology (since hardly anyone else would recognize it), but instead by inserting enough qualifications and explanations that their point hopefully gets across to most of the audience.


BTW. I used ChatGPT and asked it if it saw any difference between "assume" and "suppose", and it 100% said exactly what is my understanding.

I asked ChatGPT the question, and the interpretation it produced is certainly far less strong than your standard of "zero possible doubt" regarding an assumption:

What is the difference between assuming something and supposing something?

Assuming and supposing are similar in that they both involve accepting something as true without proof. However, "assuming" often carries a connotation of confidently accepting something as true, while "supposing" suggests tentativeness or uncertainty. For example, "I assumed the train would be on time" implies a level of confidence, while "I supposed the train would be on time" implies some level of doubt. So, in general assuming is more of a confident and sure statement, supposing is more of a tentative and uncertain statement.

I wouldn't say that being "confident" about something implies that you necessarily have zero possible doubt. But even if you disagree on that, ChatGPT doesn't act on such a strict definition in practice. For instance, it produced the following exchange:

Alice has a box. Alice assumes that the box only contains a dog. What does Alice think is in the box?

Alice thinks that there is a dog in the box.

Alice hears a meow coming from the box. What does Alice think is in the box now?

Since Alice hears a meow coming from the box, she may now think that there is a cat in the box instead of a dog. Her assumption of only a dog in the box would be challenged by the new information of the meow.

If Alice had absolutely zero doubt that the box contained a dog, then her belief could not be challenged in that way: she'd have to conclude that the dog can meow, or that the meow came from outside the box.


Since I'm not one to trust ChatGPT's output to be representative of anything, I decided to ask some people in real life about it.

First, I asked a friend, "What do you think is the difference between assuming something and supposing something?" He replied that the difference is that you assume something before it occurs, but you suppose it while it's occurring or after it occurs.

I asked the same question to a stranger at the bus stop. He replied that when you assume something, you're not entirely sure whether or not it's true, but when you suppose something, you have some kind of predetermined knowledge that it's true.

Finally, I asked the same question to a stranger in a hallway. After several seconds of thought, she replied that she had no clue, then her friend chimed in to say she also had no clue.


ChatGPT, the dictionaries I've checked, and the ordinary people I've asked all give different definitions of "assume" and "suppose", none of which include your standard of zero possible doubt in order to assume something. Therefore, I have strong reason to believe that in common usage, the terms have no fixed meaning beyond "to accept as true without proof"; all else is vague connotation that can be overridden by context.

What evidence do you have that common usage recognizes your hard boundary, so hard that to cross it is to be unambiguously incorrect?

Personally, I find that if I get little sleep one night (or no sleep at all), then I just get really drowsy the following afternoon, but recover within a couple hours. How awake I feel in the morning seems to mostly depend on how regular I keep my sleep schedule.

If you publish the notebook under the belief that people will execute it, then you would not be protected. Intent doesn't really care about how direct or indirect you make the implementation; all that changes is the difficulty of proving it.

That scenario also makes sense. It fits with the general concept that a superintelligent hostile AGI (if one is possible) would use current or near-future technology at the outset for security, instead of jumping straight to sci-fi weaponry that we aren't even close to inventing yet. Of course, all of this depends on the initial breach being detectable; if the AGI could secretly act in the outside world for an extended time, then it could perform all the R&D it needs. How easy it would be to shut down if detected would probably depend on how quickly it could decentralize its functions.

The claims of hidden deaths in particular seem to come entirely from Gøtzsche. The rest of the sources mainly discuss the replication crisis in medical efficacy, alongside their various preferred solutions. Marinos blames the authorities and the medical profession for making decisions based on flawed research to further their own ends, against the interest of the public. Personally, I think that Marinos takes his claims of conspiracy much farther than the evidence would justify. If a reader holds Scott's evaluation of orthodox medical information as generally trustworthy (modulo regulatory friction preventing effective drugs from being sold and promising drugs from being tested, and new drugs' efficacy relative to their predecessors being oversold), then this post in particular isn't going to change their mind: beyond the standard replication-crisis material, it's mostly an appeal to heterodox authorities such as Gøtzsche and Charlton.

I currently hold a similar position wrt. efficacy vs. active harm. The claims of drugs being actively harmful to the population seem to come mostly from Gøtzsche's work. I do not know whether or by how much he may have exaggerated these claims. In the meantime, here are all the references on harm I could find in this post:

On BIA 10-2474:

Butler, D., & Callaway, E. (2016, January 21). Scientists in the dark after French clinical trial proves fatal. Nature, 529(7586), 263–264. https://doi.org/10.1038/nature.2016.19189

On fialuridine:

Honkoop, P., Scholte, H. R., de Man, R. A., & Schalm, S. W. (1997). Mitochondrial injury: Lessons from the fialuridine trial. Drug Safety, 17(1), 1–7. https://doi.org/10.2165/00002018-199717010-00001

On TGN1412:

Attarwala, H. (2010). TGN1412: From discovery to disaster. Journal of Young Pharmacists, 2(3), 332–336. https://doi.org/10.4103/0975-1483.66810

Wadman, M. (2006, March 23). London's disastrous drug trial has serious side effects for research. Nature, 440(7083), 388–389. https://doi.org/10.1038/440388a

The bulk of Peter C. Gøtzsche's claims (which probably contain several more references):

Gøtzsche, P. C. (2013). Deadly medicines and organized crime: How big pharma has corrupted healthcare. CRC Press. https://doi.org/10.1201/9780429084034

As it happens, I found that Grote usage a couple hours after my initial message. Note that the version you linked to is the 1851 3rd edition; the only 1st-edition scan I could find on IA is missing the title page but otherwise seems intact.

On October 25, 2020, I tried my hand at the prediction game, registering a prediction elsewhere:

Supposing that Joe Biden is unambiguously held by the mainstream media to have won the 2020 election, Donald Trump will accept his defeat by December 7, 2020, and will leave the White House on January 20, 2021, with 96% probability.

It had been clear by then that the election results would be a mess, but I'd been strongly convinced by the narrative that Trump would make a ruckus for a few weeks to appease his supporters, then lie low until he runs in 2024. Needless to say, I was very surprised when he kept contesting the results well past the Electoral College vote in December; I accepted its legitimacy as coming directly from the Constitution, and I'd thought Trump would similarly respect it. I suppose he simply isn't as much of a traditionalist as I'd judged him to be, given MAGA and all that.

Anyway, being disillusioned, I stopped keeping track of anything Trump-related after January 2021. But given that he apparently intends to run again, does anyone have any good, informative summaries of what he's been up to since then?

My beliefs are binary. Either I believe in something or I don't. I believe everyone's beliefs are like that. But people who follow Bayesian thinking confuse certainty with belief.

In your view, is "believing" something equivalent to supposing it with 100% certainty (or near-100% certainty)?

I have a strong suspicion that your epistemic terminology is very different from most other people's, and they aren't going to learn anything from your claims if you use your terminology without explaining it upfront. For instance, people may have been far more receptive to your "2 + 2" post if you'd explained what you mean by an "assumption", since most people here were under the impression that by an "assumption" you meant a "strong supposition". So it's hard to tell what you mean by "people who follow Bayesian thinking confuse certainty with belief" if we misunderstand what you mean by "certainty" or "belief". Is a "belief" a kind of "supposition", or is it something else entirely?

The "laws of arithmetic" that are relevant depend 100% on what arithmetic we are talking about, therefore it's imperative to know which arithmetic we are talking about.

Then please stop assuming that my uncountable usage of "the concept of arithmetic in general" in that sentence is secretly referring to your countable idea of "a single arithmetic". I've clarified my meaning twice now; I'd appreciate it if you actually responded to my argument instead of repeatedly hammering on that initial miscommunication.

People assume it's the normal arithmetic and cannot possibly be any other one. There is zero doubt in their minds, and that's the problem I'm pointing out.

Why should there be any doubt in their minds, if other systems of arithmetic are never denoted with that notation without prior clarification?
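As a concrete illustration (my own example, not from the thread): when a nonstandard arithmetic is intended, the notation itself conventionally carries the clarification, as with the explicit modulus annotation in modular arithmetic.

```latex
% Modular arithmetic flags its system explicitly with the modulus:
2 + 2 \equiv 1 \pmod{3},
\qquad
\text{whereas unannotated } 2 + 2 = 4
\text{ is read in } \mathbb{Z} \subset \mathbb{R}.
```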