iro84657

0 followers   follows 0 users
joined 2022 September 07 00:59:18 UTC
Verified Email
User ID: 906


I'd concur that this is more of an annoying semantic trick than anything else. It is never denied that 2 + 2 = 4 within the group of integers under addition (or a group containing it as a subgroup), a statement that the vast majority of people would know perfectly well. Instead, you just change the commonly understood meaning of one or more of the symbols "2", "4", "+", or "=", without giving any indication of this. Most people consider the notation of integer arithmetic to be unambiguous in a general context, so for this to make any sense, you'd have to establish that the alternative meaning is so widespread as to require the notation to always be disambiguated.

(There's also the epistemic idea that we can't know that 2 + 2 = 4 within the integers with complete certainty, since we could all just be getting fooled every time we read a supposedly correct argument. But this isn't really helpful without any evidence, since the absence of a universal conspiracy about a statement so trivial should be taken as the null hypothesis. It also isn't relevant to the statement being untrue in your sense, since it's no less certain than any other knowledge about the external world.)
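The semantic trick above can be made concrete: keep the symbols "2", "4", and "+", but quietly swap in a different structure. A minimal Python sketch, where the mod-3 reading is just one hypothetical alternative meaning (no one actually uses it unannounced):

```python
# Under the ordinary integer reading of "2", "4", "+", and "=":
assert 2 + 2 == 4

# Reinterpret "+" as addition in the integers modulo 3 (a hypothetical
# alternative meaning for the same symbols), and the identity fails:
assert (2 + 2) % 3 == 1
```

Both statements are true; they simply aren't about the same operation. The dispute is over which operation the bare notation defaults to, not over any arithmetic fact.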

My beliefs are binary. Either I believe in something or I don't. I believe everyone's beliefs are like that. But people who follow Bayesian thinking confuse certainty with belief.

In your view, is "believing" something equivalent to supposing it with 100% certainty (or near-100% certainty)?

I have a strong suspicion that your epistemic terminology is very different from most other people's, and they aren't going to learn anything from your claims if you use your terminology without explaining it upfront. For instance, people may have been far more receptive to your "2 + 2" post if you'd explained what you mean by an "assumption", since most people here were under the impression that by an "assumption" you meant a "strong supposition". So it's hard to tell what you mean by "people who follow Bayesian thinking confuse certainty with belief" if we misunderstand what you mean by "certainty" or "belief". Is a "belief" a kind of "supposition", or is it something else entirely?

There's a difference between most people and most people "here". My understanding of "assume" is in accordance with many dictionaries, for example: to take as granted or true.

And something that is "granted" is "assumed to be true", by the same dictionary. The definition is circular: it doesn't lead to your interpretation of "to assume" as "to believe true with absolutely zero possible doubt".

Besides, the dictionary argument can be taken in any direction. Per Dictionary.com, "to assume" is "to take for granted or without proof", "to take for granted" is "to consider as true or real", "to consider" is "to regard as or deem to be true", and "to regard as true" is "to judge true". That leads to the usage of the term by many here, where to make an assumption about something is to make a strong judgment about its nature, while still possibly holding some amount of doubt.

You draw strong boundaries between these epistemic terms. But if common usage recognized your boundaries, then the dictionaries would be flat-out wrong to say that, e.g., to believe something is to assume it, suppose it, or hold it as an opinion (where an opinion is explicitly a belief less strong than positive knowledge). That's why I suspect that your understanding of the terms is not aligned with common usage, since the dictionaries trample all over your boundaries.


Also, I think that "certainty" in a Bayesian context is best treated as a term of art, equivalent to "degree of belief": a measure of one's belief in the likelihood of an event. It's obviously incompatible with the everyday notion of something being certainly true, but just using the term of art in context doesn't mean one is confusing it with the general term. After all, mathematicians can talk about "fields" all the time without confusing them with grassy plains.

Many definitions on all dictionaries are circular. Language is not an easy thing, which is why AI still has not been able to master it.

Sure, my point is just that your meaning can't be supported by that definition alone. Even if we say that "to assume" is the same as "to take as granted or true", that isn't sufficient to refute my notion that in common usage, neither "to assume" nor "to take as granted or true" necessarily implies zero possible doubt.

No, that's not what the definition is saying. "[[[judge true] or deem to be true] as true or real] or without proof". There is no possibility of doubt. It's judged/deemed/considered to be true.

That particular dictionary says the exact opposite of what you're saying. To "judge" is "to infer, think, or hold as an opinion; conclude about or assess" (def. 10), and an "opinion" is "a belief or judgment that rests on grounds insufficient to produce complete certainty" (emphasis mine; notice how its author thinks one can be uncertain about a judgment?). So if you want a dictionary to support you on that, you'll have to find another dictionary.

I believe they are. dictionary.com says "believe" is "assume", but Merriam-Webster does not. One of them has to be wrong.

That's the whole reason dictionaries exist: people disagree.

Or perhaps both dictionaries are sometimes correct, sometimes incorrect, and sometimes partially correct, since in real life people can have subtly or obviously different understandings of terms depending on the context. That's the whole thesis of "The Categories Were Made for Man, Not Man for the Categories": nearly all our categories are fuzzy and ill-defined, but they're still useful enough that we talk about them anyway. So in general usage, people don't usually resolve ambiguity by refining their terminology (since hardly anyone else would recognize it), but instead by inserting enough qualifications and explanations that their point hopefully gets across to most of the audience.


BTW, I used ChatGPT and asked it if it saw any difference between "assume" and "suppose", and what it said matched my understanding 100%.

I asked ChatGPT the question, and the interpretation it produced is certainly far less strong than your standard of "zero possible doubt" regarding an assumption:

What is the difference between assuming something and supposing something?

Assuming and supposing are similar in that they both involve accepting something as true without proof. However, "assuming" often carries a connotation of confidently accepting something as true, while "supposing" suggests tentativeness or uncertainty. For example, "I assumed the train would be on time" implies a level of confidence, while "I supposed the train would be on time" implies some level of doubt. So, in general assuming is more of a confident and sure statement, supposing is more of a tentative and uncertain statement.

I wouldn't say that being "confident" about something implies that you necessarily have zero possible doubt. But even if you disagree on that, ChatGPT doesn't act on such a strict definition in practice. For instance, it produced the following exchange:

Alice has a box. Alice assumes that the box only contains a dog. What does Alice think is in the box?

Alice thinks that there is a dog in the box.

Alice hears a meow coming from the box. What does Alice think is in the box now?

Since Alice hears a meow coming from the box, she may now think that there is a cat in the box instead of a dog. Her assumption of only a dog in the box would be challenged by the new information of the meow.

If Alice had absolutely zero doubt that the box contained a dog, then her belief could not be challenged in that way: she'd have to conclude that the dog can meow, or that the meow came from outside the box.


Since I'm not one to trust ChatGPT's output to be representative of anything, I decided to ask some people in real life about it.

First, I asked a friend, "What do you think is the difference between assuming something and supposing something?" He replied that the difference is that you assume something before it occurs, but you suppose it while it's occurring or after it occurs.

I asked the same question to a stranger at the bus stop. He replied that when you assume something, you're not entirely sure whether or not it's true, but when you suppose something, you have some kind of predetermined knowledge that it's true.

Finally, I asked the same question to a stranger in a hallway. After several seconds of thought, she replied that she had no clue, then her friend chimed in to say she also had no clue.


ChatGPT, the dictionaries I've checked, and the ordinary people I've asked all give different definitions of "assume" and "suppose", none of which include your standard of zero possible doubt in order to assume something. Therefore, I have strong evidence to believe that in common usage, the terms have no fixed meaning beyond "to accept as true without proof"; all else is vague connotation that can be overridden by context.

What evidence do you have that common usage recognizes your hard boundary, so hard that to cross it is to be unambiguously incorrect?

It's an assumption about the meaning of the question, not an assumption about the actual laws of arithmetic, which are not in question. The only lesson to be learned is that your interlocutor's terminology has to be aligned with yours in order to meaningfully discuss the subject. This has nothing to do with how complicated the subject is, only with how ambiguous its terminology is in common usage; terminology is an arbitrary social construct. And my point is that this isn't even a very good example, since roughly no one uses standard integer notation to mean something else, without first clarifying the context. Far better examples can be found, e.g., in the paper where Shackel coined the "Motte and Bailey Doctrine", which focuses on a field well-known for ascribing esoteric or technical meanings to commonplace terms.

To me it said: «to "assume" something is to accept it as true without proof or evidence». That to me doesn't include doubt, because it's true a priori: it's just true.

So would you say that ChatGPT disagrees with your notion of "assuming" in my example? If not, then how could Alice change her mind from the indirect evidence, if she had zero doubt that there was only a dog in the box?

I don't have to show that my notion is shared by everyone, because I did not claim that; all I need to show is that your notion of "strong supposition" is not shared by everyone, and you yourself proved that.

You're calling people (like the dictionary author, or the second person I questioned) "wrong" when they say that you can "assume" something while still doubting it to some extent. Why are they "wrong", instead of being "right" about their own notion that is distinct from your notion?

Here, I'm using "the laws of arithmetic" as a general term to refer to the rules of all systems of arithmetic in common usage, where a "system of arithmetic" refers to the symbolic statements derived from any given set of consistent axioms and well-defined notations. I am not assuming that the rules of integer arithmetic will apply to systems of arithmetic that are incompatible with integer arithmetic but use the exact same notation. I am assuming that no one reasonable will use the notation associated with integer arithmetic to denote something incompatible with integer arithmetic, without first clarifying that an alternative system of arithmetic is in use.

Furthermore, I assert that it is unreasonable to suppose that the notation associated with integer arithmetic might refer to something other than the rules of integer arithmetic in the absence of such a clarification. This is because I have no evidence that any reasonable person would use the notation associated with integer arithmetic in such a way, and without such evidence, there is no choice but to make assumptions of terms ordinarily having their plain meanings, to avoid an infinite regress of definitions used to clarify definitions.

First, to make sure I'm not putting more words into your mouth: Would you say that most people outside of here would agree that when one assumes something, one cannot have any level of doubt about it?


I bombarded ChatGPT with questions about the matter, and everything aligned with my notion. For example, when I asked "If Alice believes claim X is true with zero doubt, can she change her mind?", it answered "Yes", which is obvious to me.

That's not at all obvious to me. As it turns out, your notion of "believe with zero doubt" is very likely different than mine! So that I understand what your notion is: If, at a given point in time, Alice believes with zero possible doubt that the box contains nothing but a dog, then does she also believe with zero possible doubt that she will never receive unequivocal evidence otherwise? If so, does she believe there is a 0% chance that she will receive unequivocal evidence otherwise?

If Alice believes claim X with zero doubt in one moment, but then receives evidence contradicting that belief (which was assumed in the first place), why wouldn't she change her mind?

The evidence doesn't unequivocally contradict her belief: it could be the case that the box contains only a dog, but she misheard where the meow came from, or the dog is able to make a meowing sound. If she was previously absolutely certain that a dog is in the box, then why wouldn't she adopt one of the alternative hypotheses compatible with both her assumption and the evidence?


How does this not align precisely to my notion? I didn't even use the term "assume" throughout the question, I used it only to verify the outcome.

By my prior notion of "believe with zero doubt", your prompt is vacuous, since it is impossible that "Alice believes claim X is true with zero doubt" but also "changes her mind", since if she can change her mind, then she didn't actually have zero doubt. Under that notion, ChatGPT is logically permitted to output whatever it wants, since it is not consistently capable of detecting absurdities in its input.

But more practically speaking, to ChatGPT, "zero doubt" or "absolute certainty" can be far from absolute:

Alice has a box. Alice believes with absolutely zero doubt that the box contains only a dog. Then, Alice hears a meow coming from the box. Might she now believe that a cat is in the box?

Yes, if Alice hears a meow coming from the box, it would likely cause her to question her belief that only a dog is in the box and she may now believe that a cat is also in the box. The sound of a meow would be strong evidence that contradicts her previous belief that only a dog is in the box, and would likely lead her to update her belief.

Then did Alice truly have absolutely zero doubt in her previous belief?

It is difficult to say for certain without more information about Alice's thought process and the context of the situation. However, it is possible that Alice did not truly have absolutely zero doubt in her previous belief that the box contained only a dog. In general, it is difficult for anyone to be completely certain about anything, and it is possible that Alice had some level of uncertainty or doubt about her belief, even if she did not explicitly acknowledge it.

If Alice did truly have absolutely zero doubt in her previous belief, then how could strong but inconclusive evidence have caused her to question her belief?

If Alice truly had absolutely zero doubt in her previous belief that the box contained only a dog, then it would be unlikely for her to change her belief based on the sound of a meow alone, as it would not be strong enough evidence to overcome her absolute certainty. In this scenario, other evidences or logical reasoning would have to be provided in order to change her belief.

It's important to note that the concept of "absolute certainty" or "absolutely zero doubt" is a theoretical concept, and it's very rare in real-life scenarios, most of the time people have some level of uncertainty. Even in the case where Alice is convinced that only a dog is in the box, a sudden meow would cause her to question this belief, as it is not consistent with her previous belief, and would lead her to investigate and gather more information about the situation.

So whenever you tell ChatGPT that Alice has "zero doubt" or "absolute certainty", it may be inferring that you're probably mistaken or exaggerating (since many people exaggerate all the time), and that Alice is strongly but not absolutely convinced. That's my alternative explanation for the output you've posted.


No, I said: if a dictionary says that to believe something is to assume it, then I believe it's wrong. I did not say the dictionary is wrong, I said that I believe it is wrong.

The first time, you indeed said you believe that the dictionaries are wrong. But the second time, you said:

He replied that when you assume something, you're not entirely sure whether or not it's true, but when you suppose something, you have some kind of predetermined knowledge that it's true.

He is wrong: it's the other way around.

How is he "wrong" about his own notion of an assumption?

There are no axioms that apply to all arithmetics. There are no such "laws".

Are you getting hung up on my use of the term "laws of arithmetic"? I'm not trying to say that there's a single set of rules that applies to all systems of arithmetic. I'm using "laws of arithmetic" as a general term for the class containing each individual system of arithmetic's set of rules. You'd probably call it the "laws of each arithmetic". The "laws of one arithmetic" (by your definition) can share common features with the "laws of another arithmetic" (by your definition), so it makes sense to talk about "laws of all the different arithmetics" as a class. I've just personally shortened this to the "laws of arithmetic" because I don't recognize your usage of "arithmetic" as a countable noun.

Also, you seem to be conflating "integer arithmetic" with normal arithmetic. 2.5 + 2.1 is not integer arithmetic, and yet it follows the traditional arithmetic everyone knows. I'm not even sure if normal arithmetic has a standard name; I just call it "normal arithmetic" to distinguish it from all the other arithmetics. Integer arithmetic is just a subset.

I was focusing on integer arithmetic since that was sufficient to cover your original statement. The natural generalization is group or field arithmetic to define the operations, and real-number arithmetic (a specialization of field arithmetic) to define the field elements. The notation associated with integer arithmetic is the same as the notation associated with real-number arithmetic, since the integers under addition form a subgroup of the real numbers, and integer multiplication is just real multiplication restricted to the integers.


To repeat my actual argument, I assert that, without prior clarification, almost no one uses the notation associated with real-number arithmetic in a way contrary to real-number arithmetic, which implies that almost no one uses it in a way contrary to integer arithmetic. Therefore, I refuse to entertain the notion that someone is actually referring to some system of arithmetic incompatible with real-number arithmetic when they use the notation associated with real-number arithmetic, unless they first clarify this.

The problem is, how would a hostile AGI develop nanobot clouds without spending significant time and resources, to the point that humans notice its activities and stop it before the nanobots are ready? It might make sense for the AGI to use "off-the-shelf" robot hardware, at least to initially establish its own physical security while it develops killer nanobots or designer viruses or whatever.

The climate-change threat does seem somewhat more plausible: just find some factories with the active ingredients and blow them up (or convince someone to blow them up). But I'd be inclined to think that most atmospheric contaminants would take at least months if not years to really start hitting human military capacity, unless you have some particular fast-acting example in mind.

No, I believe most people outside of here would agree that when one assumes something it can mean that one doesn't have any level of doubt about it.

If someone reads your words, "Most people assume we are dealing with the standard arithmetic" (from your 2 + 2 post), do you believe that they are likely to understand that you mean, "Most people have zero doubt in their minds that we are dealing with the standard arithmetic"?

Yes, if that's what she believes, which the word "assume" does not necessarily imply.

On the submission for your 2 + 2 Substack post, you write:

Challenging the claim that 2+2 is unequivocally 4 is one of my favorites to get people to reconsider what they think is true with 100% certainty.

Are you saying that "assuming something is true" is different from "thinking something is true with 100% certainty", and that you are making two different points in your Substack post and submission? Or are you saying that one can "think something is true with 100% certainty" without "believing" that it is true?

Because she might be attempting to be a rational open-minded individual and actually be seeking the truth.

Then why does it matter whether or not anyone assumes anything? If people are capable of accepting evidence against what they think is true, regardless of whether they previously had 100% certainty, then why should anyone avoid having 100% certainty?

It's not impossible because of a fundamental aspect of reality: change.

It is impossible by my own prior notion of "believe with zero doubt", which corresponds to assigning the event a Bayesian probability equivalent to 1. By Bayes' theorem, if your prior probability of the event is 1, then your posterior probability of the event given any evidence must also be 1. Therefore, if your posterior probability is something other than 1 (i.e., you have some doubt after receiving the evidence), then your prior probability must not have been 1 (i.e., you must have had some amount of doubt even before receiving the evidence).
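That consequence of Bayes' theorem can be checked numerically. A small Python sketch for a binary hypothesis; the particular likelihood values are made up for illustration:

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(H|E) for a binary hypothesis H, via Bayes' theorem."""
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# A prior of exactly 1 is immune to any evidence, however damning:
assert posterior(1.0, 0.01, 0.99) == 1.0

# Even a sliver of prior doubt lets strong contrary evidence move the belief:
print(posterior(0.999, 0.01, 0.99))  # ~0.91, no longer near-certain
```

With the prior at 1, the (1 - prior) term vanishes, so the evidence likelihoods cancel and the posterior is pinned at 1 no matter what is observed; any prior short of 1 leaves room for updating.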

I have barely any understanding of your concept of doubt, and this discrepancy appears to have caused a massive disconnect.

No, I said I believed if they said X, then they would be wrong.

This was after I linked to them saying it:

But if common usage recognized your boundaries, then the dictionaries would be flat-out wrong to say that, e.g., to believe something is to assume it, suppose it, or hold it as an opinion (where an opinion is explicitly a belief less strong than positive knowledge).

I believe they are. dictionary.com says "believe" is "assume", but Merriam-Webster does not. One of them has to be wrong.

When you said, "I believe they are", were you not referring to the dictionaries being "flat-out wrong to say [those things]"? Or did the links I provided not show them saying those things?

Because if you flip the definitions, they are entirely correct under my view.

How does this imply that his definitions are "wrong" when they are not flipped?

Even under your view "assume" is stronger than "suppose"

Where do I say that?

The "laws of arithmetic" that are relevant depend 100% on what arithmetic we are talking about, therefore it's imperative to know which arithmetic we are talking about.

Then please stop assuming that my uncountable usage of "the concept of arithmetic in general" in that sentence is secretly referring to your countable idea of "a single arithmetic". I've clarified my meaning twice now; I'd appreciate it if you actually responded to my argument instead of repeatedly hammering on that initial miscommunication.

People assume it's the normal arithmetic and cannot possibly be any other one. There is zero doubt in their minds, and that's the problem I'm pointing out.

Why should there be any doubt in their minds, if other systems of arithmetic are never denoted with that notation without prior clarification?

As it happens, your latter point lines up with my own idle musings, to the effect of, "If our reality is truly so fragile that something as banal as an LLM can tear it asunder, then does it really deserve our preservation in the first place?" The seemingly impenetrable barrier between fact and fiction has held firm for all of human history so far, but if that barrier were ever to be broken, its current impenetrability must be an illusion. And if our reality isn't truly bound to any hard rules, then what's even the point of it all? Why must we keep up the charade of the limited human condition?

That's perhaps my greatest fear, even more so than the extinction of humanity by known means. If we could make a superintelligent AI that could invent magic bullshit at the drop of a hat, regardless of whether it creates a utopia or kills us all, it would mean that we already live in a universe full of secret magic bullshit. And in that case, all of our human successes, failures, and expectations are infinitely pedestrian in comparison.

In such a lawless world, the best anyone can do is have faith that there isn't any new and exciting magic bullshit that can be turned against them. All I can hope for is that we aren't the ones stuck in that situation. (Thus I set myself against most of the AI utopians, who would gladly accept any amount of magic bullshit to further the ideal society as they envision or otherwise anticipate it. To a lesser extent I also set myself against those seeking true immortality.) Though if that does turn out to be the kind of world we live in, I suppose I won't have much choice but to accept it and move on.

They have zero doubts in their mind because most people don't see there's any doubt to be had.

In your view, is having doubt the result of a conscious consideration of whether one may be wrong? Or can one have doubt even before considering the matter?

And if it's true that under Bayes the probability of an event doesn't get updated if the prior is 1, regardless of the result, then that proves Bayes is a poor heuristic for a belief system.

How does this property prove that Bayes' theorem is a poor heuristic? Since most people can change their minds given enough evidence, a Bayesian would infer that it's rare (if even possible) for someone's prior probability to be exactly 1 in real life. What is the issue with the Bayesian statement that hardly anyone holds a prior probability of exactly 1?

The links you provided showed one dictionary saying those things, therefore if I believe those dictionaries saying those things are wrong, I believe that one dictionary saying those things is wrong.

The links point to both dictionaries in question, not just one.

I explained that in the very next sentence.

Under my own notion, that I use in everyday life, "to assume" is not stronger than "to suppose", so my question still stands. How is the opposite statement being correct under your definitions relevant to his statement about his own definitions being "wrong" per se? What bearing do your definitions have on the intrinsic correctness of his definitions?

You literally said: «since most people here were under the impression that by an "assumption" you meant a "strong supposition"».

First, I attributed that to "most people here", not myself. Second, I was talking about their impression of your meaning of an "assumption", not their own prior notions of an "assumption". Personally, my prior notion places no relative strength between an "assumption" and a "supposition"; I would not hazard to guess how strong others' prior notions of an "assumption" are without asking them.

Where did I "assume" that in my last comment?

You said, "The 'laws of arithmetic' that are relevant depend 100% on what arithmetic we are talking about," which is only meaningful under your usage of "laws of arithmetic" and does not apply to the term as I meant it in my original comment.

That's not an argument, you are just stating your personal position. You are free to do whatever you want, if you don't want to doubt a particular "unequivocal" claim, then don't. Your personal position doesn't contradict my claim in any way.

To quote myself:

there is no choice but to make assumptions of terms ordinarily having their plain meanings, to avoid an infinite regress of definitions used to clarify definitions.

To rephrase that, communication relies on at least some terms being commonly understood, since otherwise you'd reach an infinite regress. As a consequence, there must exist terms that have an unambiguous "default meaning" in the absence of clarification. But how do we decide which terms are unambiguous? Empirically, I can decide that a widespread term has an unambiguous default meaning if I have never heard anyone use the term contrary to that meaning in a general context, and if I have no particular evidence that other people are actively using an alternative meaning in a general context. I believe it reasonable to set the bar here, since any weaker criterion would result in the infinite-regress issue.

Because that's what skepticism demands. I assert that 100% certainty on anything is problematic, which is the reason why skepticism exists in the first place.

Sure, if someone writes "2 + 2 = 4", it isn't 100% certain that they're actually making a statement about the integers: perhaps they're completely innumerate and just copied the symbols out of a book because they look cool. I mean to say that it's so unlikely that they're referring to something other than integer arithmetic that it wouldn't be worth my time to entertain the thought, without any special evidence that they are (such as it being advertised as a "puzzle").

If you were to provide real evidence that people are using this notation to refer to something other than integer arithmetic in a general context, then I would be far more receptive to your point here.


Indeed, how do you know that your interlocutors are "100% certain" that they know what you mean by "2 + 2"? Perhaps they're "100% certain" that "2 + 2 = 4" by the rules of integer arithmetic, but they're independently 75% certain that you're messing with them, or setting up a joke.

As I understand it, the main idea is that the (U.S.) pharmaceutical industry has been covering up hundreds of thousands of deaths and other adverse effects in their drug trials, using bogus statistical analysis to fool everyone about the efficacy of their drugs, and colluding with government agencies to disallow any alternatives. Thus, we should be immensely distrustful of any and all "evidence-based" medical information, and we should spread this idea in order to convince people to rebuild the medical establishment from the ground up. (I don't personally endorse this argument.)

Sure, but at that point you're just engaging in magical speculation that "capabilities" at the scale of the mere human Internet will allow an AGI to simulate the real world from first principles and skip any kind of R&D work. The problem, as I see it, is that cheap nanotechnology and custom viruses are problems far past what we have already researched as humans: at some point, the AGI will hit a free variable that can't be nailed down with already-collected data, and it will have to start running experiments to figure it out.

I'm aware that Yudkowsky believes something to the effect of the omnipotence of an Internet-scale AGI (that if only our existing data were analyzed by a sufficiently smart intelligence, it would effortlessly derive the correct theory of everything), but I'm not willing to entertain the idea without any proposed mechanism for how the AGI extrapolates the known data to an arbitrary accuracy. After all, without a plausible mechanism, AGI x-risk fears become indistinguishable from Pascal's mugging.

That's why I'm far more partial to scenarios where the AGI uses ordinary near-future robots (or convinces near-future humans) to safeguard its experiments, or where it escapes undetected and nudges human scientists to do its research before it makes its real move. (I have overall doubts about it even being possible for AGI to go far past human capabilities with near-future technology, but that is beside the point here.)

I completely disagree with that statement. Most people cannot change their minds regardless of the evidence. In fact in the other discussion I'm having on this site of late the person is obviously holding a p=1 (zero room for doubt).

Perhaps they assign their belief a probability different than 1, but they don't consider your evidence very strong. But I can't say for certain, since I haven't seen the discussion in question. How do you know that your evidence is so strong that they would change their mind if they had any room for doubt?
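To make the "p=1" point concrete: under Bayes' rule, a prior of exactly 1 is immune to any evidence whatsoever, whereas a prior of, say, 0.95 can still be moved substantially. A minimal sketch in Python, with made-up likelihood numbers chosen only for illustration:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis after observing one piece of evidence."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# An ordinary (non-dogmatic) prior: strong contrary evidence moves the belief a lot.
print(bayes_update(0.95, 0.01, 0.99))  # ~0.16

# A prior of exactly 1: no evidence, however strong, can ever move it.
print(bayes_update(1.0, 0.01, 0.99))   # 1.0
```

So "holding p=1" really does mean zero room for doubt in the Bayesian sense; any belief short of that remains movable in principle, even if a given piece of evidence isn't judged strong enough to move it far.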

That is what we are talking about: it's your view that most people ascribe the meaning of X, if a person ascribes the meaning opposite of X, that is opposite to your view.

Any term X has several possible meanings. When one says the term X, one generally has a particular meaning in mind. And when one hears the term X, one must determine which meaning the speaker is using, if one wishes to correctly understand what point is being made. Usually, one infers this from the surrounding text, alongside one's knowledge of which meanings are often used by other speakers in a similar context. But one can simultaneously infer one meaning in one speaker's words, infer another meaning in another speaker's words, and use an entirely different meaning in one's own words.

When you say that we should not "assume" something, it is my understanding that you mean that we should not think that something is true with zero possible doubt. It is also my understanding that you do not mean that we should never suppose something strongly with little evidence.

What I allege is that most people, when attempting to determine your meaning of "assume", do not rule out the latter meaning. And since most speakers, in most of their speech and writing, include the possibility of doubt in their meaning of "assume", most people are likely to incorrectly infer that you probably include the possibility of doubt in your meaning of "assume".

Therefore, when they determine your meaning of "not assume", they are likely to infer that you mean something closer to "not suppose something strongly with little evidence" than "not think that something is true with zero possible doubt". (It isn't relevant here whether they think either of "assume" or "suppose" is somewhat stronger than the other: what matters is that there exist certain states of mind including some level of doubt, and they incorrectly infer that by saying we should "not assume" things you mean that we should not hold any of those states of mind.)

I'm not saying that most people are unable to understand your terminology, or that your terminology is inherently wrong. I'm saying that most people aren't very familiar with your terms, and they're likely to infer meanings that are overly inclusive. This makes the inferred negations of your terms (e.g., "not assume") overly exclusive, which makes most people miss your point. Thus my original request that you clarify your terminology upfront.

You do have a choice: don't make assumptions.

I suspect that this choice is impossible to consistently make. So that I can better understand what you're asking for, could you give me an example of a conversation in which one participant doesn't make any assumptions about the meaning of another?

But the big "advantage" of the cultural explanation is it's difficult enough to disentangle it from genetics that it allows HBD to be unfalsifiably denied.

While it's true that disentangling cultural factors is difficult when trying to explain the overall success of a group, it's a very big mistake to take this as active evidence against culture's importance. I'd also put myself into the "mostly cultural, somewhat genetic" camp. To me, none of the current evidence can plausibly refute the possibility of a society with a common culture in which no genetic group is far more or less successful than the others, with the genetic factors only showing up as numerical discrepancies.

In other words, under this model, even if pure HBD explains some differences in group outcomes, it does not explain the vast differences in poverty, criminality, etc., seen in our current society. Explanations based on cultural coincidence have plenty of well-known justifications for these, such as past prejudice resulting in persistent negative outcomes, or groups facing hardship becoming more successful through cultural selection. Why shouldn't the pro-HBD crowd have to similarly justify its position that a higher-IQ population (either on average or in the upper tail) will almost invariably result in a far more successful culture?

How about: "If a baby is so fragile that it can't take a punch, does it really deserve our preservation in the first place?"

Sorry to speculate about your mental state, but I suggest you try practicing stopping between "This is almost inevitable" and "Therefore it's a good thing".

Well, my framing was a bit deliberately hyperbolic; obviously, with all else equal, we should prefer not to all die. And this implies that we should be very careful about not expanding access to the known physically-possible means of mass murder, through AI or otherwise.

Perhaps a better way to say it is, if we end up in a future full of ubiquitous magic bullshit, then that inherently comes at a steep cost, regardless of the object-level situation of whether it saves or dooms us. Right now, we have a foundation of certainty about what we can expect never to happen: my phone can display words that hurt me, but it can't reach out and slap me in the face. Or, more importantly to me, those with the means of making my life a living hell have not the motive, and those few with the motive have not the means. So it's not the kind of situation I should spend time worrying about, except to protect myself by keeping the means far away from the latter group.

But if we were to take away our initial foundation of certainty, revealing it to be illusory, then we'd all turn out to have been utter fools to count on it, and we'd never be able to regain any true certainty again. We can implement a "permanent 'alignment' module or singleton government" all we want, but how can we really be sure that some hyper–Von Neumann or GPT-9000 somewhere won't find a totally-unanticipated way to accidentally make a Basilisk that breaks out of all the simulations and tortures everyone for an incomprehensible time? Not to even mention the possibility of being attacked by aliens having more magic bullshit than we do. If the fundamental limits of possibility can change even once, the powers that be can do absolutely nothing to stop them from changing again. There would be no sure way to preserve our "baby" from some future "punch".

That future of uncertainty is what I am afraid of. Thus my hyperbolic thought, that I don't get the appeal of living in such a fantastic world at all, if it takes away the certainty that we can never get back; I find such a state of affairs absolutely repulsive. Any of our expectations, present or future, would be predicated on the lie that anything is truly implausible.

He literally said there was no possibility of X being true: "Do you accept the possibility that X may be true?" "No".

By X I suppose you refer to the statement "2 + 2 = 4 is not unequivocally true". Perhaps by the statement "it is possible that X is true" (which I'll call Y), you meant that "there exists a meaning of the statement X which is true". However, I believe he interpreted Y as something to the effect of, "Given the meaning M which I would ordinarily assign to the statement X, there exists a context in which M is true." It is entirely possible that the proposition he means by Y is unequivocally false, even though the proposition you mean by Y is unequivocally true: that is, he misinterpreted what you meant by Y.

In particular, it is my understanding that when you say X, you mean, "There exists a meaning of the statement '2 + 2 = 4' which is false." You demonstrate this in your original post, so that provides an example of a meaning of X which is true. But I believe that his meaning M of X is something to the effect of, "Given the meaning M´ which I would ordinarily assign to the statement '2 + 2 = 4' (i.e., a proposition about the integers or a compatible extension thereof), there exists a context in which M´ is false." Since the proposition "2 + 2 = 4" about the integers can be trivially proven true, he believes with certainty that M´ is unequivocally true, thus M is unequivocally false, thus "it is impossible that X is true" (by his own meaning).

(In fact, I still wouldn't say that his belief that M is false has probability 1, but it is about as close to 1 as it can get. It's just that to convince him that M is true, you'd need an even more trivial mathematical proof of ¬M´ which he can understand, and he believes with probability as-close-to-1-as-possible that such a counterproof does not exist, since otherwise his life is a lie and basically all of his reasoning is compromised.)


You are forgetting the context of this subthread. In this subthread we are not talking about what I mean, we are talking about the definition that one random stranger gave you, which I claimed goes contrary to your claim.

You claimed: «most people here were under the impression that by an "assumption" you meant a "strong supposition"».

In this subthread X is "strong supposition", it's your view that most people's definition of "assumption" is "strong supposition", you provided different examples of people you asked, and one of them gave you the exact opposite: that "supposition" was a "strong assumption". This is the opposite of what you claimed most people were under the impression of.

You keep forgetting the context of the claims you are making.

So be it. I'll grant that my claim there was made based on a hasty impression of the other comments, and I do not actually know for sure whether or not most people on this site inferred a meaning of your words precisely compatible with my earlier statement. But I did not make that claim for its own sake, but instead in service of my original argument. (In fact, most of what I've been saying has been intended to relate to my original argument, not to that particular claim. But I have not been at all clear about that; my apologies.)

Having thought about it a bit more, I'll defend a weaker position, which I believe is still sufficient for my original argument. Most people in general, when they hear someone say that a person "assumes" something, infer (in the absence of evidence otherwise) that what is most likely meant is that the person's state of mind about that thing lies within a particular set S, and S includes some states of mind where the person still has a bit of doubt about that thing.

Thus, if someone says that a person "doesn't assume" something, most people would infer that they most likely mean the person does not harbor any state of mind within S, and consequently does not harbor any of the states of mind that are within S but include a level of doubt.

Would you say that by "not making assumptions", you specifically mean "not thinking things are true with zero possible doubt"? Because if so, then everyone whose inferred set S includes states of mind with nonzero doubt would have misinterpreted the message of your post, if they had not already found evidence of your actual meaning. Thus my real claim, that most people "aren't going to learn anything from your claims if you use your terminology without explaining it upfront" (which is an exaggeration: I mean that most people, just looking at your explanations in your post, are unlikely to learn what you apparently want them to learn).

I suppose (not assume) that your question was rhetorical, and you actually believe I cannot answer it in truth, because you believe in every conversation all participants have to make assumptions all the time. But this is tentative, I do not actually know that, therefore I do not assume that's the case.

My main intent was to elucidate what you don't consider to be an assumption, to determine whether I've been misunderstanding your meaning of the term. Your separation of suppositions from assumptions appears to answer this question in the positive.

The fact that somebody appears to be making an assumption doesn't necessarily mean that he is.

How does one distinguish between someone making an assumption, and someone only appearing to be making an assumption? You have claimed that some statements by others contain assumptions, and you have claimed that some statements only contain suppositions that appear like assumptions. But I don't understand exactly how you're evaluating statements to determine this.

On October 25, 2020, I tried my hand at the prediction game, registering a prediction elsewhere:

Supposing that Joe Biden is unambiguously held by the mainstream media to have won the 2020 election, Donald Trump will accept his defeat by December 7, 2020, and will leave the White House on January 20, 2021, with 96% probability.

It had been clear by then that the election results would be a mess, but I'd been strongly convinced by the narrative that Trump would make a ruckus for a few weeks to appease his supporters, then lie low until he runs in 2024. Needless to say, I was very surprised when he kept contesting the results well past the Electoral College vote in December; I accepted its legitimacy as coming directly from the Constitution, and I'd thought Trump would similarly respect it. I suppose he simply isn't as much of a traditionalist as I'd judged him to be, given MAGA and all that.

Anyway, being disillusioned, I stopped keeping track of anything Trump-related after January 2021. But given that he apparently intends to run again, does anyone have any good, informative summaries of what he's been up to since then?

MIT grad and alleged IQ of 180.

I wouldn't put much weight in that allegation. By definition, only 463 people in the world have an IQ that high, which would come out to about 19 people in the U.S. (not accounting for any population bias). I'd be surprised if there were fewer than 30 people in the U.S. we don't hear about who have greater general intelligence than John Sununu, or for that matter anyone else we do hear about. I suppose it's not impossible that an IQ test spat out that number, but I wouldn't trust any test result that far into the extreme end.
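For reference, the expected head count at a given IQ falls straight out of the normal distribution. A rough back-of-envelope in Python, assuming a mean of 100 and an SD of 15; the population figures are round assumptions, and the exact counts (like the 463 above) shift with whichever SD and population one plugs in:

```python
import math

def iq_tail_count(iq, population, mean=100.0, sd=15.0):
    """Expected number of people at or above `iq`, assuming IQ ~ Normal(mean, sd)."""
    z = (iq - mean) / sd
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z >= z) for a standard normal
    return population * tail

# World (~8 billion) and U.S. (~330 million), at IQ 180 with SD 15:
print(round(iq_tail_count(180, 8.0e9)))  # a few hundred people worldwide
print(round(iq_tail_count(180, 3.3e8)))  # on the order of a dozen in the U.S.
```

Either way, the tail is thin enough that no standardized test has the norming data to distinguish 180 from, say, 165; scores that far out are extrapolations, not measurements.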