iro84657

0 followers   follows 0 users   joined 2022 September 07 00:59:18 UTC
User ID: 906
Verified Email


The "laws of arithmetic" that are relevant depend 100% on what arithmetic we are talking about, therefore it's imperative to know which arithmetic we are talking about.

Then please stop assuming that my uncountable usage of "the concept of arithmetic in general" in that sentence is secretly referring to your countable idea of "a single arithmetic". I've clarified my meaning twice now; I'd appreciate it if you actually responded to my argument instead of repeatedly hammering on that initial miscommunication.

People assume it's the normal arithmetic and cannot possibly be any other one. There is zero doubt in their minds, and that's the problem I'm pointing out.

Why should there be any doubt in their minds, if other systems of arithmetic are never denoted with that notation without prior clarification?

There are no axioms that apply to all arithmetics. There are no such "laws".

Are you getting hung up on my use of the term "laws of arithmetic"? I'm not trying to say that there's a single set of rules that applies to all systems of arithmetic. I'm using "laws of arithmetic" as a general term for the class containing each individual system of arithmetic's set of rules. You'd probably call it the "laws of each arithmetic". The "laws of one arithmetic" (by your definition) can share common features with the "laws of another arithmetic" (by your definition), so it makes sense to talk about "laws of all the different arithmetics" as a class. I've just personally shortened this to the "laws of arithmetic" because I don't recognize your usage of "arithmetic" as a countable noun.

Also, you seem to be conflating "integer arithmetic" with normal arithmetic. 2.5 + 2.1 is not integer arithmetic, and yet follows the traditional arithmetic everyone knows. I'm not even sure if normal arithmetic has a standard name, I just call it "normal arithmetic" to distinguish it from all the other arithmetics. Integer arithmetic is just a subset.

I was focusing on integer arithmetic since that was sufficient to cover your original statement. The natural generalization is group or field arithmetic to define the operations, and real-number arithmetic (a specialization of field arithmetic) to define the field elements. The notation associated with integer arithmetic is the same as the notation associated with real-number arithmetic, since the integers form a subring of the real numbers: a subgroup under addition and a submonoid under multiplication.
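
As a minimal illustration (my own sketch, not part of the original exchange), the shared notation gives consistent answers wherever both systems apply; the standard-library Fraction type here just stands in for exact arithmetic on rationals like your 2.5 + 2.1 example:

    # Integer arithmetic embeds in real-number arithmetic, so the shared
    # notation gives the same answers wherever both systems apply.
    from fractions import Fraction

    assert 2 + 2 == 4                    # integer arithmetic
    assert 2.0 + 2.0 == 4.0              # (floating-point) real arithmetic
    assert Fraction(5, 2) + Fraction(21, 10) == Fraction(23, 5)   # 2.5 + 2.1 = 4.6 exactly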


To repeat my actual argument, I assert that, without prior clarification, almost no one uses the notation associated with real-number arithmetic in a way contrary to real-number arithmetic, which implies that almost no one uses it in a way contrary to integer arithmetic. Therefore, I refuse to entertain the notion that someone is actually referring to some system of arithmetic incompatible with real-number arithmetic when they use the notation associated with real-number arithmetic, unless they first clarify this.

Here, I'm using "the laws of arithmetic" as a general term to refer to the rules of all systems of arithmetic in common usage, where a "system of arithmetic" refers to the symbolic statements derived from any given set of consistent axioms and well-defined notations. I am not assuming that the rules of integer arithmetic will apply to systems of arithmetic that are incompatible with integer arithmetic but use the exact same notation. I am assuming that no one reasonable will use the notation associated with integer arithmetic to denote something incompatible with integer arithmetic, without first clarifying that an alternative system of arithmetic is in use.

Furthermore, I assert that it is unreasonable to suppose that the notation associated with integer arithmetic might refer to something other than the rules of integer arithmetic in the absence of such a clarification. This is because I have no evidence that any reasonable person would use the notation associated with integer arithmetic in such a way, and without such evidence, there is no choice but to assume that terms carry their ordinary, plain meanings, to avoid an infinite regress of definitions used to clarify definitions.

It's an assumption about the meaning of the question, not an assumption about the actual laws of arithmetic, which are not in question. The only lesson to be learned is that your interlocutor's terminology has to be aligned with yours in order to meaningfully discuss the subject. This has nothing to do with how complicated the subject is, only with how ambiguous its terminology is in common usage; terminology is an arbitrary social construct. And my point is that this isn't even a very good example, since roughly no one uses standard integer notation to mean something else without first clarifying the context. Far better examples can be found, e.g., in the paper where Shackel coined the "Motte and Bailey Doctrine", which focuses on a field well-known for ascribing esoteric or technical meanings to commonplace terms.

I'd concur that this is more of an annoying semantic trick than anything else. It is never denied that 2 + 2 = 4 within the group of integers under addition (or a group containing it as a subgroup), a statement that the vast majority of people would know perfectly well. Instead, you just change the commonly understood meaning of one or more of the symbols "2", "4", "+", or "=", without giving any indication of this. Most people consider the notation of integer arithmetic to be unambiguous in a general context, so for this to make any sense, you'd have to establish that the alternative meaning is so widespread as to require the notation to always be disambiguated.
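
As a concrete sketch of that trick (my own illustration, not from the original discussion): if someone silently reinterprets the symbols in the integers modulo 3, the very same string "2 + 2" denotes a different value, which is exactly why any such reinterpretation has to be announced up front.

    # The same symbols "2 + 2", read under two different systems of arithmetic.
    assert 2 + 2 == 4            # ordinary integer arithmetic: the familiar statement
    assert (2 + 2) % 3 == 1      # arithmetic modulo 3: "2 + 2" now denotes 1, not 4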

(There's also the epistemic idea that we can't know that 2 + 2 = 4 within the integers with complete certainty, since we could all just be getting fooled every time we read a supposedly correct argument. But this isn't really helpful without any evidence, since the absence of a universal conspiracy about a statement so trivial should be taken as the null hypothesis. It also isn't relevant to the statement being untrue in your sense, since it's no less certain than any other knowledge about the external world.)

The premises are logically independent from each other: only the conclusions are derived from the premises. If you reject any of the premises, then the entire argument is moot. The point of the argument is to show that rejecting the ultimate conclusion requires one to reject one or more of the premises (or to show that the inference does not hold).

I suppose that the idea here is to work backwards: given that the argument is correct and that the premises imply the conclusion, it is inconsistent to accept the premises and reject the conclusion. So if you do reject the conclusion (as most people do), then the reader is challenged to either reject one or more of the premises, or to find a fault in the argument that makes the implication not hold. This is a standard case of working backward from moral intuitions to check that the foundations make any sense.
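
In propositional terms, that backward step is just contraposition. Here is a minimal sketch in Lean (my own illustration; P₁, P₂, and C are placeholder propositions, not anything from the original argument):

    -- If the premises jointly imply the conclusion, then rejecting the
    -- conclusion forces rejecting the conjunction of the premises.
    example (P₁ P₂ C : Prop) (h : P₁ ∧ P₂ → C) (hC : ¬C) : ¬(P₁ ∧ P₂) :=
      fun hp => hC (h hp)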

That scenario also makes sense. It fits with the general concept that a superintelligent hostile AGI (if one is possible) would use current or near-future technology at the outset for security, instead of jumping straight to sci-fi weaponry that we aren't even close to inventing yet. Of course, all of this depends on the initial breach being detectable; if the AGI could secretly act in the outside world for an extended time, then it could perform all the R&D it needs. How easy it would be to shut down if detected would probably depend on how quickly it could decentralize its functions.

Sure, but at that point you're just engaging in magical speculation: that "capabilities" at the scale of the mere human Internet will allow an AGI to simulate the real world from first principles and skip any kind of R&D work. The problem, as I see it, is that cheap nanotechnology and custom viruses are problems far past what we have already researched as humans: at some point, the AGI will hit a free variable that can't be nailed down with already-collected data, and it will have to start running experiments to figure it out.

I'm aware that Yudkowsky believes something to the effect of the omnipotence of an Internet-scale AGI (that if only our existing data were analyzed by a sufficiently smart intelligence, it would effortlessly derive the correct theory of everything), but I'm not willing to entertain the idea without any proposed mechanism for how the AGI extrapolates the known data to an arbitrary accuracy. After all, without a plausible mechanism, AGI x-risk fears become indistinguishable from Pascal's mugging.

That's why I'm far more partial to scenarios where the AGI uses ordinary near-future robots (or convinces near-future humans) to safeguard its experiments, or where it escapes undetected and nudges human scientists to do its research before it makes its real move. (I have overall doubts about it even being possible for AGI to go far past human capabilities with near-future technology, but that is beside the point here.)

Where exactly is the boundary on assets and liabilities that go into net worth? For instance, the sum total of all my future labor is valuable, and events in the present can increase or decrease that value, but it generally wouldn't be included in my net worth (except under the utilitarian accounting that some like to use in these circles). Is the distinction based solely on risk? Are valid sources of personal wealth just enumerated in a list somewhere? Is there some other metric that everyone uses?

The problem is, how would a hostile AGI develop nanobot clouds without spending significant time and resources, to the point that humans notice its activities and stop it before the nanobots are ready? It might make sense for the AGI to use "off-the-shelf" robot hardware, at least to initially establish its own physical security while it develops killer nanobots or designer viruses or whatever.

The climate-change threat does seem somewhat more plausible: just find some factories with the active ingredients and blow them up (or convince someone to blow them up). But I'd be inclined to think that most atmospheric contaminants would take at least months if not years to really start hitting human military capacity, unless you have some particular fast-acting example in mind.

The claims of hidden deaths in particular seem to come entirely from Gøtzsche. The rest of the sources mainly discuss the replication crisis in medical efficacy, alongside their various preferred solutions. Marinos blames the authorities and medical profession for making decisions based on flawed research to further their own ends, against the interest of the public. Personally, I think that Marinos takes his claims of conspiracy much farther than the evidence would justify; if a reader holds Scott's evaluation of orthodox medical information as generally trustworthy (modulo regulatory friction preventing effective drugs from being sold and preventing promising drugs from being tested, and new drugs' efficacy relative to their predecessors being oversold), this post in particular isn't going to change their mind, since beyond the standard replication-crisis stuff it's mostly an appeal to heterodox authorities such as Gøtzsche and Charlton.

I don't follow him that closely so maybe he has, but I haven't seen Marinos himself make anywhere near so strong a claim as "covering up hundreds of thousands of deaths, using bogus statistical analysis to fool everyone".

Reading this post, it would appear that Marinos is trying to endorse this viewpoint. He uncritically refers to Gøtzsche "explaining how prescription drugs are the third leading cause of death", which would add up to hundreds of thousands of deaths annually when applied to mainstream leading-cause-of-death tables. Marinos doesn't really add much additional analysis in this post, likely because it was adapted from a Twitter thread. Also, Marinos quotes an author who blames "evidence-based medicine" practitioners for propagating lies that line the pharmaceutical industry's pockets, and he himself blames government agencies for making policy decisions based on "evidence-based medicine" during the COVID-19 pandemic; I'd assume that the pharmaceutical companies (and those colluding with them) are to be interpreted as the ultimate liars. Marinos only seems to back off slightly from the accusations in his conclusion.

I currently hold a similar position wrt. efficacy vs. active harm. The claims of drugs being actively harmful to the population seem like they mostly come from Gøtzsche's work. I do not know whether or by how much he may have exaggerated these claims. In the meantime, here are all the references on harm I could find in this post:

On BIA 10-2474:

Butler, D., & Callaway, E. (2016, January 21). Scientists in the dark after French clinical trial proves fatal. Nature, 529(7586), 263–264. https://doi.org/10.1038/nature.2016.19189

On fialuridine:

Honkoop, P., Scholte, H. R., de Man, R. A., & Schalm, S. W. (1997). Mitochondrial injury: Lessons from the fialuridine trial. Drug Safety, 17(1), 1–7. https://doi.org/10.2165/00002018-199717010-00001

On TGN1412:

Attarwala, H. (2010). TGN1412: From discovery to disaster. Journal of Young Pharmacists, 2(3), 332–336. https://doi.org/10.4103/0975-1483.66810

Wadman, M. (2006, March 23). London's disastrous drug trial has serious side effects for research. Nature, 440(7083), 388–389. https://doi.org/10.1038/440388a

The bulk of Peter C. Gøtzsche's claims (which probably contain several more references):

Gøtzsche, P. C. (2013). Deadly medicines and organized crime: How big pharma has corrupted healthcare. CRC Press. https://doi.org/10.1201/9780429084034

As I understand it, the main idea is that the (U.S.) pharmaceutical industry has been covering up hundreds of thousands of deaths and other adverse effects in their drug trials, using bogus statistical analysis to fool everyone about the efficacy of their drugs, and colluding with government agencies to disallow any alternatives. Thus, we should be immensely distrustful of any and all "evidence-based" medical information, and we should spread this idea in order to convince people to rebuild the medical establishment from the ground up. (I don't personally endorse this argument.)

It's not. Nevertheless, when you're willing to give yourself as many entities as you need to save your theory, you add nothing to the world's store of knowledge.

True, the cultural model has plenty of free variables around the creation, transmission, evolution, and effects of cultural factors, as well as their importance relative to temporary environmental factors. But while the HBD model of IQ as a driver of success focuses more on individuals, it has its own free variables around the mechanism of how general intelligence produces pro-social behavior. That is, since a society made entirely of high-IQ Machiavellian schemers wouldn't last very long, it requires that general intelligence tends to amplify a population's positive traits over its negative traits.

Yes, you can make this model. Can you, in principle, back it or refute it with evidence? If not, the model is vacuous. If you can.... well, does it fit with the evidence? I think it does not.

I'd argue that such a cultural model of societal success is no less vacuous than the HBD model, since both can plausibly explain the historical evidence: neither kind of model has truly been tested to an extent that it has made falsifiable predictions.

Regardless, the question we're trying to ask is, "Is it possible for a human cultural group to become persistently more or less successful than would be predicted from its members' IQ distribution, in the absence of some massive redistribution scheme (e.g., widespread affirmative action) biasing the results?" The cultural model would affirm this, and the stronger HBD models (that I'm aware of) would deny this.

The most direct experiment, of course, would be to abduct a random selection of infants from different genetic groups and get surrogate parents from different cultures to raise them in isolation from the outside world, wait a few generations to see whether the different cultures can maintain their success independently from the genetic groups, and repeat ad nauseam to account for random variation. But this is unethical and would take far more time than most of us would care to spend.

Perhaps a more plausible experiment to affirm the cultural model would be to find a cultural intervention to improve the success of some underperforming genetic group, then successfully implement it in the real world. Then, the question comes down to whether such an effective intervention exists and is practical. As I mentioned, the conservatives and Marxists in the U.S. have their own ideas of a proper intervention, but neither has been able to successfully implement it. A strong HBD model would deny that such an intervention exists, but the cultural model would be consistent with such an intervention existing but being impractical to implement. So an HBD model would have to take the position of the null hypothesis in such an experiment. But due to the sheer number of potential cultural interventions, it would take a lot of failed attempts to provide strong evidence in favor of an HBD model.

Even though the modern progressive "blame Whiteness" position is full of holes, there's still plenty of room open for "cultural improvement" positions (which I am somewhat partial to myself), before going for the full HBD explanation. In the American context, positions in that direction have been espoused by both the black conservatives and the classical Marxists. Naturally, the big difference is in their prescriptions: the former call for the black population to adopt diligence and responsibility to lift itself up, while the latter consider the original prejudice, the current top-down progressive overtures, and the calls for "rugged individualism" to all be tricks to distract the oppressed from rising up against their real oppressors (i.e., the stupidpol position, although I've heard similar things independently from a vocal Marxist friend).

There's no need to "refute" the existence of such a society, because it does not exist, by observation.

My apologies, I misworded that. I meant to express the possibility of such a society.

This model seems to be multiplying entities unnecessarily.

Occam's razor is a principle: it is not a universal law, especially in the social sciences with their confounders upon confounders. The simplest possible strawman HBD model of "higher IQ invariably implies greater relative success" can be easily refuted by the various pre-industrial empires that rose and fell from environmental factors, such as ancient Egypt, which could repeatedly reform around the Nile valley even when the government collapsed, or dynastic China, which couldn't survive contact with the industrialized West, or the Central and South American empires, which couldn't prove themselves one way or another before getting decimated by smallpox.

I'll admit that there haven't been so many clear counterexamples to the "naive HBD" model following the Industrial Revolution in Europe, although it would predict that China and/or Japan will ultimately prevail over the West. The cultural model would attribute the Industrial Revolution to the combination of an environment demanding industrial solutions and a society stable enough to develop them, where the societal stability came from historical and cultural happenstance rather than being predetermined by HBD factors.

Not only do the two well-known justifications you just mentioned argue against each other, they also fail to conform with the observable outcomes. We know that some groups have bad outcomes whether being actively discriminated against or "helped". We know that other groups have bad outcomes when actively discriminated against and do much better when they no longer are.

The two justifications can be aligned pretty easily with a basic path-dependence model: when one cultural group is threatened by another, it either fails to defend itself and becomes persistently unsuccessful, or defends itself and becomes persistently successful, and this initial failure or success can be attributed to temporary environmental, military, or political conditions. Under this model, even if an unsuccessful group receives political or economic "help", it cannot become inherently successful unless its culture changes. (Thus leading to the old debate over whether and how culture can be intentionally changed.)

But the big "advantage" of the cultural explanation is it's difficult enough to disentangle it from genetics that it allows HBD to be unfalsifiably denied.

While it's true that disentangling cultural factors is difficult when trying to explain the overall success of a group, it's a very big mistake to take this as active evidence against culture's importance. I'd also put myself into the "mostly cultural, somewhat genetic" camp. To me, none of the current evidence can plausibly refute the possibility (edit) of a society with a common culture in which no genetic group is far more or less successful than the others, with the genetic factors only showing up as numerical discrepancies.

In other words, under this model, even if pure HBD explains some differences in group outcomes, it does not explain the vast differences in poverty, criminality, etc., seen in our current society. Explanations based on cultural coincidence have plenty of well-known justifications for these, such as past prejudice resulting in persistent negative outcomes, or groups facing hardship becoming more successful through cultural selection. Why shouldn't the pro-HBD crowd have to similarly justify its position that a higher-IQ population (either on average or on the upper tail) will almost invariably result in a far more successful culture?

Indeed. I suppose that the next step of the defense would be that society persistently undervalues art-as-expression: if the general public were aware of its full value, they would pay for art-as-expression, but structural factors and the lack of quantifiable benefits make awareness implausible in the near future. (Compare this to the animal-welfare activist who fights against factory farmers' greed and consumers' apathy: they believe that if the public were aware of the full value of animal welfare, then animal-protection laws would be passed in a heartbeat.)

In this scenario, the best outcome, short of formal subsidies for artists, would perhaps be a large-scale donation model, much like for many orchestras and museums today. But this is still much less accessible to artists than the pre-AI status quo, where art-as-expression maintains a safe existence as a byproduct of art-as-a-product. So it would still make sense for those who value art-as-expression to lament this change beyond the effects on their own lifestyles, given that this particular Pandora's Box isn't getting closed any time soon.

I think there is a certain line of thinking that can plausibly be raised as a defense:

  • The process of freely creating art is valuable as a form of human expression, either per se or because it enriches the human experience in some way. (This position is apparently one which you do not hold, but let's assume for the moment that a large portion of the population does sincerely hold it.)

  • So far, the process of creating art has been subsidized by its products which can be sold: corporate art, commissions, etc. However, in the future, these products are poised to be far more efficiently generated by AI.

  • Without revenue from these products, many of today's artists will be forced to move into other fields, and perhaps curtail their personal output due to no longer having enough time, supplies, or practice. This is bad, since it decreases the quality and quantity of valuable art creation.

  • Similarly, once the creation of art is no longer profitable, the second-order effects start to occur: the entire industry of art education gradually falls apart, and many people become unable to learn the skills to express themselves through art in the way they would prefer.

That is, the process of art-as-human-expression will be impacted negatively by the AI-driven devaluing of art-as-a-commercial-product.

I recall a discussion on LW or SSC (that I am now unable to find), about how many try to find economic justifications for avoiding animal stress, looking for evidence that less-stressed cows (for instance) produce better meat, since that kind of justification is the only form our society will accept: if no such justification can be found, then animal welfare will inevitably get tossed out the window. I interpret @Primaprimaprima's perspective in a similar light; if there is no more value in humans creating art as a product, then there will be nothing left to prop up the tradition of art as an expression, and the world will be worse off for it.

(Whether this assumption of art-as-expression depending on the existence of art-as-a-product holds up in reality is a different question. But it certainly seems like a plausible enough risk to worry about, assuming one values art-as-expression.)

As it happens, I found that Grote usage a couple hours after my initial message. Note that the version you linked to is the 1851 3rd edition; the only 1st-edition scan I could find on IA is missing the title page but otherwise seems intact.

Have people here been reading a different HN than I have? There seems to have always been plenty of anti-lockdown sentiment in the comment section; parts of the HN userbase have a long history of hating government surveillance and control, and I thought I saw plenty of that get translated into anti-lockdown sentiment. Are you claiming that anti-lockdown comments get downvoted less than they used to?

I think I found the 1846 usage! A German translation of a later edition of this book contained "Re-ification" on Google Books.

Grote, George. History of Greece. Vol. 1, John Murray, 1846. Internet Archive, archive.org/details/dli.granth.36583.

Tacitus, in reporting the speech, accompanies it with the glossary "quasi coram," to mark that the speaker here passes into a different order of ideas from that to which himself or his readers were accustomed. If Boiocalus could have heard, and reported to his tribe, an astronomical lecture, he would have introduced some explanation, in order to facilitate to his tribe the comprehension of Hêlios under a point of view so new to them. While Tacitus finds it necessary to illustrate by a comment the personification of the sun, Boiocalus would have had some trouble to make his tribe comprehend the re-ification of the god Hêlios. (Grote 466)

The derivation appears rather simple here: "re-ification" is constructed by analogy with "personification", going from Latin persōna to rēs. Meanwhile, the other foreign-language hits turned out to be OCR errors, except for an 1855 French usage.

Jullien, B. Thèses de grammaire. Librairie de L. Hachette et cie, 1855. Internet Archive, archive.org/details/thsesdegrammaire00jull.

Nous réi-fions, si l'on peut ainsi parler, les êtres animés, c'est-à-dire que nous les prenons comme des choses quand, par notre sentiment actuel, nous considérons en eux plutôt l'être matériel que l'être intelligent. C'est ainsi qu'on dit tous les jours, en parlant d'un enfant, d'un domestique:

C'est propre, c'est rangé;

C'est tranquille, c'est studieux, etc.;

Cela ne fera jamais que ce que je voudrai. (Jullien 149)

[Translation: We "reify", if one may put it that way, animate beings; that is, we take them as things when, by our present feeling, we consider in them the material being rather than the intelligent being. Thus people say every day, speaking of a child or of a servant: "It is clean, it is tidy; it is quiet, it is studious, etc.; it will never do anything but what I want."]

In the index, this passage is cited as "réification opposée à la personnification" (501). Looking at the inflected form réi-fions, it would probably be a good idea to check for variations on the verb "reify". (Edit: No luck on that front.)

Using Google Books, I found two English-language usages of the term from the 1850s; all of the earlier usages appear to be OCR errors. However, there appear to be earlier usages of "Reification" in German and "réification" in French, so I plan to keep looking for those.

"The Principle of the Grecian Mythology: Or, How the Greeks Made Their Gods." Fraser's Magazine for Town and Country, vol. 49, no. 289, Jan. 1854, pp. 69-79. Internet Archive, archive.org/details/sim_frasers-magazine_1854-01_49_289/page/69.

In short, although the process by which the Greeks selected the objects of their Pantheon may very well, in the sense in which we are now viewing the subject, be regarded as a process of deification, the actual march of the Greek mind in its intercourse with nature was not a process of deification, or the conscious conversion of impersonal substances into gods, but the very reverse—a process of what may be called reification, or the conscious conversion of what had hitherto been regarded as living beings into impersonal substances. ("The Principle" 74-75)

This is the earliest English-language usage I could find.

Review of A History of Rome, from the Earliest Times to the Establishment of the Empire, by Henry G. Liddell. The Athenæum, no. 1467, 8 Dec. 1855, pp. 1425–1427. Internet Archive, archive.org/details/sim_athenaeum-uk_1855-12-08_1467/page/1425.

Primeval men began with a world all vitality, and instead of having any room or occasion to employ themselves in what we call deification or the conversion of things into personages, their whole intellectual procedure necessarily consisted in exactly the opposite—in a gradual and difficult effort of reification, or the conversion of personages into things. (Review of A History 1425)

The reviewer here appears to be repeating the argument from Fraser's Magazine, contra Liddell.