iro84657

0 followers   follows 0 users   joined 2022 September 07 00:59:18 UTC
User ID: 906   Verified Email

The greater the difficulty, the more glory in surmounting it. Skillful pilots gain their reputation from storms and tempests. — Epicurus.

There was some discussion about this quote a few days ago on ACX. It's plastered all over the Internet, attributed to either Epicurus or Epictetus, but no one there could determine which work it came from. I did some searching and found that it actually comes from an essay by the 17th-century Frenchman Jean-François Sarasin, which was falsely attributed to Charles de Saint-Évremond by its publisher, rendered into English as an appendix to a translation of Epicurus, then abridged into its current form in the popular book of quotations The Rule of Life. I briefly wrote up the results of my investigation there, which some may find interesting. Even that is the short version, though: it really took me a couple of days to trawl through all the variations on Google Books. Apparently, back in the day, random aphorisms were very frequently used to fill up empty space in the corners of magazines.

MIT grad and alleged IQ of 180.

I wouldn't put much weight in that allegation. By definition, only 463 people in the world have an IQ that high, which would come out to about 19 people in the U.S. (not accounting for any population bias). I'd be surprised if there were fewer than 30 people in the U.S. we don't hear about who have greater general intelligence than John Sununu, or for that matter anyone else we do hear about. I suppose it's not impossible that an IQ test spat out that number, but I wouldn't trust any test result that far into the extreme end.
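For reference, here's a minimal sketch of how that kind of rarity figure is computed, assuming IQ is normed to a normal distribution with mean 100 and SD 15 (the exact head-counts depend on the SD convention and the population figures plugged in, which is presumably why published estimates vary):

    import math

    def iq_tail(iq, mean=100.0, sd=15.0):
        # Upper-tail probability of a normally distributed IQ at the given score.
        z = (iq - mean) / sd
        return 0.5 * math.erfc(z / math.sqrt(2.0))  # normal survival function

    p = iq_tail(180)  # z = 5.33, i.e. roughly 1 in 20 million
    print(f"P(IQ >= 180) = {p:.1e}")
    print(f"World, 8.0e9 people: {8.0e9 * p:.0f}")  # a few hundred
    print(f"U.S., 3.3e8 people: {3.3e8 * p:.0f}")   # a couple dozen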

Hmm, how would you define "substantial" here? I'm also intensely skeptical of a Singularity or other fundamental change in the human condition, but I find it very plausible that LLMs could destroy the pseudonymous internet as we know it, by turning it into a spambot hell devoid of useful information. (I'm imagining all sorts of silly stuff like people returning to handwritten letters as a signal of authenticity.) Life would move on, but I'd certainly mourn the loss of the modern internet, for all its faults.

As it happens, your latter point lines up with my own idle musings, to the effect of, "If our reality is truly so fragile that something as banal as an LLM can tear it asunder, then does it really deserve our preservation in the first place?" The seemingly impenetrable barrier between fact and fiction has held firm for all of human history so far, but if that barrier were ever to be broken, its current impenetrability would have to be an illusion. And if our reality isn't truly bound to any hard rules, then what's even the point of it all? Why must we keep up the charade of the limited human condition?

That's perhaps my greatest fear, even more so than the extinction of humanity by known means. If we could make a superintelligent AI that could invent magic bullshit at the drop of a hat, regardless of whether it creates a utopia or kills us all, it would mean that we already live in a universe full of secret magic bullshit. And in that case, all of our human successes, failures, and expectations are infinitely pedestrian in comparison.

In such a lawless world, the best anyone can do is have faith that there isn't any new and exciting magic bullshit that can be turned against them. All I can hope for is that we aren't the ones stuck in that situation. (Thus I set myself against most of the AI utopians, who would gladly accept any amount of magic bullshit to further the ideal society as they envision or otherwise anticipate it. To a lesser extent I also set myself against those seeking true immortality.) Though if that does turn out to be the kind of world we live in, I suppose I won't have much choice but to accept it and move on.

I'd concur that this is more of an annoying semantic trick than anything else. It is never denied that 2 + 2 = 4 within the group of integers under addition (or a group containing it as a subgroup), a statement that the vast majority of people would know perfectly well. Instead, you just change the commonly understood meaning of one or more of the symbols "2", "4", "+", or "=", without giving any indication of this. Most people consider the notation of integer arithmetic to be unambiguous in a general context, so for this to make any sense, you'd have to establish that the alternative meaning is so widespread as to require the notation to always be disambiguated.
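To illustrate the trick with a hypothetical sketch (not anyone's actual argument): keep the familiar symbols, but silently swap out the system they denote.

    # "2 + 2 = 4" holds in the integers. Reinterpret "+" as addition in
    # Z/4Z (the integers mod 4), and the same symbols yield a different result.
    def add_mod4(a, b):
        return (a + b) % 4

    print(2 + 2)           # 4 -- the commonly understood meaning
    print(add_mod4(2, 2))  # 0 -- same "2"s, different (undisclosed) "+"

Without an upfront note that the arithmetic is being done mod 4, no reader would be wrong to parse the bare notation as ordinary integer addition.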

(There's also the epistemic idea that we can't know that 2 + 2 = 4 within the integers with complete certainty, since we could all just be getting fooled every time we read a supposedly correct argument. But this isn't really helpful without any evidence, since the absence of a universal conspiracy about a statement so trivial should be taken as the null hypothesis. It also isn't relevant to the statement being untrue in your sense, since it's no less certain than any other knowledge about the external world.)

Have people here been reading a different HN than I have? There seems to have always been plenty of anti-lockdown sentiment in the comment section; parts of the HN userbase have a long history of hating government surveillance and control, and I thought I saw plenty of that get translated into anti-lockdown sentiment. Are you claiming that anti-lockdown comments get downvoted less than they used to?

I'm pretty sure that this is a continuation of an earlier discussion in this thread on Friday. The "heroine's journey" is used here as a neologism.

As I understand it, the main idea is that the (U.S.) pharmaceutical industry has been covering up hundreds of thousands of deaths and other adverse effects in their drug trials, using bogus statistical analysis to fool everyone about the efficacy of their drugs, and colluding with government agencies to disallow any alternatives. Thus, we should be immensely distrustful of any and all "evidence-based" medical information, and we should spread this idea in order to convince people to rebuild the medical establishment from the ground up. (I don't personally endorse this argument.)

Honestly, I'd rather watch an AI chess tournament with teams of grandmasters training the AI and no limits on human computer involvement. Maybe have classes set by which processors, and how many of them, can be used.

A world chess engine championship has existed since 1974: https://en.wikipedia.org/wiki/World_Computer_Chess_Championship

If a state legislature decided to ignore all votes for Trump when selecting their electors, then those voters might well have a case under Section 2 of the 14th Amendment. (Unless those voters' "participation in rebellion" could be decided by the states?) Of course, a state might find the constitutional penalty of losing electors superior to the possibility of a Trump victory, if the latter has any real chance of occurring at all.

A difference would still lie in whether Congress alone can disqualify a candidate without the involvement of the judiciary, given that they don't have the power to pass bills of attainder.

How about: "If a baby is so fragile that it can't take a punch, does it really deserve our preservation in the first place?"

Sorry to speculate about your mental state, but I suggest you try practicing stopping between "This is almost inevitable" and "Therefore it's a good thing".

Well, my framing was a bit deliberately hyperbolic; obviously, with all else equal, we should prefer not to all die. And this implies that we should be very careful about not expanding access to the known physically-possible means of mass murder, through AI or otherwise.

Perhaps a better way to say it is, if we end up in a future full of ubiquitous magic bullshit, then that inherently comes at a steep cost, regardless of the object-level situation of whether it saves or dooms us. Right now, we have a foundation of certainty about what we can expect never to happen: my phone can display words that hurt me, but it can't reach out and slap me in the face. Or, more importantly to me, those with the means of making my life a living hell have not the motive, and those few with the motive have not the means. So it's not the kind of situation I should spend time worrying about, except to protect myself by keeping the means far away from the latter group.

But if we were to take away our initial foundation of certainty, revealing it to be illusory, then we'd all turn out to have been utter fools to count on it, and we'd never be able to regain any true certainty again. We can implement a "permanent 'alignment' module or singleton government" all we want, but how can we really be sure that some hyper–Von Neumann or GPT-9000 somewhere won't find a totally-unanticipated way to accidentally make a Basilisk that breaks out of all the simulations and tortures everyone for an incomprehensible time? Not to even mention the possibility of being attacked by aliens having more magic bullshit than we do. If the fundamental limits of possibility can change even once, the powers that be can do absolutely nothing to stop them from changing again. There would be no sure way to preserve our "baby" from some future "punch".

That future of uncertainty is what I am afraid of. Thus my hyperbolic thought, that I don't get the appeal of living in such a fantastic world at all, if it takes away the certainty that we can never get back; I find such a state of affairs absolutely repulsive. Any of our expectations, present or future, would be predicated on the lie that anything is truly implausible.

Here, I'm using "the laws of arithmetic" as a general term to refer to the rules of all systems of arithmetic in common usage, where a "system of arithmetic" refers to the symbolic statements derived from any given set of consistent axioms and well-defined notations. I am not assuming that the rules of integer arithmetic will apply to systems of arithmetic that are incompatible with integer arithmetic but use the exact same notation. I am assuming that no one reasonable will use the notation associated with integer arithmetic to denote something incompatible with integer arithmetic, without first clarifying that an alternative system of arithmetic is in use.

Furthermore, I assert that it is unreasonable to suppose that the notation associated with integer arithmetic might refer to something other than the rules of integer arithmetic in the absence of such a clarification. This is because I have no evidence that any reasonable person would use the notation associated with integer arithmetic in such a way, and without such evidence, there is no choice but to make assumptions of terms ordinarily having their plain meanings, to avoid an infinite regress of definitions used to clarify definitions.

It's an assumption about the meaning of the question, not an assumption about the actual laws of arithmetic, which are not in question. The only lesson to be learned is that your interlocutor's terminology has to be aligned with yours in order to meaningfully discuss the subject. This has nothing to do with how complicated the subject is, only with how ambiguous its terminology is in common usage; terminology is an arbitrary social construct. And my point is that this isn't even a very good example, since roughly no one uses standard integer notation to mean something else without first clarifying the context. Far better examples can be found, e.g., in the paper where Shackel coined the "Motte and Bailey Doctrine", which focuses on a field well known for ascribing esoteric or technical meanings to commonplace terms.

Sure, but at that point you're just engaging in magical speculation, that "capabilities" at the scale of the mere human Internet will allow an AGI to simulate the real world from first principles and skip any kind of R&D work. The problem, as I see it, is that cheap nanotechnology and custom viruses are problems far past what we have already researched as humans: at some point, the AGI will hit a free variable that can't be nailed down with already-collected data, and it will have to start running experiments to figure it out.

I'm aware that Yudkowsky believes something to the effect of the omnipotence of an Internet-scale AGI (that if only our existing data were analyzed by a sufficiently smart intelligence, it would effortlessly derive the correct theory of everything), but I'm not willing to entertain the idea without any proposed mechanism for how the AGI extrapolates the known data to an arbitrary accuracy. After all, without a plausible mechanism, AGI x-risk fears become indistinguishable from Pascal's mugging.

That's why I'm far more partial to scenarios where the AGI uses ordinary near-future robots (or convinces near-future humans) to safeguard its experiments, or where it escapes undetected and nudges human scientists to do its research before it makes its real move. (I have overall doubts about it even being possible for AGI to go far past human capabilities with near-future technology, but that is beside the point here.)

The problem is, how would a hostile AGI develop nanobot clouds without spending significant time and resources, to the point that humans notice its activities and stop it before the nanobots are ready? It might make sense for the AGI to use "off-the-shelf" robot hardware, at least to initially establish its own physical security while it develops killer nanobots or designer viruses or whatever.

The climate-change threat does seem somewhat more plausible: just find some factories with the active ingredients and blow them up (or convince someone to blow them up). But I'd be inclined to think that most atmospheric contaminants would take at least months if not years to really start hitting human military capacity, unless you have some particular fast-acting example in mind.

I think I found the 1846 usage! A German translation of a later edition of this book turned up on Google Books containing "Re-ification".

Grote, George. History of Greece. Vol. 1, John Murray, 1846. Internet Archive, archive.org/details/dli.granth.36583.

Tacitus, in reporting the speech, accompanies it with the glossary "quasi coram," to mark that the speaker here passes into a different order of ideas from that to which himself or his readers were accustomed. If Boiocalus could have heard, and reported to his tribe, an astronomical lecture, he would have introduced some explanation, in order to facilitate to his tribe the comprehension of Hêlios under a point of view so new to them. While Tacitus finds it necessary to illustrate by a comment the personification of the sun, Boiocalus would have had some trouble to make his tribe comprehend the re-ification of the god Hêlios. (Grote 466)

The derivation appears rather simple here: "re-ification" is constructed by analogy with "personification", going from Latin persōna to rēs. Meanwhile, the other foreign-language hits turned out to be OCR errors, except for an 1855 French usage.

Jullien, B. Thèses de grammaire. Librairie de L. Hachette et cie, 1855. Internet Archive, archive.org/details/thsesdegrammaire00jull.

We réi-fy ("réi-fions"), if one may put it that way, animate beings; that is to say, we take them as things when, by our present feeling, we consider in them the material being rather than the intelligent being. It is thus that one says every day, in speaking of a child or a servant:

It is clean, it is tidy;

It is quiet, it is studious, etc.;

That will never do anything but what I want. (Jullien 149)

In the index, this passage is cited as "réification opposée à la personnification" ("reification opposed to personification") (501). Looking at the inflected form réi-fions, it would probably be a good idea to check for variations on the verb "reify". (Edit: No luck on that front.)

Once, I noticed a person talk about an event which occurred "on the 5th Dezember". I presume they spoke German, the phrase being a literal rendering of "am 5. Dezember".

I suppose that the idea here is to work backwards: given that the argument is correct and that the premises imply the conclusion, it is inconsistent to accept the premises and reject the conclusion. So if you do reject the conclusion (as most people do), then the reader is challenged to either reject one or more of the premises, or to find a fault in the argument that makes the implication not hold. This is a standard case of working backward from moral intuitions to check that the foundations make any sense.
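Formally, the move is just modus tollens; here's a minimal sketch in Lean-style notation (my own rendering, not the original argument's):

    -- If the premises P jointly entail the conclusion Q, then rejecting Q
    -- commits one to rejecting P (or to faulting the entailment itself).
    theorem reject_premise {P Q : Prop} (entails : P → Q) (hnq : ¬Q) : ¬P :=
      fun hp => hnq (entails hp)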

Where exactly is the boundary on assets and liabilities that go into net worth? For instance, the sum total of all my future labor is valuable, and events in the present can increase or decrease that value, but it generally wouldn't be included in my net worth (except under the utilitarian accounting that some like to use in these circles). Is the distinction based solely on risk? Are valid sources of personal wealth just enumerated in a list somewhere? Is there some other metric that everyone uses?

I think there is a certain line of thinking that can plausibly be raised as a defense:

  • The process of freely creating art is valuable as a form of human expression, either per se or because it enriches the human experience in some way. (This position is apparently one which you do not hold, but let's assume for the moment that a large portion of the population does sincerely hold it.)

  • So far, the process of creating art has been subsidized by its products which can be sold: corporate art, commissions, etc. However, in the future, these products are poised to be far more efficiently generated by AI.

  • Without revenue from these products, many of today's artists will be forced to move into other fields, and perhaps curtail their personal output due to no longer having enough time, supplies, or practice. This is bad, since it decreases the quality and quantity of valuable art creation.

  • Similarly, once the creation of art becomes no longer profitable, the second-order effects start to occur: the entire industry of art education gradually falls apart, and many people become unable to learn the skills to express themselves through art in the way they would prefer.

That is, the process of art-as-human-expression will be impacted negatively by the AI-driven devaluing of art-as-a-commercial-product.

I recall a discussion on LW or SSC (that I am now unable to find), about how many try to find economic justifications for avoiding animal stress, looking for evidence that less-stressed cows (for instance) produce better meat, since that kind of justification is the only form our society will accept: if no such justification can be found, then animal welfare will inevitably get tossed out the window. I interpret @Primaprimaprima's perspective in a similar light; if there is no more value in humans creating art as a product, then there will be nothing left to prop up the tradition of art as an expression, and the world will be worse off for it.

(Whether this assumption of art-as-expression depending on the existence of art-as-a-product holds up in reality is a different question. But it certainly seems like a plausible enough risk to worry about, assuming one values art-as-expression.)

On October 25, 2020, I tried my hand at the prediction game, registering a prediction elsewhere:

Supposing that Joe Biden is unambiguously held by the mainstream media to have won the 2020 election, Donald Trump will accept his defeat by December 7, 2020, and will leave the White House on January 20, 2021, with 96% probability.

It had been clear by then that the election results would be a mess, but I'd been strongly convinced by the narrative that Trump would make a ruckus for a few weeks to appease his supporters, then lie low until running again in 2024. Needless to say, I was very surprised when he kept contesting the results well past the Electoral College vote in December; I accepted its legitimacy as coming directly from the Constitution, and I'd thought Trump would similarly respect it. I suppose he simply isn't as much of a traditionalist as I'd judged him to be, given MAGA and all that.

Anyway, being disillusioned, I stopped keeping track of anything Trump-related after January 2021. But given that he apparently intends to run again, does anyone have any good, informative summaries of what he's been up to since then?

My beliefs are binary. Either I believe in something or I don't. I believe everyone's beliefs are like that. But people who follow Bayesian thinking confuse certainty with belief.

In your view, is "believing" something equivalent to supposing it with 100% certainty (or near-100% certainty)?

I have a strong suspicion that your epistemic terminology is very different from most other people's, and they aren't going to learn anything from your claims if you use your terminology without explaining it upfront. For instance, people might have been far more receptive to your "2 + 2" post if you'd explained what you mean by an "assumption", since most people here were under the impression that by an "assumption" you meant a "strong supposition". So it's hard to tell what you mean by "people who follow Bayesian thinking confuse certainty with belief" if we misunderstand what you mean by "certainty" or "belief". Is a "belief" a kind of "supposition", or is it something else entirely?

The "laws of arithmetic" that are relevant depend 100% on what arithmetic we are talking about, therefore it's imperative to know which arithmetic we are talking about.

Then please stop assuming that my uncountable usage of "the concept of arithmetic in general" in that sentence is secretly referring to your countable idea of "a single arithmetic". I've clarified my meaning twice now; I'd appreciate it if you actually responded to my argument instead of repeatedly hammering on that initial miscommunication.

People assume it's the normal arithmetic and cannot possibly be any other one. There is zero doubt in their minds, and that's the problem I'm pointing out.

Why should there be any doubt in their minds, if other systems of arithmetic are never denoted with that notation without prior clarification?

The premises are logically independent from each other: only the conclusions are derived from the premises. If you reject any of the premises, then the entire argument is moot. The point of the argument is to show that rejecting the ultimate conclusion requires one to reject one or more of the premises (or to show that the inference does not hold).