The scientific method rests on faith in God and Man.

The so-called "scientific method" is, I think, rather poorly understood. For example, let us consider one of the best-known laws of nature, often simply referred to as the Law of Gravity:

Newton's Law of Universal Gravitation: Every object in the universe attracts every other object toward it with a force proportional to the product of their masses, divided by the square of the distance between their centers of mass.
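Stated as a formula, F = G·m1·m2/r². A quick sketch in Python, using the standard approximate values for the constant and for the Earth-Moon system (the numbers are illustrative round figures, not precision data):

```python
G = 6.674e-11  # Newtonian gravitational constant, N·m²/kg² (approximate)

def gravitational_force(m1, m2, r):
    """Magnitude of the attraction between two masses (kg) whose
    centers of mass are separated by r meters: F = G*m1*m2/r**2."""
    return G * m1 * m2 / r**2

# Example: Earth and Moon, approximate values
earth_mass = 5.972e24   # kg
moon_mass = 7.348e22    # kg
distance = 3.844e8      # m
print(gravitational_force(earth_mass, moon_mass, distance))  # roughly 1.98e20 N
```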

Now here is a series of questions for you, which I often ask audiences when I give lectures on the philosophy of science:

  1. Do you believe Newton's Law of Universal Gravitation is true?
  2. If so, how sure are you that it is true?
  3. Why do you believe it, with that degree of certainty?

The most common answers to these questions are "yes", "very sure", and "because it has been extensively experimentally verified." Those answers sound reasonable to any child of the Enlightenment -- but I submit, on the contrary, that this set of answers has no objective basis whatsoever. To begin with, let us ask: how many confirming experiments do you think would have been done, to qualify as "extensive experimental verification"? I would ask that you, the reader, actually pick a number as a rough, round guess.

Whatever number N you picked, I now challenge you to state the rule of inference that allows you to conclude, from N uniform observations, that a given effect always follows from a given alleged cause. If you dust off your stats book and thumb through it, you will find no such rule of inference there. What you will find are principles that allow you to conclude, from a certain number N of observations, that with confidence c the proportion of positive cases is z, where c < 1 and z < 1. But there is no finite number of observations that would justify, with any nonzero confidence, the claim that a law holds universally, without exception (that is, z can never be 1 for any finite number of observations, no matter how small the desired confidence c is, unless c = 0). And isn't that exactly what laws of nature are supposed to do? For Pete's sake, it is called the law of universal gravitation, and it begins with the universal quantifier every (both of which may have seemed pretty innocuous up until now).
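To make the point concrete, here is a minimal sketch of the kind of conclusion statistics does license. For the special case of N successes in N trials, the exact (Clopper-Pearson) one-sided lower confidence bound on the true success proportion has a closed form, and it stays strictly below 1 for every finite N:

```python
def lower_bound_on_proportion(n, confidence):
    """Exact (Clopper-Pearson) one-sided lower confidence bound on the
    true success proportion z, after observing n successes in n trials.
    For this special case the bound solves z**n = 1 - confidence,
    so z < 1 for every finite n."""
    alpha = 1 - confidence
    return alpha ** (1 / n)

for n in (100, 10_000, 1_000_000):
    print(n, lower_bound_on_proportion(n, 0.95))
# The bound creeps toward 1 as n grows, but never reaches it.
```

Even a million flawless confirmations only license "z is at least 0.999997, with 95% confidence" -- never "z = 1".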

Let me repeat myself for clarity: I am not merely saying that there is no statistical law that would allow you to conclude the law with absolute certainty; absolute certainty is not even on the table. I am saying that there is no statistical law that would justify belief in the law of universal gravitation with even one tenth of one percent of one percent confidence, based on any finite number of observations. My point is that the laws of the physical sciences -- laws like the ideal gas law, the laws of gravity, Ohm's law, etc. -- are not based on statistical reasoning and could never be based on statistical reasoning, if they are supposed, with any confidence whatsoever, to hold universally.

So, if the scientific method is not based on the laws of statistics, what is it based on? In fact it is based on the

Principle of Abductive Inference: Given a general principle as a hypothesis, if we have tried to experimentally disprove the hypothesis, with no disconfirming experiments, then we may infer that it is likely to be true -- with confidence justified by the ingenuity and diligence that has been exercised in attempting to disprove it.

In layman's terms, if we have tried to find and/or manufacture counterexamples to a hypothesis, extensively and cleverly, and found none, then we should be surprised if we then find a counterexample by accident. That is the essence of the scientific method that underpins most of the corpus of the physical sciences. Note that it is not statistical in nature. The methods of statistics are very different, in that they rest on theorems that justify confidence in those methods, under assumptions corresponding to the premises of the theorems. There is no such theorem for the Principle of Abductive Inference -- nor will there ever be, because, in fact, for reasons I will explain below, it is a miracle that the scientific method works (if it works).

Why would it take a miracle for the scientific method to work? Remember that the confidence with which we are entitled to infer a natural law is a function of the capability and diligence we have exercised in trying to disprove it. Thus, to conclude a general law with some moderate degree of confidence (say, 75%), we must have done due diligence in trying to disprove it, to the degree necessary to justify that level of confidence, given the complexity of the system under study. But what in the world entitles us to think that the source code of the universe is so neat and simple, and its human denizens so smart, that we are capable of the diligence that is due?

For an illuminating analogy, consider that software testing is a process of experimentation that is closely analogous to scientific experimentation. In the case of software testing, the hypothesis being tested -- the general law that we are attempting to disconfirm -- is that a given program satisfies its specification for all inputs. Now do you suppose that we could effectively debug Microsoft Office, or gain justified confidence in its correctness with respect to one item of its specification, by letting a weasel crawl around on the keyboard while the software is running, and observing the results? Of course not: the program is far too complex, its behavior too nuanced, and the weasel too dimwitted (no offense to weasels) for that. Now, do you expect the source code of the Universe itself to be simpler and friendlier to the human brain than the source code of MS Office is to the brain of a weasel? That would be a miraculous thing to expect, for the following reason: a priori, the complexity of that source code could be arbitrarily large. It could be a googolplex lines of spaghetti code -- and even that would be an infinitesimally small level of complexity, given the realm of possible complexities -- namely, the right-hand side of the number line.
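The weasel analogy can be made concrete. Below is a hypothetical sketch (the function and its bug are invented for illustration): a program that violates its specification at exactly one input, and a "weasel" harness that pokes it with random keystrokes and hopes to stumble on the counterexample by luck:

```python
import random

def buggy_abs(x):
    """Supposed to satisfy the spec 'returns |x| for every integer x',
    but hides a bug at exactly one input (a contrived example)."""
    if x == 123456789:
        return -x  # the single disconfirming case
    return x if x >= 0 else -x

# "Weasel" testing: hammer the program with random inputs.
random.seed(0)
failures = sum(
    1 for _ in range(100_000)
    if buggy_abs(random.randint(-10**12, 10**12)) < 0
)
print(failures)  # almost certainly 0 -- blind search rarely finds the needle
```

A tester who can read and reason about the code spots the offending branch instantly; a weasel banging on the keyboard essentially never will. The confidence our diligence buys depends entirely on how well our intelligence is matched to the complexity of the thing under test.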

In this light, if the human brain is better equipped to discover the laws of nature than a weasel is to confidently establish the correctness of an item in the spec of MS Office, it would be a stunning coincidence. That is looking at it from the side of the a priori expected complexity of the problem, compared to any finite being's ability to solve it. But there is another side to look from, which is the side of the distribution of intelligence levels of the potential problem-solvers themselves. Obviously, a paramecium, for example, is not equipped to discover the laws of physics. Nor is an octopus, nor a turtle, nor a panther, nor an orangutan. In the spectrum of natural intelligences we know of, it just so happens that there is exactly one kind of creature that just barely has the capacity to uncover the laws of nature. It is as if some cosmic Dungeon Master were optimizing the problem from both sides, by making the source code of the universe just simple enough that the smartest beings within it (that we know of) were just barely capable of solving the puzzle. That is just the Goldilocks situation that good DMs try to achieve with their puzzles: not so hard that they can't be solved, not so easy that the players can't take pride in solving them.

There is a salient counterargument I must respond to. It might be argued that, while it is a priori unlikely that any finite being would be capable of profitably employing the scientific method in a randomly constructed universe, in hindsight of the scientific method having worked for us in this particular universe, we are now entitled, a posteriori, to embrace the Principle of Abductive Inference as a reliable method. My response is that we have no objective reason whatsoever to believe the scientific method has worked in hindsight -- at least not for the purpose of discovering universal laws of nature! I will grant that we have had pretty good luck with science-based engineering in the tiny little speck of the universe observable to us. I will even grant that this justifies the continued use of engineering for practical purposes with relative confidence -- under the laws of statistics, so long as, say, one anomaly per hundred thousand hours of use is an acceptable risk. But this gives no objective reason whatsoever (again under the laws of statistics) to believe that any of the alleged "laws of nature" we talk about is actually a universal law. That is to say, if you believe, with even one percent confidence, that we ever have, or ever will, uncover a single line of the source code of the universe -- a single law of Nature that holds without exception -- then you, my friend, believe in miracles. There is no reason to expect the scientific method to work, and good reason to expect it not to work -- unless the human mind was designed to be able to uncover and understand the laws of nature, by Someone who knew exactly how complex they are.


Can one shut up and multiply their way to a problem of evil? Can you, like, multiply a quaternion by an elliptic curve, and it somehow pops out in there somewhere?

Oh yes, quaternions are an example of a necessary evil, or so programmers trying to work in 3D tell me. Whatever it takes to avoid gimbal lock must be necessarily good. I'm sure the NSA might call elliptic curves evil, depending on whether or not they had a chance to put a backdoor or two in cryptographic standards.

Don't be facetious. Or remember your demands to "speak plainly". You need unprincipled* definitions of what counts as "evil" (or good) before you can start. Once you have a working definition, based off whatever moral intuitions you spawned with, then yes, you can do moral calculus and figure out an optimum, but your ability to do so as a computationally bounded entity will always be limited. Doesn't mean you can't try. Though you don't need quaternions or elliptic curves; addition, subtraction or multiplication should suffice. Certainly mere multiplication of massive numbers of matrices seems sufficient to create intelligence by itself, which is a necessity for discerning good from bad.

*Feel free to dispute this when you can argue Socrates into the ground.

I rather like being playfully jocular or humorous. It really makes it plain and clear what's going on. Let me see if I can understand what you're saying. One can just pull some unprincipled definition of what counts as "evil" (or good)... any definition, literally does not matter. Like, your entire moral system could just be, "Ponzi schemes are good, actually," and that's it. That's all you've got. Then, we don't even need the bad old methods of calculation that required quaternions or elliptic curves; we can just add, subtract, and multiply. I'm not actually sure what to do in the next step. Is there another form of Bayes' rule that I can use to make progress? Maybe an inscrutable matrix form of it? I can't find one on Wikipedia. Maybe there's a NeurIPS paper you can link me to?

I'm not opposed to shutting up and multiplying, and I'm pretty good at coding. I just need to know what expression I'm supposed to use in my code. I'm tryna get to some way of commenting on the problem of evil.

This question is, in a strange way, sort of related to my own work that is squarely within my domain of expertise. There is a long history in the literature of a theoretical construct that is sort of related, in a way, to the actual thing we want to know. I observed that there are basically zero, AFAICT, papers out there that actually use the theoretical thing to go on and compute the thing that we actually want to know. It just doesn't seem to be a thing that you can actually do. So decades of papers just get to the point of the theoretical construct, and then stop. There is no actual coding of the thing we actually want - the thing that is actually useful - the thing that was the entire point of the investigation in the first place. It seems to be basically not possible to actually just shut up and compute it. And so the best paper of my career came at the problem from a completely different direction, saying that if we go a different route, we can have the thing that we want, with most of the properties that we were hoping to have. It has spawned a mini-literature of folks building on it now, since they're actually able to shut up and multiply now. So, by all means, let's figure out how to shut up and multiply our way from, "Ponzi schemes are good, actually," to a problem of evil. But it needs to be something that I can code, since that is the premise we're starting with.

I'm calling you out for exactly this kind of facetiousness. It's doubly laughable when you've been so humorless with me before, so I don't care to indulge your witticisms.

As far as I'm concerned morality is arbitrary and inherently subjective, there's as much reason to think otherwise as there is to suspect another missing integer between 2 and 3, despite many people fervently wishing otherwise. I've discussed my thoughts at length on it before, and you're welcome to trawl through things if you want to dig it up.

And indeed, arbitrary means arbitrary. We thankfully don't have many people around who think Ponzi schemes are good (instead of thinking they're good because they're duped and don't understand them, a whole different kettle of fish). That is a fact about evolution, and the pressures exerted on it by game theoretical concerns, and us being extremely conserved and morally similar to each other compared to the entire set of possible minds and their morality. Even monkeys and dogs care about "fairness" and dislike perceived injustice and hostility to the ingroup. Hence it's no surprise that human morality is so similar and violent disagreements tend to happen at the edge cases where the concordance breaks down.

Your morality is what your morality is. Attempting to ground it will get you nowhere.

Thankfully, consequentialism, especially utilitarianism, can be computed explicitly. Look at GiveWell or the EA movement for a good effort with the latter.

So, by all means, let's figure out how to shut up and multiply our way from, "Ponzi schemes are good, actually," to a problem of evil. But it needs to be something that I can code, since that is the premise we're starting with.

Ah! I can make it really simple for you, without even getting into first principles:

  1. Find a paper on building the Transformer architecture/RWKV/Mamba or whatever is in vogue these days. I presume NeurIPS will suffice.

  2. Feed it a massive corpus of text. The scrapable internet is a good start.

  3. Stir the pot. (This is where the matrix multiplications come in, for those curious)

  4. Behold, an LLM with an intuitive understanding of morality.

(You may insert step 3.5 of RLHF if you're so inclined, it's not strictly necessary)

Tada, taking mere matrix multiplication and your ability to code, you have created an intelligence with knowledge of morality, and can modify it further with RLHF if you're feeling safety-ist or want to twist it to arbitrary ends. All done with a computer! In code!
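Step 3 above is not a joke: the "stirring" is, at bottom, repeated matrix multiplication. A toy sketch in pure Python (the embeddings and weights here are made-up numbers, standing in for billions of learned parameters):

```python
def matmul(A, B):
    """Plain matrix multiplication -- the whole of 'stirring the pot',
    repeated a few trillion times during training."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# A toy "layer": two token embeddings (dim 3) times a 3x2 weight matrix.
embeddings = [[1.0, 0.0, 2.0],
              [0.5, 1.0, 0.0]]
weights = [[0.1, 0.2],
           [0.3, 0.4],
           [0.5, 0.6]]
print(matmul(embeddings, weights))  # two transformed token vectors
```

A real Transformer stacks many such multiplications (plus attention and nonlinearities), but nothing in the pipeline is anything other than arithmetic on arrays of numbers -- which is the point being made.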

Easy enough for you I presume? I can show you about twenty proofs of concept.

As far as I'm concerned morality is arbitrary and inherently subjective

See, this is why I sort of don't believe you when you say that we should just shut up and multiply. It doesn't seem like that's really what you think. If it's, "Shut up and multiply, but oh by the way, you definitely can't multiply," then it really means, "Shut up."

Thankfully, consequentialism, especially utilitarianism, can be computed explicitly. Look at GiveWell or the EA movement for a good effort with the latter.

So, do you think this, or do you think that it's arbitrary, inherently subjective, and that we can't multiply?

Ah! I can make it really simple for you, without even getting into first principles:
  1. Find a paper on building the Transformer architecture/RWKV/Mamba or whatever is in vogue these days. I presume NeurIPS will suffice.

  2. Feed it a massive corpus of text. The scrapable internet is a good start.

  3. Stir the pot. (This is where the matrix multiplications come in, for those curious)

  4. Behold, an LLM with an intuitive understanding of morality.

Ok, so I guess now we have a third option. I could, 1) Shut up and multiply, but not multiply because that doesn't work, so just shut up, 2) Adopt consequentialism/utilitarianism and try to fire up a calculator, or 3) Just download some weights, hell, Llama-7b is enough, right? That'll then appropriately and accurately get me to a correct problem of evil?

Which one is the answer, because they're all different answers, and I suspect they'll give me different results. What does an empiricist as anti-epistemologist do with three different empirical answers to the same question?

See, this is why I sort of don't believe you when you say that we should just shut up and multiply. It doesn't seem like that's really what you think. If it's, "Shut up and multiply, but oh by the way, you definitely can't multiply," then it really means, "Shut up."

What on earth gives you that takeaway? It was never on the menu.

You can perform utilitarian calculus. GiveWell certainly adds, subtracts, multiplies, and divides to figure out what is the most cost-effective, hence most moral in their lights, means of saving or extending human life.

In practice, everyone is a consequentialist, even the deontologists who deny it; they just frontload the consequences. Otherwise they would have no choice but to halt and catch fire whenever two Kantian imperatives conflict, and for all the ill I can speak of them, they're not that dysfunctional.

That'll then appropriately and accurately get me to a correct problem of evil?

Buddy. I clearly stated that morality is fundamentally arbitrary/subjective/observer dependent and as far as I can tell, there is no good reason to think that's not the case barring wishful thinking.

There is no "correct problem of evil". That is a malformed question built on faulty assumptions.

If you're a consequentialist, especially one of the more well-defined strains of utilitarian, you can very much shut up and multiply from base principles, like "saving human lives is good", with further adjustments like discounting by quality of life. Hence QALYs and DALYs. For the record, I am neither a utilitarian in the Benthamian sense nor an EA.
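The QALY arithmetic referred to here really is just multiplication and division. A minimal sketch, with entirely made-up costs and quality weights chosen for illustration:

```python
def qalys(years_gained, quality_weight):
    """Quality-adjusted life years: life-years gained, scaled by a
    quality-of-life weight between 0 (dead) and 1 (full health)."""
    return years_gained * quality_weight

def cost_per_qaly(cost, years_gained, quality_weight):
    """Lower is better: dollars spent per quality-adjusted life year."""
    return cost / qalys(years_gained, quality_weight)

# Two hypothetical interventions (all numbers invented for illustration):
a = cost_per_qaly(cost=5_000, years_gained=10, quality_weight=0.9)  # ~$556/QALY
b = cost_per_qaly(cost=2_000, years_gained=2, quality_weight=0.6)   # ~$1667/QALY
print(a, b)  # under this framework, intervention a is the better buy
```

Note that the base principle ("saving human lives is good") and the quality weights are inputs to the calculation, not outputs of it -- which is exactly the point about needing axioms before you can multiply.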

If you're not a consequentialist, well, you actually are, deep down, just to be a functioning human, but there is no right or wrong here that is independent of a predefined frame of reference. In other words, an observer.

If, somehow you're actually an LLM, then you've achieved a working understanding of gestalt human morality simply by being fed an enormous amount of text and then doing multiplication (of matrices). It is obviously doable. A randomly instantiated neural net will predictably achieve it.

See, this is why I sort of don't believe you when you say that we should just shut up and multiply. It doesn't seem like that's really what you think. If it's, "Shut up and multiply, but oh by the way, you definitely can't multiply," then it really means, "Shut up."

What on earth gives you that takeaway? It was never on the menu.

Because you're still saying stuff like:

I clearly stated that morality is fundamentally arbitrary/subjective/observer dependent and as far as I can tell, there is no good reason to think that's not the case barring wishful thinking.

That does not sound like something I can just multiply my way to. If it were, you could also just multiply, and we'd get the same thing. Because that's the bit about empiricism. You know, the article that you linked.

Whereas, again, this:

You can perform utilitarian calculus

is option 2. That's not "fundamentally arbitrary/subjective/observer dependent". It's a different option.

If, somehow you're actually an LLM, then you've achieved a working understanding of gestalt human morality simply by being fed an enormous amount of text and then doing multiplication (of matrices). It is obviously doable. A randomly instantiated neural net will predictably achieve it.

And this is a really weird option 3. So, yeah, I think you've again given me all three options. They're going to give me different answers. Which option is the right one? What does an empiricist as anti-epistemologist do with three different empirical answers to the same question?

There is no right or wrong option. To restate my point, that is a fundamentally confused question unless you begin with an axiomatic moral framing.

You cannot acquire morality from first principles. You can only execute on it once you have somehow acquired it, including, yes, with basic and complicated mathematics.

Ok, so let me see if I'm understanding you correctly. You reject the possibility of objective morality, but I think you might also be rejecting moral error theory. I think this means that you land somewhere in the land of meta-ethical moral relativism. Of course, I think this also rejects the core underpinning of the project of consequentialism/utilitarianism as being an objective basis for morality. Is this about right?

Then, I think the next move in the relativist frame is to say that individuals simply adopt whatever axioms they choose to adopt. So, like, if someone adopted the axiom that Ponzi schemes are good, or the axiom that Llama-7b is the best arbiter of their morality, or the axiom that if there were a way to give ALL to two-year-olds, it would be good to do so... then, that's just totally and completely fine. That's their axiom, and all that can be done is carrying out that axiom with basic and complicated mathematics. Is that about right?

Then, let's suppose that you and someone else have adopted different axioms. They adopted the axiom that it is good to give ALL to two-year-olds (if such a thing were technologically possible), while you adopt the axiom that it is bad. Presumably, you would say that they have no objective grounds on which to claim that your axiomatic system is wrong, and likewise, you would have no objective grounds on which to claim that their axiomatic system is wrong. You would both just go about using basic and complicated mathematics to satisfy your respective axioms, you'd both just shut up and multiply, and no other statements can be made about the situation. Is that about right?
