
The scientific method rests on faith in God and Man.

The so-called "scientific method" is, I think, rather poorly understood. For example, let us consider one of the best-known laws of nature, often simply referred to as the Law of Gravity:

Newton's Law of Universal Gravitation: Every object in the universe attracts every other object toward it with a force proportional to the product of their masses, divided by the square of the distance between their centers of mass.

Now here is a series of questions for you, which I often ask audiences when I give lectures on the philosophy of science:

  1. Do you believe Newton's Law of Universal Gravitation is true?
  2. If so, how sure are you that it is true?
  3. Why do you believe it, with that degree of certainty?

The most common answers to these questions are "yes", "very sure", and "because it has been extensively experimentally verified." Those answers sound reasonable to any child of the Enlightenment -- but I submit, on the contrary, that this set of answers has no objective basis whatsoever. To begin with, let us ask, how many confirming experiments do you think would have been done, to qualify as "extensive experimental verification." I would ask that you, the reader, actually pick a number as a rough, round guess.

Whatever number N you picked, I now challenge you to state the rule of inference that allows you to conclude, from N uniform observations, that a given effect always follows from a given alleged cause. If you dust off your stats book and thumb through it, you will find no such rule of inference there. What you will find are principles that allow you to conclude, from a certain number N of observations, that with confidence c the proportion of positive cases is z, where c < 1 and z < 1. But there is no finite number of observations that would justify, with any nonzero confidence, that any law holds universally, without exception (that is, z can never be 1 for any finite number of observations, no matter how small the desired confidence c is, unless c = 0). And isn't that exactly what laws of nature are supposed to do? For Pete's sake, it is called the law of universal gravitation, and it begins with the universal quantifier every (both of which may have seemed pretty innocuous up until now).
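To make the statistical point concrete, here is a small sketch (my own illustration, using the standard exact binomial lower bound, not anything from the original post) of the best generality z that N uniform confirming observations can buy you at a fixed confidence c:

```python
# With N confirming observations and zero failures, the exact one-sided
# (Clopper-Pearson) lower confidence bound on the true proportion z of
# positive cases, at confidence level c, is z = (1 - c) ** (1 / N).
# The bound approaches 1 as N grows, but never reaches it.
def lower_bound(n: int, c: float) -> float:
    return (1 - c) ** (1 / n)

for n in (100, 10_000, 1_000_000):
    print(n, lower_bound(n, 0.95))
```

Even a million flawless observations leave the 95%-confidence bound strictly below 1: more data tightens z toward 1, but no finite N delivers z = 1 at any nonzero confidence.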

Let me repeat myself for clarity: I am not merely saying that there is no statistical rule that would allow you to conclude the law with absolute certainty; absolute certainty is not even on the table. I am saying that there is no statistical law that would justify belief in the law of universal gravitation with even one tenth of one percent of one percent confidence, based on any finite number of observations. My point is that the laws of the physical sciences -- laws like the ideal gas law, the laws of gravity, Ohm's law, etc. -- are not based on statistical reasoning and could never be based on statistical reasoning, if they are supposed, with any confidence whatsoever, to hold universally.

So, if the scientific method is not based on the laws of statistics, what is it based on? In fact it is based on the

Principle of Abductive Inference: Given a general principle as a hypothesis, if we have tried to experimentally disprove the hypothesis, with no disconfirming experiments, then we may infer that it is likely to be true -- with confidence justified by the ingenuity and diligence that have been exercised in attempting to disprove it.

In layman's terms, if we have tried to find and/or manufacture counterexamples to a hypothesis, extensively and cleverly, and found none, then we should be surprised if we then find a counterexample by accident. That is the essence of the scientific method that underpins most of the corpus of the physical sciences. Note that it is not statistical in nature. The methods of statistics are very different, in that they rest on theorems that justify confidence in those methods, under assumptions corresponding to the premises of the theorems. There is no such theorem for the Principle of Abductive Inference -- nor will there ever be, because, in fact, for reasons I will explain below, it is a miracle that the scientific method works (if it works).

Why would it take a miracle for the scientific method to work? Remember that the confidence with which we are entitled to infer a natural law is a function of the capability and diligence we have exercised in trying to disprove it. Thus, to conclude a general law with some moderate degree of confidence (say, 75%), we must have done due diligence in trying to disprove it, to the degree necessary to justify that level of confidence, given the complexity of the system under study. But what in the world entitles us to think that the source code of the universe is so neat and simple, and its human denizens so smart, that we are capable of the diligence that is due?

For an illuminating analogy, consider that software testing is a process of experimentation that is closely analogous to scientific experimentation. In the case of software testing, the hypothesis being tested -- the general law that we are attempting to disconfirm -- is that a given program satisfies its specification for all inputs. Now do you suppose that we could effectively debug Microsoft Office, or gain justified confidence in its correctness with respect to one item of its specification, by letting a weasel crawl around on the keyboard while the software is running, and observing the results? Of course not: the program is far too complex, its behavior too nuanced, and the weasel too dimwitted (no offense to weasels) for that. Now, do you expect the source code of the Universe itself to be simpler and friendlier to the human brain than the source code of MS Office is to the brain of a weasel? That would be a miraculous thing to expect, for the following reason: a priori, the complexity of that source code could be arbitrarily large. It could be a googolplex lines of spaghetti code -- and that would be an infinitesimally small level of complexity, given the realm of possible complexities -- namely the right-hand side of the number line.

In this light, if the human brain is better equipped to discover the laws of nature than a weasel is to confidently establish the correctness of an item in the spec of MS Office, it would be a stunning coincidence. That is looking at it from the side of the a priori expected complexity of the problem, compared to any finite being's ability to solve it. But there is another side to look from, which is the side of the distribution of intelligence levels of the potential problem-solvers themselves. Obviously, a paramecium, for example, is not equipped to discover the laws of physics. Nor is an octopus, nor a turtle, nor a panther, nor an orangutan. In the spectrum of natural intelligences we know of, it just so happens that there is exactly one kind of creature that just barely has the capacity to uncover the laws of nature. It is as if some cosmic Dungeon Master were optimizing the problem from both sides, by making the source code of the universe just simple enough that the smartest beings within it (that we know of) were just barely capable of solving the puzzle. That is just the goldilocks situation that good DMs try to achieve with their puzzles: not so hard they can't be solved, not so easy that the players can't take pride in solving them.

There is a salient counterargument I must respond to. It might be argued that, while it is a priori unlikely that any finite being would be capable of profitably employing the scientific method in a randomly constructed universe, in hindsight of the scientific method having worked for us in this particular universe, we are now entitled, a posteriori, to embrace the Principle of Abductive Inference as a reliable method. My response is that we have no objective reason whatsoever to believe the scientific method has worked in hindsight -- at least not for the purpose of discovering universal laws of nature! I will grant that we have had pretty good luck with science-based engineering in the tiny little speck of the universe observable to us. I will even grant that this justifies the continued use of engineering for practical purposes with relative confidence -- under the laws of statistics, so long as, say, one anomaly per hundred thousand hours of use is an acceptable risk. But this gives no objective reason whatsoever (again under the laws of statistics) to believe that any of the alleged "laws of nature" we talk about is actually a universal law. That is to say, if you believe, with even one percent confidence, that we ever have, or ever will, uncover a single line of the source code of the universe -- a single law of Nature that holds without exception -- then you, my friend, believe in miracles. There is no reason to expect the scientific method to work, and good reason to expect it not to work -- unless the human mind was designed to be able to uncover and understand the laws of nature, by Someone who knew exactly how complex they are.


All models are wrong, some models are useful.

Scientific models come in two variants, those already falsified and those which have yet to be falsified.

Almost no physicist since ca. 1930 believes that Newton's gravity is correct. It is merely the limit of General Relativity for the case of shallow potentials and low velocities (v << c). However, this still means that it is useful for a lot of cases, like flying a spacecraft to Mars.
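To put rough numbers on "shallow potentials and low velocities", here is a back-of-the-envelope sketch (standard physical constants, order-of-magnitude only) of the size of relativistic corrections for an orbit like Earth's:

```python
# Dimensionless measures of how relativistic an orbit is: (v/c)^2 and
# the gravitational potential GM/(r c^2). Both ~1e-8 for Earth's orbit,
# which is why Newtonian gravity is plenty accurate for spacecraft work.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
c = 2.998e8          # speed of light, m/s
r = 1.496e11         # 1 AU, m
v = 2.978e4          # Earth's orbital speed, m/s

print((v / c) ** 2)            # velocity correction, ~1e-8
print(G * M_sun / (r * c**2))  # potential correction, ~1e-8
```

Corrections at the 1e-8 level are far below the tolerances of most mission planning, which is why "falsified" Newtonian gravity still flies spacecraft to Mars.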

Of course, General Relativity as we understand it is also not a fundamental law, because it is incompatible with quantum mechanics (plus there is some small but non-zero chance that the observations which are thought to be caused by dark matter are actually due to Modified Newtonian Dynamics (MOND) or something).

Basically every physicist will admit that our current theories will break near the Planck scale, and possibly long before that.

Happily, we seem to live in a locally sane universe, not some buggy bloatware Microsoft Office hellscape ruled by that paper clip.

Between primordial nucleosynthesis and the cosmic microwave background, our current cosmological models seem roughly consistent, so Occam's razor would suggest that our laws of physics roughly apply in the observable universe.

Every piece of the Moon we have examined is made of rock. Our indirect observations are consistent with the moon being made of rock. This can be stated as "To the best of our knowledge, the Moon is overwhelmingly made of rock." A common shortening of that would be "The Moon is made of rock". This is not meant as a mathematical, absolute truth. The author would require us to test every molecule of the Moon before making such a statement, lest we miss a tiny fraction of cheese in the core.

I agree with most of the substance of this, but have a couple of quibbles.

#1

so Occam's razor would suggest that our laws of physics roughly apply in the observable

I think this is an oversimplification. It could be interpreted to mean something true, but it could just as easily be interpreted to mean something false, and the burden of clarity is on the author. As you probably know, relativity is not even approximately true in the small, and quantum mechanics is not even approximately true in the large. So it is more precise to say that our known laws are approximately true when applied within the scope of their well-tested and intended use -- which is also true of classical mechanics, Hooke's law of springs, the ideal gas laws, etc. But the scope of well-tested and intended use is a loop we have to be in to make it work. The laws themselves are not as intrinsically accurate as your statement would suggest to an average reader.

#2
I also agree that in commonsense terms, "the moon is made of rock" means the moon is made primarily of rock, and not that the moon is made entirely of rock -- and that on that commonsense interpretation we are entitled to justified confidence in it on the basis of far fewer miracles than a bona fide universal generalization would require.

But when you say this:

All models are wrong, some models are useful.

I do not believe we need to be resigned to it. That must mean that I believe in one more miracle than you.

You are bringing back memories of my time in undergrad philosophy of logic, science, & epistemology classes. Never did grok it well enough to bother proceeding to graduate-level studies, but I appreciate seeing the radical skepticism I dimly encountered being (to the best of my grasp) succinctly described here.

After I swore it off, you made this place worth lurking in again.

An addendum: I browsed your bio, and am paranoid enough in consideration of this [https://www.newswire.com/news/elemental-cognition-sets-new-standard-for-generative-ai-achieves-100-22248725] press release's claims to wonder at the ulterior motives for this post, and whether a human is the (primary) author: is this a sufficiently midwitted community that we are a reliable testing ground for AI-generated reasoning, a middle of the road so constrained that we lack both rubes trivially easy to fool and the true brilliance capable of sussing out machine-made flaws in logic?

I see no reason to assume anything is "real" anymore in consideration of my own limited reasoning abilities. Who am I to tell if that press release is legitimate or corporate jack-off? Laugh at me if you wish, in a thread investigating the very foundations of what we colloquially call "truth" I don't see why I shouldn't indulge my most schizophrenic tendencies.

After I swore it off, you made this place worth lurking in again.

I'm flattered!

I browsed your bio, and am paranoid enough in consideration of this [https://www.newswire.com/news/elemental-cognition-sets-new-standard-for-generative-ai-achieves-100-22248725] press release's claims to wonder at the ulterior motives for this post, and whether a human is the (primary) author:

If I were a bot, and my goal were to mine data to train an LLM, and I were smart enough to fool you into thinking I'm something else, would I say that I work for an AI research startup?

Laugh at me if you wish, in a thread investigating the very foundations of what we colloquially call "truth" I don't see why I shouldn't indulge my most schizophrenic tendencies.

Then again, maybe my plan is to make little mistakes like that to throw you off the trail. Buahahhahahaha!

But there is no finite number of observations that would justify, with any nonzero confidence, that any law held universally, without exception

Occam's Razor.

Is not a formal argument.

There is no consensual definition of parsimony.

I mean, there is, it's the minimum possible message length (number of bits) needed to fully describe your model.

Of course, no one can actually give a maximally-compressed complete description of the universe such that we could compare two models for message length. But we can look at two proposed addendums to the hypothetical model (eg two hypotheses about natural law) and try to judge which would add more bits, and whether any observations justify that cost.
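As an illustrative toy (my own construction, not a serious physical model), here is how that bit-counting comparison might go for a "rule plus exceptions" hypothesis versus just storing the raw observations:

```python
import math

# Toy minimum-message-length comparison: data is 1000 binary observations.
# Model A states a universal rule ("always 1") plus a list of exception
# indices; model B stores the raw sequence at one bit per observation.
def mdl_rule_plus_exceptions(data):
    exceptions = [i for i, bit in enumerate(data) if bit != 1]
    # ~8 bits for the rule itself, plus ~log2(len(data)) bits per exception index
    return 8 + len(exceptions) * math.ceil(math.log2(len(data)))

def mdl_raw(data):
    return len(data)  # one bit per observation

data = [1] * 990 + [0] * 10
print(mdl_rule_plus_exceptions(data))  # 8 + 10*10 = 108 bits
print(mdl_raw(data))                   # 1000 bits
```

With few exceptions, the rule-based model compresses the data and wins; pile on enough exceptions and the raw encoding becomes cheaper, which is the bit-counting sense in which observations can justify (or refuse to justify) an addendum to the model.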

There are issues with information density as a metric for parsimony because of implication and vagueness.

Consider the Von Neumann and Copenhagen interpretations of quantum mechanics.

Both are equally able to explain observations. The former can be stated in a much more succinct way. And yet almost everyone agrees the latter is more parsimonious.

All because of the unexplained implications of consciousness as an object.

Information density is not enough, and if we had to pick a metric I'd much rather pick energy efficiency. But even that poses a problem, which is energy efficiency to do what?

Millions of engineers use Newton's demonstrably inaccurate classical physics every day to great effect. According to Occam, they are all wrong to do so, because accuracy has primacy.

This alone I think refutes the razor as a practical rule of thumb. If we are to pick models we ought to pick them with the correct tradeoffs of accuracy, coherence, energy efficiency and information density for the task at hand.

None can claim to be more true than others by maximizing a single of these factors unless we embrace the prejudice of a particular theory of truth.

This is a reply jointly to several comments so I will put it as a new semi-top-level post. Several of the responses, including (what I consider) the most thoughtful ones of @sqeecoo and @Gillitrut, point in the direction that the mission of science is not to discover natural laws that are literally true, but to produce useful fictions -- stories about the world that we are better off believing and acting on. That position, if you really believe it, is immune from my argument. But if you take that position, and at the same time embrace the study of science, then you cannot, at the same time, argue against theism on the grounds that it is literally false.

Why can't I let the other shoe drop and say that "theism is literally false" is also a story about the world that we are better off believing and acting on? In fact, this seems like a natural extension of the "science discovers things that are literally true" act. Sure, this line of argument presupposes that there is some "out-of-character" meta level of cognition on which you perform this cost-benefit analysis and are essentially a radical agnostic, but that doesn't mean you have to drop into OOC every time some theist comes along and demands that you explain yourself, any more than a good theatre actor would stop acting and instead break into a rant as to why he needed the job every time someone in the audience indicated they were unhappy with the play.

Why can't I let the other shoe drop and say that "theism is literally false" is also a story about the world that we are better off believing and acting on?

As far as I'm concerned, you are welcome to make that argument; be my guest. I just said that, under certain premises, you couldn't rationally make the other one.

Why can't I let the other shoe drop and say that "theism is literally false" is also a story about the world that we are better off believing and acting on?

Because that would be entirely arbitrary. And thus useless as an argument.

If you want to do things, do things. But legitimacy requires justification.

I'll just poke in to say that I think that the mission of science is to discover the actual, literal truth. I've hopefully made this clearer in my response in our conversation below, so I'll just refer to that instead of repeating myself here.

To add content to this post, I'd say that many epistemological perspectives do indeed give up on truth in favor of usefulness or, in some variants of Bayesianism, in favor of our probability estimates. I don't care whether a scientific hypothesis is probably true, I care whether it is actually true - and if it is true, it will also be useful.

What is truth in the sense you mean here? I wager it isn't the same as what OP means.

In the strict sense, truth is literally inaccessible to any a posteriori method. Error and the senses cannot be absolutely mitigated.

Truth in the classical sense of correspondence to reality. If I say aliens exist and you say they don't, one of us has hit upon the truth despite both of us guessing. We won't know which of the two claims is true, but one of them is true, i.e. it corresponds to reality.

What would be the truth in the "strict sense", as you put it?

Truth in the classical sense of correspondence to reality.

So not the classical, but the non-classical modern sense.

Classicism in truth theories usually refers to the division between theories that rely on criteria and procedures and theories that do not.

Evidence theory (A is true if A is evident), coherence theory (A is true if it can be embedded in a coherent system without destroying its coherence), common agreement theory (A is true if specialists agree about its correctness), utilitarian theory (A is true if A is useful); these are all non-classical theories because they appeal to a mechanism to obtain truth.

Classical theories do not do this, and consider true what is necessarily so without appealing to criteria.

For instance Tarski's STT that works through a relation of satisfaction and solely operates on formal languages is a classical theory of truth in the line of the Aristotelian syllogisms that it was inspired by.

"aliens exist" is a classically meaningless statement because you neither defined what an alien is nor the totality of an existence relationship.

What would be the truth in the "strict sense", as you put it?

Logical necessities. Anything that isn't contingent on evidence and stands by itself. Things that are so by virtue of pure reason, before evaluation of the senses. Things that are true a priori.

Most of mathematics is true in this strict sense, none of science is.

Ok, I was going for a plain language simple answer, but you obviously know your stuff. Tarski's STT in the Popper/Miller interpretation is the theory of truth I adhere to, then.

I see. OP seems to be arguing absolutes, so probabilistic epistemologies are going to be hard to reconcile, but I think I understand your point better with that added context.

I think you're right to say that it's not necessary that theories of our observations that don't assume a metaphysics are fictitious. And Propensity is a good example of this.

But one can probably retort that in application even such theories have to make the assumption that the universe is meaningfully describable, a fortiori probabilistically, if they want to make a claim at Truth. Which as I understand is the whole debate around inductive skepticism.

I've never found Popper's arguments as to the abilities of pure deductivism to be entirely convincing myself. Even he has to appeal to one hypothesis being better or worse "corroborated" by the evidence, which decays him into method. Hence the unfortunate fate of logical positivism.

I'm not sure I can follow everything you're saying here, but I'm interested in what you find unconvincing about Popper, if you feel like expounding on it. I hope you're not implying Popper was a logical positivist :)

It would be a bit silly to say that about one of its most tenacious critics. I'm merely saying his own criticisms of the problems with induction apply to his own ideas when scrutinized. He's a deductivist in the same way Marx is a materialist: only in theory.

I really have two problems with Popper.

First, the aforementioned issue with deductivism requiring some ranking of theories through experimentation.

I think this reintroduces the problems he sees in positivism.

The way he tries to get away with it is, as you know, by refraining from claiming truth and instead having science go for truthlikeness and verisimilitude.

This is all well and good and a more honest account of the scientific process, but his definition of truthlikeness is incoherent (by his own estimation) because it can't rank false theories. We may yet find a satisfactory solution for this but none of the attempts I've seen were very convincing.

Second is the more mundane criticism that his views don't manage to characterize a lot of behavior that we do regard as scientific. There is a lot wrong with Kuhn, but on this I wager he is correct.


To be honest, I am a little put-off by your phrasing that science is what we are better off "believing."

When I think, "things we are better off believing," I think of a case where believing and not-believing make a difference. For example, maybe there is a self-fulfilling prophecy involving the prescription "You should be confident." In that case, I might say we are better off believing "I am confident." Science is not a self-fulfilling prophecy, because results of experiment do not depend on beliefs.

Science is stories about the world that we are better off acting on. This phrasing seems better to me. In this way, can't I argue against theism (whatever you mean by that) by saying "acting on theism doesn't make us better off"?

Actually, similarly to the old adage that theism is Not Even Wrong, in this new formulation of "true," theism is Not Even Actionable. I don't think this parallel is a coincidence.

Science is stories about the world that we are better off acting on. This phrasing seems better to me. In this way, can't I argue against theism (whatever you mean by that) by saying "acting on theism doesn't make us better off"?

Yes, feel free. But not (under the premises I described) on the grounds that there is no objective evidence that God actually exists (since that is also true of universal gravitation).

I'd like to hear more about why we can't argue in that direction. Is this like a hypocrisy claim? That since science isn't literally true it would be hypocritical to criticize theism for not being literally true? Or is this more that the acknowledged limits of scientific inquiry do not permit disproving theism?

I am content with believing that the particular empirical claims theists make seem to all have non-theistic explanations. If there is some causally inert god or gods out there, who do not interact with our reality in an empirically testable way, I am not that concerned with their existence.

Is this like a hypocrisy claim? That since science isn't literally true it would be hypocritical to criticize theism for not being literally true?

Yes, that's what I'm saying.

If there is some causally inert god or gods out there, who do not interact with our reality in an empirically testable way, I am not that concerned with their existence.

God's pronouns are He/Him. (For the sarcasm-impaired, that's a joke)

Yes, that's what I'm saying.

Okay well in that case it's also hypocritical to criticize Cthulhu and Star Wars lore for not being literally true. Hooray, solipsism. This entire line of argument advances absolutely nothing.

It essentially amounts to a theist's special request for their beliefs to be treated as intellectually serious even though they can't point to any justification for them that exists outside of their own skull, because hey after all, nothing is really certain, right?

Bluntly, request denied until one of these arguments successfully and meaningfully distinguishes Christianity, theism, whatever, from an infinite number of bullshit things I could make up on the spot.

I have lurked this group since long before it had a site or even a name, and throughout the years, I have almost never commented. So I apologize for being critical of someone like yourself who actually does post and contribute, as that is a bit hypocritical of me.

I think you are pattern-matching OP as a member of your outgroup — that is to say, a theist — then skipping past his argument about the inferential basis of the scientific method to attack him because his post appears to give aid and comfort to the enemy.

I am familiar with some physics, some math, and some epistemology — not enough to be an expert, but enough to where I think OP’s argument is reasonably well-defined and not equivalent to extreme skepticism about everything. I can’t find a clear basis to dismiss OP’s argument (at least as to the inferential limitations of science) as bullshit. (I don’t think OP has established at all the truth of theism or anything like that, but he only appears to make a single terse allusion to it at the end of his argument, and perhaps was using it rhetorically as bait.)

As someone whose intuition is that we do have objective evidence for scientific laws, I’ve been hoping that some mathematician or philosopher would eventually pop in here and formally demolish OP’s argument in a direction pleasing to my sensibilities, and I subscribed to the thread to wait for that to happen, but it hasn’t happened yet. @self_made_human, who will often deliver an incisive and sometimes brutal presentation of the traditional rationalist materialist viewpoint, appears to have lost interest. And of the two posters in this thread, @sqeecoo and @IGI-111, who appear most literate in the epistemology of science (or at least, more literate than me), neither has outright dismissed OP’s major premises as nonsensical or solipsistic, and it appears that I will have to read some Popper if I want to get to the bottom of it for myself.

My questions for you, or anyone, are:

(1) What is the first premise or step in OP’s argument that is clearly unreasonable / irrational?

(2) Is OP’s “principle of abductive inference” truly the inferential basis of the scientific method, and if not, what is, and how does it work?

(3) Is it impossible to infer universal physical laws with greater than 0% confidence, as OP claims?

(4) For OP: you suggest downthread that we should be inclined to trust models like Newtonian or Einsteinian physics. Why should we trust them (if we cannot infer universal physical laws with nonzero confidence) and how much should we trust them?

(4) For OP: you suggest downthread that we should be inclined to trust models like Newtonian or Einsteinian physics. Why should we trust them (if we cannot infer universal physical laws with nonzero confidence) and how much should we trust them?

We should trust them for two reasons. First, we do not need nonzero confidence in full generality to trust them for practical purposes. Being 99% sure the technology works 99% of the time is good enough -- or something like that, depending on the application. Second, I didn't say we cannot infer universal physical laws with nonzero confidence, just that we can't do it without believing in one more miracle, viz. that we are blessed with just enough intelligence, and a simple enough universe, that abductive reasoning is reliable (on top of the miracle that certain equations are physically instantiated in the form of physical systems and consciousness, that those systems continue persistently to be governed by those laws, that the parameters of those laws fall into the narrow range required for stars to form, etc.).

and how much should we trust them?

That depends on how many miracles you believe.

Thank you.

In return, I'll save you some effort about getting to grips with Popperian notions of falsifiability by pointing out that they're obsolete.

Popper claimed that a single contradictory finding is sufficient to sink a hypothesis, whereas no amount of evidence can ever prove it with 100% confidence. In other words, you can't prove anything, only disprove it.

Bayesianism goes even further. It asserts, with the maths to prove it, that it is impossible, mathematically so, to achieve either 0 or 100% confidence in a hypothesis without starting there, at which point literally no finite amount of evidence will sway you.

Starting anywhere in between, it would take an infinite amount of evidence to raise confidence in a hypothesis to 100%, or to reduce it to 0%. And if you start at 100% or 0% credence, nothing anyone can do to you short of invasive neurosurgery (or maybe a shit ton of LSD) can change it.
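That fixed-point behavior is easy to see in a minimal sketch (my own toy numbers: each piece of evidence is ten times likelier if the hypothesis is true):

```python
def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    # Bayes' rule for a single piece of evidence E:
    # P(H|E) = P(H) P(E|H) / [P(H) P(E|H) + P(not H) P(E|not H)]
    numer = prior * p_e_given_h
    return numer / (numer + (1 - prior) * p_e_given_not_h)

# Ten pieces of evidence, each with likelihood ratio 10 in favor of H.
for prior in (0.0, 0.5, 1.0):
    p = prior
    for _ in range(10):
        p = update(p, 0.9, 0.09)
    print(prior, "->", p)
```

A prior of 0 or 1 never moves, no matter how much evidence arrives, while a prior of 0.5 climbs to within 1e-10 of certainty but never actually reaches 1.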

What are the practical ramifications? Well, here, what Nelson is trying to argue is a waste of time. If you demand 100% confidence that the laws of physics are "universal" and timeless, you're SOL unless you assume the conclusion in advance. But we can approach arbitrarily close, and the fact that modern technology works is testament to the fact that we can be goddamn bloody confident in them. And the worst part is that it's not the poor laws of physics at stake here, it's everything you don't hold axiomatic.

Skip Popper. Get on the Bayes Boat, baby, it's all you need.

Here's a few links if you're curious:

0 And 1 Are Not Probabilities (at least in the Bayesian sense)

And Scott on the Predictive Processing theory of cognition which holds that all human cognition is fundamentally Bayesian, even when it breaks.

And if you start with 100% or 0% credence, nothing anyone can do to you short of invasive neurosurgery (or maybe a shit ton of LSD) can change it.... What are the practical ramifications? Well, here, what Nelson is trying to argue is a waste of time. If you demand 100% confidence that the laws of physics are "universal" and timeless, you're SOL unless you assume the conclusion in advance. But we can approach arbitrarily close

This is mistaken. There are two quantifiers in assertions about laws of nature: one might be called generality, which refers to the uniformity with which the law is believed to hold, and the other might be called confidence, which refers to the degree of belief that the law holds with the given generality. For example, if I say I firmly believe that at least 1% of crows are black, this statement would have high confidence and low generality -- whereas if I said "It is plausible that at least 99% of crows are black," that statement would have lower confidence and higher generality. Nothing in any of my posts mentioned 100% confidence; my thesis is about nonzero confidence in 100% generality.
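The two quantifiers can be pulled apart numerically. A toy sketch (a one-sided frequentist bound; the sample size 1000 is arbitrary): after n uniformly positive observations, the confidence with which you can assert "the true proportion is at least z" is 1 − zⁿ, because any proportion below z would produce an all-positive sample with probability at most zⁿ. That confidence collapses to exactly zero when generality z is pushed to 1:

```python
def confidence(n_obs, z):
    """Confidence that the true proportion of positives is at least z,
    after n_obs out of n_obs positive observations: any true proportion
    below z yields this data with probability at most z**n_obs."""
    return 1 - z ** n_obs

for z in (0.9, 0.99, 0.999, 1.0):
    print(z, confidence(1000, z))
# Confidence falls as the generality z rises, and is exactly 0 at z = 1:
# no finite run of observations buys nonzero confidence in 100% generality.
```

High confidence at modest generality, vanishing confidence at full generality: the crows example in numbers.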

Skip Popper. Get on the Bayes Boat, baby, it's all you need.

Funny thing: everybody loves Bayes rule; but they never state their priors. To that extent they never consciously use it. Nor is there any evidence that it models the unconscious process of real life rational cognition. The evidence to support that would need to be quantitative; not just "Hey I believed something, then I saw something, and I altered my degree of belief. Must have been using Bayes!"

Funny thing: everybody loves Bayes rule; but they never state their priors. To that extent they never consciously use it. Nor is there any evidence that it models the unconscious process of real life rational cognition. The evidence to support that would need to be quantitative; not just "Hey I believed something, then I saw something, and I altered my degree of belief. Must have been using Bayes!"

You evidently don't hang around LessWrong enough.

While Predictive Processing theory, which posits that human cognition is inherently Bayesian, has not been established to the extent it's nigh incontrovertible, it elegantly explains many otherwise baffling things about human cognition, including how it breaks when it comes to mental illnesses like depression, autism, OCD, and schizophrenia. I've linked to Scott on it before. I think it's more likely to be true than not, even if I can't say with a straight face that it's gospel truth. It is almost certainly incomplete.

In other words, humans are being imperfect Bayesians all the time, and you don't need to explicitly whip out the formula on encountering evidence to get by, but in situations where the expected value of doing so in a rigorous fashion is worth it, you should. The rest of the time, evolution has got you covered.

Besides, the best, most accurate superforecasters and people like quants absolutely pull it out and do explicit work. In their case, the effort really is worth it. You can't beat them without doing the same.

Besides, the best, most accurate superforecasters and people like quants absolutely pull it out and do explicit work. In their case, the effort really is worth it. You can't beat them without doing the same.

I know quants do this, but I think it is a special case. Show me a hundred randomly selected people who are making predictions they suffer consequences for getting wrong, and are succeeding, and I will show you maybe 10 (and I think that's generous) who are writing down priors and using Bayes' rule. Medical research, for example, uses parametric stats overwhelmingly more than Bayes (remember all those p-values you were tripping over?), as do the physical sciences.

If the effective altruism (EA) crowd are in the habit of regularly writing down priors (not just "there exist cases"), then I must be mistaken in the spirit of my descriptive claim that nobody writes them down. On the other hand, I would not count EA as people who pay consequences for being wrong, or as people doing a demonstrably good job of anything. If they aren't doing controlled experiments (which would absolutely be possible in the domain of altruism), they are just navel gazing -- and making it look like something else by throwing numbers around. I have a low opinion of EA in the first place; in fact, in the few cases where I looked at the details of the quantitative reasoning on sites like LessWrong, it was so amateurish that I wasn't sure whether to laugh or cry. So an appeal to the authority of LessWrong doesn't cut much ice with me.

I should give an example of this. Here is an EA article on the benefits of mosquito nets from Givewell.org. It is one of their leading projects. (https://www.givewell.org/international/technical/programs/insecticide-treated-nets#How_cost-effective_is_it). At a glance, to an untrained eye, it looks like an impressive, rigorous study. To a trained eye the first thing that jumps out is that it is highly misleading. The talk about "averting deaths" would make an untrained reader think that they are counting the number of "lives saved". But this is not how experts think about "saving lives", and there is a good reason for it. Let's suppose that at 9 AM our project saves a certain child from a fatal incident; at 10 AM from another, at 11 AM from another; but at noon he dies from exactly the peril our program is designed to prevent. Yay, we just averted 3 deaths! That is the stat that Givewell is showing you. Did we save three lives? No, we saved three hours of life.

This is the way anyone with a smidgeon of actuarial expertise thinks about "saving lives" -- in terms of saving days of life, not "averting deaths", and the Givewell and LessWrong people either know that or ought to know it. If they don't know it, they are incompetent; and if they know it, then talking about "averting deaths" in their public-facing literature is deliberately deceptive, because it strongly suggests "saving lives", meaning whole lives, in the mind of the average reader. To be fair to Givewell, their method of analyzing deaths averted applies to saving someone from malaria for a full year (not just an hour), but (1) that would not be apparent to a typical donor who is not versed in actuarial science, and (2) the fact remains that you could "avert the death" of the same person nine times while they still died of malaria (the peril the program is supposed to prevent) at the age of 10. The analysis and the language around it is either incompetent or deceptive -- contrary to either one word or the other in the name of the endeavor, effective altruism.
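The arithmetic is easy to make concrete. A toy calculation with hypothetical numbers (not Givewell's actual model): a child is saved from the peril once a year from age 1 through age 9, then dies of that same peril at age 10.

```python
# Hypothetical toy numbers for illustration only.
rescues = 9                       # one "death averted" per year, ages 1-9
age_at_death = 10                 # the child still dies of malaria at age 10
counterfactual_age_at_death = 1   # would have died at age 1 with no program

deaths_averted = rescues                                        # headline: 9
whole_lives_saved = 0                                           # actuarial: 0
life_years_gained = age_at_death - counterfactual_age_at_death  # 9 years

print(deaths_averted, whole_lives_saved, life_years_gained)  # 9 0 9
```

"Nine deaths averted" and "zero whole lives saved" describe the same outcome; the actuarial metric, years of life gained, is the one that doesn't mislead.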

That's not a cherry picked example; it was the first thing I saw in my first five minutes of investigating "effective altruism". It soured me and I didn't look much further, but maybe I'm mistaken. Maybe you can point me to some EA projects that are truly well reasoned, that are also on the top of the heap for the EA community.


While Predictive Processing theory, which posits that human cognition is inherently Bayesian,

I'm skeptical of this. I think predictive processing theory posits a model with certain qualitative features that Bayesian updating would also have, but there are scads of non-Bayesian approaches that would also have those qualitative properties. They would only look Bayesian from the point of view of someone who doesn't know any other theories of belief updating. Does PPT posit a model that has the quantitative properties of Bayesian updating in particular, and experimentally validate those? That would be a very interesting find. If you know of a source I'd be curious to look at it.


If you demand 100% confidence that the laws of physics are "universal" and timeless, you're SOL unless you assume the conclusion in advance. But we can approach arbitrarily close, and the fact that modern technology works is testament to the fact that we can be goddamn bloody confident in them.

How can we approach arbitrarily close? As stated, this does nothing to address Hume's argument against induction, which holds equally whether you are aiming for probability or for certainty, and does not address the regress skeptical argument that every reason you can give is either based on something else or based on nothing, leading to infinite regress. I don't see how Bayesianism helps with this. Justification is not to be had, with any level of confidence or probability. Which is why you need Popper, who explained how you can maintain the use of logic and reason and maintain truth as the aim of science, while also accepting Hume's and the skeptical arguments as correct and consequently discarding justification altogether.

Another issue Bayesianism often runs into is that many variants of Bayesianism give up on truth - I'm not interested in the confidence we can assign to a theory given our priors and the evidence, I'm interested in whether the theory in question is actually true. Even if we could be justified in Bayesian calculations of probability/confidence (which we can't be), this would tell us exactly nothing about whether this probable theory is actually true, which is what we are really interested in. There is no logical connection between probable truth and truth (just because something is probably true, it need not be true), and Bayesianism often focuses on subjective calculations of probable truth and abandons actual truth as the goal of science. But if Bayesianism aims at truth rather than solely at subjective calculations of confidence unmoored from reality, if it is interested in what is true rather than just what we can be confident in, it is in no better a position to provide justification than any other epistemology.

How can we approach arbitrarily close?

By amassing more evidence from observations and updating accordingly. Particle physics demands five sigma of confidence before accepting an experimental result as a discovery. For most purposes, you can get away with a lot less.
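For scale, the five-sigma convention corresponds to a one-sided tail probability of roughly 3 × 10⁻⁷ under a standard normal distribution; a quick way to compute it (a sketch using the complementary error function):

```python
import math

def sigma_to_p(n_sigma):
    """One-sided tail probability of a standard normal beyond n_sigma."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

print(sigma_to_p(5))  # about 2.87e-7: the discovery threshold
print(sigma_to_p(2))  # about 0.023: "a lot less", fine for everyday purposes
```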

As stated, this does nothing to address Hume's argument against induction, which holds equally whether you are aiming for probability or for certainty, and does not address the regress skeptical argument that every reason you can give is either based on something else or based on nothing, leading to infinite regress.

Nobody has a solution to infinite regress, barring "I said so". As far as I can tell, you've got to start somewhere, and Bayesianism leads to more sensible decision theories and is clean and simple.

Another issue Bayesianism often runs into is that many variants of Bayesianism give up on truth - I'm not interested in the confidence we can assign to a theory given our priors and the evidence, I'm interested in whether the theory in question is actually true.

"The next sentence is false. The previous sentence is true." Good luck.

Given that English is an imprecise language, feel free to interpret my 99.9999% confidence that the Sun will rise tomorrow as being equivalent to "it's true the Sun will rise tomorrow".

But if Bayesianism aims at truth rather than solely at subjective calculations of confidence unmoored from reality, if it is interested in what is true rather than just what we can be confident in, it is in no better a position to provide justification than any other epistemology.

The universe we live in does not provide us the luxury of not being "subjective" observers. Bayesianism happens to be entirely cool with that.

Nobody has a solution to infinite regress, barring "I said so". As far as I can tell, you've got to start somewhere, and Bayesianism leads to more sensible decision theories and is clean and simple.

I have no problem with starting somewhere, but I don't claim our theories can ever be anything more than a guess, since, as you seem to have agreed, they are ultimately baseless due to infinite regress. In the context of this discussion on justification and the basis of science, I'm OK with Bayesianism that only claims to be decision theory, a formalized account of how we try to temper our guesses by reason and experience with no justification or basis ever being provided, which is also the Popperian view of the epistemic status of science. Bayesianism would then be a methodology to help in our conjectural decision-making, but would never elevate our theories beyond the status of a guess, in the sense of them having some sort of justification or basis. Do we disagree here?

Given that English is an imprecise language, feel free to interpret my 99.9999% confidence that the Sun will rise tomorrow as being equivalent to "it's true the Sun will rise tomorrow".

Ok, so if I'm understanding you right, you do care about the truth of your beliefs, not just about your confidence in them. So what's the logical relationship between your calculation of confidence in a theory and the truth of that theory? What is the epistemic benefit of confidence calculation, as opposed to a Popperian conjecture? It seems to me that if you are mistaken about the truth of the belief in question (as you would be with regard to the sun rising tomorrow if you went to, say, Iceland in winter), your high calculated confidence does nothing to mitigate your mistake. You are equally wrong as a Popperian who would just say he guessed wrong, despite your high confidence. And if the belief in question is true, it's just as true for the Popperian who only claims it to be a guess, regardless of confidence calculation. So what is the epistemic benefit of the confidence calculation?

To clarify a bit more, I see two questions we are discussing. First, whether Popper's falsificationist "logic of science" is a better description/methodology of science than Bayesianism. We can set that aside for now, as it is not the focus of the topic. The second question that's relevant to the topic at hand is whether you think Bayesianism can provide some sort of justification or rational basis for claims about the truth of our beliefs that elevates them to something more than a guess. We certainly seem to agree that we can temper our guesses using logic and reason and experience, but in the Popperian view all of this is still guesswork, and never elevates the epistemic status of a theory beyond that of a guess. So tell me if and where we disagree on this :)


Okay well in that case it's also hypocritical to criticize Cthulhu and Star Wars lore for not being literally true. Hooray, solipsism. This entire line of argument advances absolutely nothing.

If someone just jumped into this thread without reading the history, they might gather that I (or someone else) had criticized Cthulhu on the grounds of not being literally true. So for anyone who is jumping in in the middle, nothing of the sort happened.

Moreover, I would never detract from the merit of Shakespeare or Homer on the grounds that there is no evidence for the literal truth of their writings. Nor would I detract from the merit of a physics text on the grounds that there is no objective evidence that its contents are literally true. I do not think I am asking for special status for anything. I am arguing against a special status for the physical sciences, that I believe is widely attributed to them.

It essentially amounts to a theist's special request for their beliefs to be treated as intellectually serious even though they can't point to any justification... request denied until one of these arguments successfully and meaningfully distinguishes Christianity, theism, whatever, from an infinite number of bullshit things I could make up on the spot.

I agree that you should deny that request if somebody made it -- but I don't think I did (unless "whatever" casts a very wide net).

My thesis is that (1) if you hold nonzero confidence in the literal truth of a universal physical law, then you should be able to give reasons for your belief, and (2) the only rule of evidence I know of that would justify such a conclusion (abductive inference) -- and the one that is actually used in the physical sciences to establish credibility of physical theories -- rests on premises that are infinitesimally unlikely to hold in the absence of a miracle.

Tagging @marten too so I don't have to post twice.

Look, I'll be honest: If you're not playing some kind of game that amounts to wanting people to stop snorting when someone brings up god in an intellectual context? If this isn't the usual goofy theist sophistry and you're actually just parsing the differences between degrees of philosophical certainty that no one out in the world ever thinks about when making decisions?

Then I'll leave you to your hobby and continue to be puzzled as to the appeal. Back in the world where people make decisions, the fact that science does in fact produce functional results obliterates every other consideration anyway.

If you're not playing some kind of game that amounts to wanting people to stop snorting when someone brings up god in an intellectual context?

I'm glad you mentioned that. I am actually not interested in the reactions of people who scoff (or "snort") when someone brings up God in an intellectual context. The readers who interest me for this argument are people like political scientist Charles Murray and historian Tom Holland, who do not scoff, and who are even sympathetic to the idea, but who are not believers because they cannot find reasons to believe.

just parsing the differences between degrees of philosophical certainty that no one out in the world ever thinks about when making decisions?

My argument isn't about parsing degrees of certainty

Then I'll leave you to your hobby and continue to be puzzled as to the appeal. Back in the world where people make decisions, the fact that science does in fact produce functional results obliterates every other consideration anyway.

Look, I'll be honest:...

I'm glad you are being honest. In that same spirit, I think it is Philistine to separate the effort to reveal the true laws of nature from "the world where people make decisions". Science, conceived as the effort to reveal the laws of nature, involves making many decisions; I believe it is what many scientists perceive themselves as doing, and I believe it is a worthwhile pursuit for its own sake -- independently from its applications to such things as bread and circuses.

My argument isn't about parsing degrees of certainty

No? Because it sort of sounds like it to my Philistine ears.

Is this like a hypocrisy claim? That since science isn't literally true it would be hypocritical to criticize theism for not being literally true?

Yes, that's what I'm saying.

Except one of these things can produce consistent on-demand results that wouldn't be possible if its claims were false, while the other cannot. By any standard of truth-seeking that doesn't succumb to solipsism and ludicrously rule out observation of the world as a means of understanding it, the former is obviously much more true than the latter.

Ah but while science may contain observable truth, it doesn't meet Nelson Rushton's standard for being "the source code of the universe" and that's important... why exactly? Telling me you have a standard of truth under which apparently absolutely nothing is "literally true" isn't actually interesting.

The reason I keep thinking this is about getting atheists to stop snorting is because I can't think of any other purposes for this whole argument, charitable or otherwise. Like okay, nothing in the universe meets the Rushton Source Code Standard of Literal Truth. Neat, why should anyone care? What decision should anyone make differently now that they've heard this?

Do you believe Newton's Law of Universal Gravitation is true?

This strikes me as a Socratic question. Socrates used to ask Greeks questions that were slightly off. Being polite, the Greeks would refrain from nit-picking the questions, and try to answer. Then Socrates, being an arse-hole, would nit-pick the answers. He would entangle his victims with his verbal dexterity, and skillfully obscure that bad answers were downstream from bad questions.

There are many stories to tell about gravity. Kepler discovered that the planets moved in ellipses. Newton invented a theory of mechanics and new mathematics. Then he was able to respond to speculation that the ellipses were due to an inverse square law of attraction by filling in the details of what that actually meant, and solving the mathematical problems to demonstrate it.

Newton went further, spotting that gravity was "universal". By "universal" Newton meant that the attraction was not a specific property of the Sun (which would leave gravity on the surface of the Earth as a separate mystery) but was about all matter attracting all matter. So a cannon ball fired by an artillery man follows an elliptical trajectory with one focus at the center of the Earth. Obviously an artillery man uses the parabolic approximation (until the Paris gun in 1918). But (unless my memory is playing tricks on me) Newton had the idea that a cannon fired horizontally with sufficient force would cause the cannon ball to orbit the Earth, just as the moon orbits the Earth.

"Universal" creates a loose end. Jupiter is attracting Saturn and Saturn is attracting Jupiter. The Sun is not the only player in the solar system. That loose thread went unpulled until it was noticed that Jupiter was spiraling in. Jupiter's orbit was decaying and it would in time destroy the Earth. Then a French mathematician (LaPlace?) got stuck into the details. Jupiter and Saturn are nearly in a five to two orbital resonance. The difference frequency is about 800 years. Four hundred years of Jupiter spiraling in and Saturn spiraling out get followed by four hundred years of Jupiter spiraling out and Saturn spiraling in. Theory and accurate astronomical observation agreed; panic over.

Other stories include Halley working out the orbital parameters of a comet and predicting its return. That was a big deal at the time, because comets were traditionally seen as bad omens. If they simply moved in obedience to Kepler's Laws, they stopped being frightening. The comet returned as predicted and is now called Halley's Comet.

After Herschel discovered Uranus, both John Couch Adams and Urbain Le Verrier puzzled over anomalies in the orbit of Uranus. Could there be another planet? Le Verrier got Johann Galle and Heinrich d'Arrest to look, and there was Neptune, discovered in 1846 by mathematics and Newton's Law of Gravitation. Le Verrier tried to repeat his success with anomalies in the orbit of Mercury, and inferred the existence of the planet Vulcan. Which wasn't there, leading eventually, by a circuitous route, to Einstein's General Theory of Relativity.

For me, this raises questions about the word believe. I'm comfortable with three interpersonal meanings. Do I believe a person's testimony: did the things he tells me actually happen? Do I believe a person's promises: will he keep them? Do I believe a person's predictions: will they actually happen? But how does one extend the word believe to cover scientific theories? The tale that I've told goes well beyond my personal experience. The largest telescope that I have looked through is a twelve inch reflector. Maybe the story about Neptune is made up; I've seen Saturn, but neither Uranus nor Neptune. Interpersonal belief is at issue. Yet when we talk of belief in Newton's Law of Gravity, we assume the honesty of astronomers and are talking about something else. I'm not clear what. Contemplating the long narrative that I have sketched is valuable because it gives a concrete example of what successful science looks like. Trying to abstract a high level concept of "belief"? That is the kind of unmotivated abstraction that confuses things.

I'm comfortable with two meanings of the word true. One is personal testimony (again). Did that actually happen? The other is in my books on mathematical logic. When is (A and B) true? When A is true, but not just A; B must also be true. Add in first order logic, sets, and model theory and there is lots to read about. But neither notion of truth fits well with generalisations arising from empirical investigations.

The most promising notion of truth, appropriate to empirical investigation, that I have encountered is Probably Approximately Correct learning theory and Vapnik-Chervonenkis dimension. Those are crap links. To get the basic idea, imagine rolling a d6 six hundred times to estimate the probabilities of each face. You get numbers like 100, 118, 95, 88, 114, 85 or like 112, 103, 93, 104, 99, 89. Empirical work always has a certain amount of random slop and your empirical estimate will never be true in the sense of being exact. But what about being approximately true? Fix an unambitious goal for accuracy and ponder the probability of being approximately correct. Things can still go horribly wrong; an unlucky sequence of rolls could give you 600, 0, 0, 0, 0, 0 and your empirical work is not even approximately correct. But something interesting happens when the Vapnik-Chervonenkis dimension is finite. Fix your desired level of approximation and keep rolling the d6. The probability of not meeting your approximation goal eventually starts to decline exponentially with the number of rolls. Exponentially! You are on the route to the practical man's version of certain knowledge. Well, that is nice, but God is it complicated.
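The exponential decline can be watched directly. A sketch of the d6 experiment (using Hoeffding's inequality with a union bound over the six faces as the "probably approximately correct" guarantee; the accuracy goal ε = 0.02 and the seed are arbitrary choices):

```python
import math
import random

def roll_freqs(n, rng):
    """Empirical frequency of each face after n rolls of a fair d6."""
    counts = [0] * 6
    for _ in range(n):
        counts[rng.randrange(6)] += 1
    return [c / n for c in counts]

def miss_bound(n, eps):
    """Hoeffding + union bound on P(some face's frequency misses 1/6 by > eps)."""
    return min(1.0, 6 * 2 * math.exp(-2 * n * eps * eps))

rng = random.Random(0)
for n in (600, 6_000, 60_000):
    worst = max(abs(f - 1 / 6) for f in roll_freqs(n, rng))
    print(n, round(worst, 4), miss_bound(n, 0.02))
# The bound falls from 1.0 to ~0.099 to ~1.7e-20 as n grows 100x:
# the failure probability declines exponentially in the number of rolls.
```

At 600 rolls the guarantee is vacuous, exactly the "random slop" regime of the numbers quoted above; by 60,000 rolls failure is astronomically unlikely.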

Asking "Do you believe Newton's Law of Universal Gravitation is true?" is doing the 20000 foot overview thing. It can only lead to vague waffle. On the other hand, waffling vaguely is rather fun; what am I actually proposing as a rival ideal? I think that interesting gap is between social science and "hard" science. There is a gap between "hard" science and ideal certainty, but it seems unimportant compared to the gap between social science and "hard" science. Let me give a concrete example of how little we know in social science so that you can see how well Newton's Law of Gravity compares.

Think about Laffer Curve effects. Here are four theories.

  1. The Laffer Curve is bunk. If the government increases income tax from 40 pence in the pound to 83 pence in the pound, that will increase revenue. Revenue will probably double.

  2. Rich businessmen are trapped by their commitments. If their take-home pay falls, they won't be willing to give up their yacht or their mistress. They will draw more salary from their business, to maintain their take-home pay. Rather than pay themselves $1,600,000 to take home $1,000,000, they will pay themselves $5,882,353 to take home $1,000,000. Tax revenue will rise from $600,000 to $4,882,353. Eight-fold, not two-fold.

  3. Don't ask where we are on the Laffer Curve, ask when we are. The government is taxing fifty year old businessmen, expecting revenues to hold up indefinitely. But in thirty years time they will all have retired. Will today's twenty year olds replace them? No, once Boxer goes to the knacker's yard, no-one is taking his place.

  4. Laffer Curve effects are prompt. When taxes are low the rich businessman pays his mistress from after tax income. When tax rates soar, he cuts the money that he withdraws from his business as personal income, and preserves his lifestyle by having his company employ his mistress as a secretary. Tax revenue falls.
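Theory 2's figures follow from one line of algebra: if a businessman holds take-home pay fixed at T under tax rate r, he must draw gross pay T/(1 − r). A sketch (the fixed-take-home behaviour is that theory's assumption, not a fact):

```python
def gross_needed(take_home, rate):
    """Gross salary required so that after-tax pay equals take_home."""
    return take_home / (1 - rate)

def tax_revenue(take_home, rate):
    """Tax collected on that gross salary."""
    return gross_needed(take_home, rate) - take_home

for rate in (0.40, 0.83):
    print(rate,
          round(gross_needed(1_000_000, rate)),
          round(tax_revenue(1_000_000, rate)))
# At 83%: gross ~$5,882,353 and revenue ~$4,882,353, matching the figures
# in theory 2; at 40% the exact values are $1,666,667 and $666,667
# (the $1,600,000 and $600,000 above are round numbers).
```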

What would it be like to have a theory of taxation with the accuracy of Newton's Law of Gravity? The very idea is mind boggling. A good philosophy of science would help us construct a scientific theory of tax revenue. A good discussion of the philosophy of science would look at areas of science where we are doing badly and wonder how to do a little better. Perhaps a good discussion of the philosophy of science would also look at successes, such as Newton's Law of Gravity and try to extract lessons, about how to do science, that we could apply to where we are failing. That is very different from looking at Newton's Law of Gravity and worrying about miracles or something.

A well thought-out post! However, I reject your Principle of Abductive Inference. The essence of science is falsification. Experiments cannot verify a hypothesis (it always remains just our best guess), but they can contradict and thus falsify a hypothesis. The hypothesis "all swans are white" cannot be verified by any number of white swans (because there may always be a non-white swan out there), but it is contradicted by the observation of a single black swan. Of course, the experiment itself is also just a best guess (maybe the swan is just painted black?). All knowledge is guesswork. However, the logical relationship of falsification holds (the hypothesis is logically contradicted by the experiment), while inductive inference is not logically sound (no amount of verification can "ground", "prove" or whatnot that the hypothesis is true).

For further reading along these lines, I recommend "The Logic of Scientific Discovery" by Karl Popper, or this shorter and more modern article: https://www.science.org/doi/10.1126/science.284.5420.1625

To answer your three questions:

  1. Yes, I believe Newton's Law of Universal Gravitation is true.

  2. How sure am I that it is true? Psychologically, very sure. Logically and rationally speaking, not at all, it's just a guess.

  3. Why do I believe it, and with that degree of certainty? I believe it because it has passed tests that other competing hypotheses have failed. This does not prove it to be true (with any degree of certainty), as you rightly point out, but given we accept the results of the tests, it makes it preferable to the competing hypotheses that fail those tests, because they are logically contradicted by those tests. So it's our best guess because its competitors have been eliminated by experiments, but it is not certain or probable or verified in any way.

Really, you are very close to my position on this, except you want experiments to do more than they can do, and are struggling to find a way for them to do what they cannot, namely provide justification/inference/certainty/likelihood for hypotheses. Experiments can contradict and thus falsify hypotheses, but they cannot justify them. Relinquish the demand for justification, and the logical situation is clean and sound: we make guesses, discard those guesses that don't stand up to experiments, and tentatively accept those that do.

Thanks for the reply.

I agree that my argument does not militate against your stated belief [Newton's L.O.G. is (merely) our best guess for how gravity works]. But I submit that you have good reason to believe something stronger than your stated belief, that in your heart you do believe something stronger than that (as you indicate: "Psychologically, very sure."), that you probably act as if you believe something stronger than that, and that you plan to deliberately continue acting that way. So, either you are acting irrationally (which I doubt), or something is missing from your formal statement (which I submit is probably the case).

For example, imagine you are told that a certain missile defense system uses a model of rocket flight that is based on Newton's law of gravity. Do you think it would be rational to posit that the system is unsafe, and to take costly action to avoid relying on it, specifically on the grounds that Newton's law of gravity is "just a guess" in which we are not entitled to a smidgeon of confidence? Now, by comparison, imagine that a system protecting your safety were based on the latest theory of a less well studied domain (say, quantum computing); in that case you might be justifiably concerned, and, if so, be more likely to take costly actions to avoid relying on that system -- even though the theory it rests on, like Newton's law of gravity, might be our current best guess about the relevant domain.

Here is the point: (1) we have best guesses in many different domains of inquiry; (2) we have more confidence in our best guesses in some domains than we do in others, and (3) it is prima facie instrumentally rational to act on those differences. So, if our stated beliefs are to be consistent with our actions, which we fully intend to continue taking while regarding ourselves as rational, then we should be able to say something stronger than that the law of gravity is merely our best guess in the relevant domain. If we find ourselves unable to justify saying anything stronger, then we have important epistemological work to do.

For further reading along these lines, I recommend "The Logic of Scientific Discovery" by Karl Popper.

As I recall, Popper held that repeated, failed attempts to disprove a hypothesis count as evidence for its truth (though never certain evidence). Am I mistaken?

The first thing I should clarify is that I think that scientific hypotheses, despite evidence never being able to elevate them above the status of a guess, can be true, really, absolutely true. If we guess right! So if you say aliens exist and I say they don't, we are both guessing (but not randomly: we are motivated, but not justified, by our other background beliefs). But either aliens exist or they don't. So despite both of us just guessing, one of us is right and has hit upon the truth, the absolute truth. So while Newton's L.O.G. is just a guess from an epistemological standpoint, I am also tentatively accepting it as true. I claim it really is true, and I act upon that belief, although my belief in that is just a guess. Does that satisfy what you felt was missing from my position?

As for your question on the missile defense systems example: let's say I'm choosing between two courses of action based on two different scientific hypotheses. If one of those hypotheses has passed its empirical tests and the other hasn't, the logical situation is very clear: logic and reason dictate that I reject the hypothesis that has been falsified by the tests, since the tests logically contradict the hypothesis. The hypothesis that has passed its tests I can tentatively accept as true, and I prefer the course of action based on that hypothesis. If both hypotheses have passed all their tests, I would try to conceive of a test that distinguishes between them (a test that one fails but the other doesn't). If this is not possible, then the logical situation is also clear, however: if both hypotheses have passed all their tests, the evidence tells us exactly nothing about which one we should accept - we have to decide what to believe.

And this is a crucial aspect of my position: rationality and logic cannot tell us what to believe: we have to make that decision. Reason can, however, tell us what not to believe: we should not believe contradictory things, or in this case hypotheses that are contradicted by test results we accept. Rationality does not provide justifications that tell us what to believe. Rationality is the method, namely the method of critical evaluation and, when possible, empirical testing, which serves to eliminate some of our ideas, hopefully leaving us with true ones. Yes, it'd be great if we could be justified in believing what we believe, but we can't. So we are left with conjectures that we attempt to separate from error by criticism and empirical testing, using logic and reason, with the goal of believing true things. We are rational, in the sense that we use reason and logic to criticize our ideas and hopefully eliminate errors, and our goal is the truth - we aim at having true beliefs. But we can never know that our beliefs are true; we can only guess at the truth, and use reason as best we can to eliminate the guesses that are untrue.

Does this answer your questions? Feel free to ask more if I've been unclear. There are various complications I didn't want to go into (like differences in the severity of empirical tests) for the sake of clarity.

As I recall, Popper held that repeated, failed attempts to disprove a hypothesis count as evidence for its truth (though never certain evidence). Am I mistaken?

You are mistaken, but it's a common mistake. In Popper's and my view, corroborating evidence does nothing, but contradicting evidence falsifies (although also without any degree of certainty).

So if you say aliens exist and I say they don't, we are both guessing (but not randomly: we are motivated, but not justified, by our other background beliefs). But either aliens exist or they don't. So despite both of us just guessing, one of us is right and has hit upon the truth, the absolute truth. So while Newton's L.O.G. is just a guess from an epistemological standpoint, I am also tentatively accepting it as true.

The fact that you have guessed right, or that you may have guessed right, does not entail that you are rationally licensed to embrace the proposition (I think you agree with this). For example, if a tarot card reader told me that I was going to get a job offer today, and I believed her and acted on it by taking out a car loan, and if the reader turned out to be right by sheer luck, my action would still be irrational.

To clarify my position in this light, I never said that the physical laws we have in our corpus are all false, or anything of that sort. I said that we are not entitled to any rational confidence in them -- just as I am not entitled to any rational confidence in a tarot card reading (unless I am mistaken about that practice), even though they may be sometimes right as well -- except to the extent we also believe in miracles.

Success rates matter.

If tarot reading worked as consistently as physics or math, then boy, would that be something.

(Now social sciences, well…)

Science as a method frequently involves guessing and dumb luck and accidental discovery. But then the point is systematically testing findings and examining new evidence and ideas. Tarot reading doesn’t have iterative improvement going on.

Success rates matter.

The success rate of science in enabling improvements to our material lives is pretty good. The success rate of science in yielding justifiable nonzero confidence in universal natural laws may be zero. Can you defend the proposition that it is not? It would be a compelling refutation of my argument if someone were to name a single universal natural law of the physical world -- take your pick -- and give an objective argument why we should have greater than zero confidence in its literal truth. Now that I think about it, that is the straightforward path to refuting my argument, and it is notable that no one has attempted to take it.

A word of advice if you proceed: don't waste your time trying to use Bayesian reasoning; you will not get a nonzero posterior unless you have a nonzero prior, and that would be begging the question. And don't bother trying to use parametric statistics, because no finite number of observations will get you there.
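The "nonzero prior" point above can be made concrete with a small sketch. Under Bayes' rule, the posterior is proportional to prior times likelihood, so a prior of exactly zero cannot be raised by any amount of confirming evidence. The numbers below are purely illustrative, not drawn from any real experiment:

```python
# Sketch of the "zero prior" point: posterior = prior * likelihood / evidence,
# so a prior of exactly 0 stays 0 no matter how many confirmations arrive.
# All probabilities here are hypothetical, chosen only for illustration.

def bayes_update(prior: float, p_obs_given_h: float, p_obs_given_not_h: float) -> float:
    """Return P(H | observation) given P(H) and the likelihoods of the
    observation under H and under not-H."""
    evidence = prior * p_obs_given_h + (1 - prior) * p_obs_given_not_h
    if evidence == 0:
        return 0.0
    return prior * p_obs_given_h / evidence

# A small but nonzero prior climbs with repeated confirming observations...
p = 0.01
for _ in range(10):
    p = bayes_update(p, p_obs_given_h=0.9, p_obs_given_not_h=0.5)

# ...but a prior of exactly zero is unmoved by the very same evidence.
q = 0.0
for _ in range(10):
    q = bayes_update(q, p_obs_given_h=0.9, p_obs_given_not_h=0.5)

print(p)  # well above its 0.01 starting point
print(q)  # exactly 0.0
```

This is why assuming a nonzero prior for a universal law, in order to derive nonzero confidence in it, begs the question: the nonzero conclusion is smuggled in with the prior, not earned by the observations.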

yielding justifiable nonzero confidence in universal natural laws may be zero

I’m failing to understand why this is a bar any epistemology needs to clear.

Science as a method verifiably works at improving our material lives because it produces sufficiently accurate information. The utility is the payoff, but the correlation to reality is what enables it.

if someone were to give a single universal natural law of the physical world -- take your pick -- and give an objective argument why we should have greater than zero confidence in its literal truth.

Where does math fit here under “physical world”?

The thing you seem to be doing is putting forth a standard no epistemology can satisfy. It’s not like pure math and logic don’t have identified paradoxes and limitations. Just ask Bertrand Russell.

How about the finding that nothing with mass can exceed the speed of light? This is something backed by math and logic, as well as experimentation. If it were otherwise physics would break, is my layman’s understanding anyway.

Is that sufficiently “universal”?

There are a lot of “universal” rules in physics, so long as you stay at the atomic level. (The quantum domain also has its rules, but they don’t break the atomic ones altogether.)

How about the finding that nothing with mass can exceed the speed of light? This is something backed by math and logic, as well as experimentation. If it were otherwise physics would break, is my layman’s understanding anyway... Is that sufficiently “universal”?

It sure is. Thanks for taking me up on the offer.

I am looking for objective evidence of the theory, Nullius in verba [Latin: take no one's word for it]. If you claim something is a theorem, show me the proof. If you claim something is experimentally verified, describe the experimental design and its results. What we have here is an appeal to authority claiming that the theory is "backed by math and logic" or that "physics would break" if it were untrue, omnes in verbo [all on the word (of authority)].

I would not be so demanding as to ask anyone to perform experiments, or even look up experimental data in the literature, for the purpose of making a "Motte" post. A plausible (but concrete) story of what such evidence would look like -- for any theory of your choice -- would be enough to rebut my argument.

An appeal to authority is warranted here; rebutting your argument doesn't actually hinge on the truth of the theory, it hinges on whether it is possible for experimental evidence to justify a belief in the correspondence of a theory and reality. If it is, there are cases where the logic of the theory enforces universality.

To wit, taking Newton's law as an example (and supposing we only knew classical mechanics), would we be justified in saying that the masses we observe behave as per his theory?

I'm not saying universally, merely the things we've observed locally.

If so, it turns out there are other cases, where if we are justified in believing the theory, the theory says things about the universe as a whole.

If you don't believe we can go from experimental evidence to justified belief in theory, then we have bigger problems.

More comments

Well I’m a layman at physics, so I’d suggest finding someone who can lay out the math, theory, and experimentation that shows it is impossible for any object with mass to travel faster than the speed of light.

My layman’s understanding is that the fundamental properties of spacetime, mass, and energy as we understand them via Special Relativity make it impossible.

Here’s a bunch of physics nerds describing how it would violate causality:

https://physics.stackexchange.com/questions/671516/proof-for-impossibility-of-ftl-signals

More comments

I’m failing to understand why this is a bar any epistemology needs to clear... science as a method verifiably works at improving our material lives because it produces sufficiently accurate information. The utility is the payoff, but the correlation to reality is what enables it.

I did not say that any epistemology needed to clear that bar. If your position is that science is a collection of useful fictions, and that discerning the (literally true) laws of nature falls outside the scope of its business, then your position is immune to my argument. For myself, I am a little more romantic about the goals of science.

You’re applying a rigid categorization of “fact or fiction” to an area where the practicality of “all models are wrong; some are useful” is the typical approach.

You’re calling for perfection or it’s fiction, when science has been building knowledge bit by bit. Things can have shades of gray.

Obviously, understanding the Ultimate Nature of Reality and Its Universal Laws is a fine goal, but the way to get there is almost certainly a pretty messy process.

More comments

@NelsonRushton: As I recall, Popper held that repeated, failed attempts to disprove a hypothesis count as evidence for its truth (though never certain evidence). Am I mistaken?

@sqeecoo: You are mistaken, but it's a common mistake. In Popper's and my view, corroborating evidence does nothing, but contradicting evidence falsifies (although also without any degree of certainty).

Seeing as we recall the text differently, I was probing for a source there (other than yourself). I am not convinced that I was mistaken. Popper defines corroboration as a diligent attempt to disprove a hypothesis:

So long as a theory withstands detailed and severe tests and is not superseded by another theory in the course of scientific progress, we may say that it has ‘proved its mettle’ or that it is ‘corroborated’ [Popper, "The Logic of Scientific Discovery", p. 10]

He goes on to say that the degree of corroboration, which he views as the merit of the theory, increases with the number of non-disconfirming experiments:

When trying to appraise the degree of corroboration of a theory we may reason somewhat as follows. Its degree of corroboration will increase with the number of its corroborating instances. [Popper, "The Logic of Scientific Discovery", p. 268]

If there is a difference between what Popper said, and what I said he said, it would be that I used the word "truth". Fair enough, but so did you:

@squeeco: I think that the mission of science is to discover the actual, literal truth.

and I do not see how the following claim could be correct, in light of the quotes above: "In Popper's view,... corroborating evidence does nothing". [emphasis added]

You said that Popper thinks corroboration (failed attempts to falsify a hypothesis) count as evidence for its truth. Instead, Popper says that theories cannot be verified. The first sentence of the chapter you quote is:

Theories are not verifiable, but they can be ‘corroborated’. [Popper, "The Logic of Scientific Discovery", p. 248]

In the footnote soon after:

I introduced the terms ‘corroboration’ (‘Bewährung’) and especially ‘degree of corroboration’ (‘Grad der Bewährung’, ‘Bewährungsgrad’) in my book because I wanted a neutral term to describe the degree to which a hypothesis has stood up to severe tests, and thus ‘proved its mettle’. By ‘neutral’ I mean a term not prejudging the issue whether, by standing up to tests, the hypothesis becomes ‘more probable’ [Popper, "The Logic of Scientific Discovery", p. 249]

And finally, here's Popper stating the difference between psychological questions of one's state of mind (that one can be "very certain") and epistemological questions of the state of the evidence, where evidence cannot verify hypotheses.

Like inductive logic in general, the theory of the probability of hypotheses seems to have arisen through a confusion of psychological with logical questions. Admittedly, our subjective feelings of conviction are of different intensities, and the degree of confidence with which we await the fulfilment of a prediction and the further corroboration of a hypothesis is likely to depend, among other things, upon the way in which this hypothesis has stood up to tests so far—upon its past corroboration. But that these psychological questions do not belong to epistemology or methodology is pretty well acknowledged even by the believers in probability logic. [Popper, "The Logic of Scientific Discovery", p. 252]

So corroboration is a measure of how well-tested a theory is, and the severity of the tests it has undergone. But corroboration does not provide evidence for the truth of the hypothesis. Here's a quote from Popper, "Objective Knowledge", 21f:

From a rational point of view we should not "rely" on any theory, for no theory has been shown to be true, or can be shown to be true. ... in spite of the "rationality" of choosing the best-tested theory as a basis of action, this choice is not "rational" in the sense that it is based upon good reasons for expecting that it will in practice be a successful choice: there can be no good reasons in this sense, and this is precisely Hume's result.

I like my Popper but I hate looking for quotes - I'm much more interested in the substance of the discussion we're having and the view I've outlined as a response to yours.

Thanks for the researched response. I think I finally understand the disagreement now.

@NelsonRushton: As I recall, Popper held that repeated, failed attempts to disprove a hypothesis count as evidence for its truth (though never certain evidence). Am I mistaken?

As you point out, Popper does not regard repeated experiments as progressively raising our confidence in the probability that the theory is true; his notion of the merit of a theory is much more nuanced than "probability of truth". So that is where my statement differs from his view; I am convinced now that I was mistaken and thank you for pointing it out.

@squeecoo: In Popper's and my view, corroborating evidence does nothing, but contradicting evidence falsifies (although also without any degree of certainty).

But I believe you are also mistaken, and your view differs from Popper's in a more profound way. If you open an electronic copy of Popper's book (https://philotextes.info/spip/IMG/pdf/popper-logic-scientific-discovery.pdf), hit ctrl-f, and search for "degree of corroboration" you will find that that phrase occurs 84 times -- about once every five pages for the length of the book. So, while his notion of merit is not defined in terms of truth or probability of truth, he does hold that repeated, diligent, failed attempts to disprove a theory tend to progressively confirm its merit (or to use his word, its "mettle") -- which is a far cry from doing nothing. For Popper, non-disconfirming experiments do something (viz, "corroborate") and a greater number of such experiments do more of that thing:

Its [the theory's] degree of corroboration will increase with the number of its corroborating instances. [Popper, "The Logic of Scientific Discovery", p. 268]

If I read you correctly, you seem to believe that there should be no difference in our willingness to act on a theory after one rigorous non-disconfirming experiment, versus 1000 of them by 1000 different researchers using different methods and bringing different perspectives and skill sets to the table (say, Newton's law of gravity vs. some new law of quantum computing). Do I read you incorrectly (or did you perhaps misspeak)?

@squeecoo: I think that quantum computing has been only weakly tested and I'm not willing to bet on it working for my missile defense system.

Ok that is a relief to hear, but it is not consistent with your other statement above (corroborating evidence does nothing), so it seems you misspoke.

But I believe you are also mistaken, and your view differs from Popper's in a more profound way. If you open an electronic copy of Popper's book (https://philotextes.info/spip/IMG/pdf/popper-logic-scientific-discovery.pdf), hit ctrl-f, and search for "degree of corroboration" you will find that that phrase occurs 84 times -- about once every five pages for the length of the book. So, while his notion of merit is not defined in terms of truth or probability of truth, he does hold that repeated, diligent, failed attempts to disprove a theory tend to progressively confirm its merit (or to use his word, its "mettle") -- which is a far cry from doing nothing.

Sure, Popper is developing the idea of degree of corroboration in that book, so he mentions it a lot. But no degree of corroboration can change the epistemic status of a theory, which always remains a conjecture. Like I said, it's a common mistake, and Popper shares some of the blame for it by speaking about "preference" in the context of corroboration, which sounds a lot like justification, or as if we "rationally ought" to believe the better tested theory as if it had a greater likelihood of being true, or something like that. Popper did a lot to muddy the waters here. But corroboration is a measure of the state of the critical discussion, and not in any way a measure of the justification, reliability, probability, etc. of a theory. With regard to the epistemic status of a theory being adjusted by evidence, which is what is relevant to our discussion, corroboration does nothing. Here's Popper saying it outright, in Objective Knowledge 1972 (1979 revised edition), p. 18:

By the degree of corroboration of a theory I mean a concise report evaluating the state (at a certain time t) of the critical discussion of a theory, with respect to the way it solves its problems; its degree of testability; the severity of tests it has undergone; and the way it has stood up to these tests. Corroboration (or degree of corroboration) is thus an evaluating report of past performance. Like preference, it is essentially comparative: in general, one can only say that the theory A has a higher (or lower) degree of corroboration than a competing theory B, in the light of the critical discussion, which includes testing, up to some time t. Being a report of past performance only, it has to do with a situation which may lead us to prefer some theories to others. But it says nothing whatever about future performance, or about the "reliability" of a theory.

As for the missile example:

@squeecoo: I think that quantum computing has been only weakly tested and I'm not willing to bet on it working for my missile defense system.

Ok that is a relief to hear, but it is not consistent with your other statement above (corroborating evidence does nothing), so it seems you misspoke.

This would be my conjecture, motivated in part by how poorly tested quantum computing is, but not justified or "based" on that. It's my best guess that has taken into consideration the evaluation of the state of the critical discussion on quantum computing (how well corroborated it is), but is not justified by it and remains a guess/conjecture. We can certainly take the degree of corroboration into consideration when deciding what to believe, but it can never elevate our beliefs beyond the status of conjecture, and it is in this epistemological sense that corroborating evidence does nothing.

But it says nothing whatever about future performance, or about the "reliability" of a theory.

I think I see now why I, like many people, misread Popper. Frankly, I think the position he expresses here is so egg-headed that I did not anticipate it. He implicitly conditions future performance (aka reliability) on justified confidence in general, literal truth, and so winds up concluding that theories of the physical world have only two levels of reliability: known false, and other. This position hamstrings his theory of corroboration with respect to establishing a rational basis for action -- and that moves him to the bottom of my reading list for philosophy of science. It's not that his work has no intellectual merit (it's all very interesting); it's just that I have better things to do, because I am interested in science as a rational basis for discriminating between alternative courses of action, and in philosophy of science as an articulated theory of the rules of evidence for doing so.

It appears that Popper (1) accepts the essence of my argument in the original post, but (2) doesn't believe in miracles -- which commits him to his position on reliability and future performance, and also makes his theory of corroboration impotent as a basis for rational action. I share his view of (1) but not (2).

For clarity, do you agree with the Popper on this (that corroboration says nothing whatever about the future performance of a theory)?

Yup, you got it. There's no establishing a rational basis for action, it cannot be done. You have done a good job articulating some of the obstacles to this in your original post. We can, however, still use reason and logic in the method of eliminating errors in the pursuit of truth. That's Popper's insight.

A small note: there is no "known false" category. Falsification is not justified either, it is as conjectural as anything else. So yes, justification doesn't work, and there is no rational basis to be had. But we can still engage in the rational pursuit of truth, in the sense of using reason and experience to temper our conjectures about the world.

As for your future reading, go with your interests, of course, but I can still recommend this short article articulating this position: https://www.science.org/doi/10.1126/science.284.5420.1625

The beauty and clarity of Popper's view is relinquishing justification and the search for a "basis", which reason and rationality are not capable of providing, but still maintaining rationality, empiricism, and the pursuit of truth. It's worth keeping in mind at least, as a possible different path that eschews the use of justification and "good reasons" but retains the use of reason and truth as the aim of science. If ever you stop believing in miracles, you need not despair of reason just yet, give Popper's view a shot first :)

I'll leave you with a final Popper quote:

And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: ‘How can we hope to detect and eliminate error?’ The question of the sources of our knowledge, like so many authoritarian questions, is a genetic one. It asks for the origin of our knowledge, in the belief that knowledge may legitimize itself by its pedigree. The nobility of the racially pure knowledge, the untainted knowledge, the knowledge which derives from the highest authority, if possible from God: these are the (often unconscious) metaphysical ideas behind the question. My modified question, ‘How can we hope to detect error?’ may be said to derive from the view that such pure, untainted and certain sources do not exist, and that questions of origin or of purity should not be confounded with questions of validity, or of truth. …. The proper answer to my question ‘How can we hope to detect and eliminate error?’ is I believe, ‘By criticizing the theories or guesses of others and – if we can train ourselves to do so – by criticizing our own theories or guesses.’ …. So my answer to the questions ‘How do you know? What is the source or the basis of your assertion? What observations have led you to it?’ would be: ‘I do not know: my assertion was merely a guess. Never mind the source, or the sources, from which it may spring – there are many possible sources, and I may not be aware of half of them; and origins or pedigrees have in any case little bearing upon truth. But if you are interested in the problem which I tried to solve by my tentative assertion, you may help me by criticizing it as severely as you can; and if you can design some experimental test which you think might refute my assertion, I shall gladly, and to the best of my powers, help you to refute it.

So while Newton's L.O.G. is just a guess from an epistemological standpoint, I am also tentatively accepting it as true. I claim it really is true, and I act upon that belief, although my belief in that is just a guess. Does that satisfy what you felt was missing from my position?

The difference I was trying to elucidate with the missile defense system example was a difference in the degree of confidence you would have between two theories A and B, both of which have been tested, neither of which has been disconfirmed, but one of which has been tested more thoroughly (or, for whatever reason, you have more confidence in). The crucial issue is a difference in degrees of confidence (or what Popper called degree of corroboration) between two hypotheses, neither of which has been falsified.

The hypothesis that has passed its tests I can tentatively accept as true, and I prefer the course of action based on that hypothesis. If both hypotheses have passed all their tests, I would try to conceive of a test that distinguishes between them

This is not the situation I was describing. In the hypothetical, the two laws are in different domains (gravity vs. quantum computing), possibly for different purposes (say, missile defence vs. airplane autopilot) and one is better established (or better corroborated) than the other.

Like I said, if both theories A and B have passed all their tests, the evidence says nothing about them. We are free to tentatively accept them as true. We don't have to, though - my guess might be that quantum computing theory is not true, or it might be that I think that quantum computing has been only weakly tested and I'm not willing to bet on it working for my missile defense system. That's fine, but that is the part where I conjecture/guess at the truth. We don't disagree about my mental process, it's just that I think it's conjectural and not warranted by the evidence - the evidence can't tell me what to think and which bet to make and which hypothesis to prefer, the evidence can only contradict a hypothesis and thus force me to reject it if I accept the evidence as true. Everything else is me making my best guess. I'm free to describe my mental state as "very confident" in that process, but that describes my state of mind, not the state of the evidence.

if both theories A and B have passed all their tests, the evidence says nothing about them. We are free to tentatively accept them as true. We don't have to, though - my guess might be that quantum computing theory is not true, or it might be that I think that quantum computing has been only weakly tested and I'm not willing to bet on it working for my missile defense system.

I think I am beginning to understand your position better. So, here is my question. Do you think that the preference for acting on a better-tested theory over acting on a worse-tested theory is an arbitrary, subjective preference? like, some people like chocolate; some people like vanilla; different strokes? I assert that it is only rational to be more willing to act on a better tested theory.

We are free to tentatively accept them as true. We don't have to, though.

When did anybody ever have to accept a theory? By have to do you mean rationally ought to? If rationally ought to is what you mean, then, as I said, I disagree.

I think I am beginning to understand your position better. So, here is my question. Do you think that the preference for acting on a better-tested theory over acting on a worse-tested theory is an arbitrary, subjective preference? like, some people like chocolate; some people like vanilla; different strokes? I assert that it is only rational to be more willing to act on a better tested theory. When did anybody ever have to accept a theory? By have to do you mean rationally ought to? If rationally ought to is what you mean, then, as I said, I disagree.

Questions of subjective/objective are always tricky, and I can answer this question on several different levels. Those who think rationality can lead to justified beliefs think that justification and evidence can make it so that we objectively rationally ought to believe a justified theory, as you say. Popper and I reject this. Theories (or beliefs in general) cannot be justified. At all. However, if we are interested in finding the truth (and this is also a subjective goal, one might be more interested in, say, propaganda), we should try to eliminate any erroneous beliefs that we have, and our tool for this is rational criticism and experiments. So we should try to deploy these tools as much as we can if we are interested in the truth, and we thus want our theories to be as severely tested as possible. No matter how well-tested, however, our theories remain conjectures tempered by rational criticism.

We are also not mandated by reason (in Popper's view of science) to prefer the better-tested theory. It's not the case that we rationally ought to accept the better tested theory. We could for example be super stoked about a poorly tested theory in preference to a better tested one - but the thing to do then is to try and come up with stronger tests of our preferred poorly tested theory, since in the search for truth we should try to test our theories as strongly as possible in order to eliminate error. This is subjective in the sense that our preference for a theory is our decision, but it's not like a preference for an ice cream flavor - we deploy rational evaluation and empirical experiments to the best of our ability in order to try to guess at the truth and eliminate errors, which we do not do in our ice cream preferences. This use of the rational method of criticism in the search for truth is what makes the difference and what makes our decision rational in the sense of using critical reasoning, although this provides no objective justification for our decision and it does not tell us what we rationally ought to believe.

Those who think rationality can lead to justified beliefs think that justification and evidence can make it so that we objectively rationally ought to believe a justified theory

There is a nuance to my position that this glosses over. In my view, scientific epistemology is not just a matter of ought vs. ought not; it is a matter of rationally obligatory degrees of preference for better-tested theories, on a continuum. However, when one theory is better tested than another on this continuum, and on some occasion we have to choose between the two, then we rationally ought to trust the better-tested theory on that occasion.

This is subjective in the sense that our preference for a theory is our decision, but it's not like a preference for an ice cream flavor

If I understand your position correctly, it is an awful lot like the preference among ice cream flavors. Let's say you have to choose from chocolate, vanilla, and strawberry -- but you know the strawberry is poisoned. So strawberry is not a viable choice, but the choice between chocolate and vanilla remains wholly subjective. Similarly, (in your view as I understand it) when choosing among alternative theories to act on, the choice among those theories that have not been disconfirmed is a subjective preference as much as chocolate vs. vanilla.

For example, suppose a person has a choice between action A and action B, and that their goal in making that choice is to maximize the likelihood that they will continue living. Action A maximizes their chance of surviving if a certain viable (tested, not disconfirmed) theory is true, and B maximizes their chance of surviving if a certain other viable theory, in another domain, is true. They know one of those theories is substantially better confirmed than the other by every relevant criterion (say, the law of gravity vs. the most recent discovery in quantum computing). I say there is only one rational action in that scenario (trust the better tested theory). Do you say the same or different?

My position is that no actions or beliefs are "rational" in this sense, of being justified or mandated by reason. Actions or beliefs can be rational in the sense that we have deployed the method of rational criticism (and, if possible, empirical testing) in order to eliminate errors, with no justification/warrant/likelihood/etc. being involved at any point. So the contents of a belief don't determine its rationality (reason doesn't tell you what to believe), but the methods we have used in order to try to find errors in that belief can be rational. A choice can be rational if we've employed critical thinking in making it, and this is the only sense in which decisions can be rational, since justification is not possible.

In comparison to ice cream preference, yes, both are arbitrary in the sense that we have to judge for ourselves (we are the arbiters of) what to believe/which ice cream to like. But we generally don't employ critical discussion and experimentation in our ice cream choices, although we certainly can. Again, it's the methods of critical analysis and experimentation that are rational, and a decision can be made with deliberation and with the use of reason, in contrast to a preference for ice cream, which usually does not involve this. But the beliefs or actions themselves can never be rational in the sense of justified, warranted, mandated by reason, etc.

As for your law-of-gravity vs. quantum-computing example, it's slightly confusing to me. Does option B, which uses quantum computing, go against the law of gravity? If so, I would reject it, since I believe the law of gravity to be true (tentatively, without justification). Or does option B use both the law of gravity and quantum computing? In that case I'm not really choosing between gravity and quantum computing, but whether to additionally also use quantum computing in my plan, in which case how well-tested quantum computing is compared with gravity is not really relevant, since I'm using gravity as well.

In general, my view of the preference for the better-tested theory (and my reading of Popper's opinion here) is that this is soft rule-of-thumb methodological advice, but not a "rationally ought" rule. Since we want to test our theories as severely as possible in order to hopefully eliminate error, all else being equal we should prefer the better-tested theory - but not in the sense of "rationally ought", rather in the sense of "let's test as much as possible". But all else is rarely equal, and "better tested" is not an exact calculation. So it is rather like the advice "it's a good idea to castle your king in chess". Yes, that's good advice, but it's not necessarily always the best choice, and you are not "irrational" for deciding not to castle. A clearer formulation of this advice has been advanced by Miller, Popper's former student, who formulates this stuff much more dryly than Popper but in a way more suited to the style of modern analytical philosophy (Out of Error, p. 124):

Prefer the practical proposal that best survives critical scrutiny is more transparent and more obviously sound advice than Act on the best-tested theory, which is often not real advice at all. What must not be admitted is the suggestion that a proposal that has been subjected to critical scrutiny, and has survived it, thereby qualifies as a better proposal than one that has not been subjected to critical scrutiny. That would convict deductivism not only of inductivism but of clairvoyance, and even inductivists and justificationists can be expected to resist a claim at once so far-seeing and so reactionary. Even the advice Prefer the practical proposal that best survives critical scrutiny is defective in this respect. Since subjecting a proposal to criticism is itself a practical action of a kind, it cannot, on pain of infinite regress, always be ill advised to try something yet untried. It is not of course being suggested that it is a mistake to prefer or to adopt the best-criticized proposal, only that it need not be a mistake not to do so. At this point considerations of utility often intervene. The correct advice is, as usual, negative: Refrain from any practical proposal that does not survive critical scrutiny as well as others do. Observe that someone who rejects this advice will at once be vulnerable to critical attack.

More comments

The scientific method is believing things based on evidence.

Religious faith is believing things not based on evidence.

“Evidence” is doing a lot of work in those statements, of course. There is no One True Definition of scientific evidence. It’s almost easier to say what it isn’t. For example, receiving revelation in a dream, or any other special knowledge that can’t be tested or verified by others. “It made me feel warm inside” is no way to practice epistemology, because human emotion is too volatile. And “we explain the inexplicable with further inexplicabilities” leaves something to be desired as far as searching for truth goes.

Scientific knowledge is true insofar as it can accurately represent the workings of the universe. Theory often comes after experimentation. Theories often need to be refined or get superseded by new knowledge. What’s important is the process of refining our understanding, not a single point in time on any given theory.

The Wright Brothers achieved flight before the physics were understood. My understanding is that the physics of lift are still contested. We use electricity to make sand do math and other thinking, but my understanding is that the physics of lightning are still contested. We started vaccination before understanding all that much about germs and viruses.

All this is to say that you are making a category error, common among classical philosophers, of judging science by the benchmarks of classical philosophy and finding it wanting.

Science is what works; not what is sacred or revealed from mysterious sources. If something doesn’t work, we can and should discard it. That’s the nice thing about iterative systems open to feedback and dedicated to improvement.

Religious faith is believing things not based on evidence.

I respectfully disagree. This is a common strawman of "faith". Allow me to offer a better definition:

Faith is believing in something that can neither be proven nor disproven with existing methods.

Religious faith applies this to transcendental concepts.

Now, of course, making decisions and casting judgments based wholly on religious faith creates problems, especially in a pluralistic democracy. That's a different discussion. I'm scoping my comments only to a beginning definition of "faith."

The keen among you might realize that this definition of faith covers things that aren't explicitly religious: "gut feelings", "intuition" and the like. I happen to agree with you. In fact, I believe that all humans must exercise some level and version of small-f "faith" in order to function. A purely rational optimization pattern of thought would make it impossible to get out of bed in the morning ("which foot should I put on the floor first, should I wait another 7 minutes to get up to optimize my post-REM wakefulness, is there too much or too little light in the room").

Blind faith - believing in something despite contradictory evidence or simply never even allowing that evidence to enter into your calculations - is bad and exists in myriad domains outside of religious faith. Currently, there's a lot of it in politics. It's a common human cognitive failing based on confirmation bias and the need for belief-decision-identity consistency.

True faith (and True Faith) is a demanding epistemic situation. You have to hold multiple things in your head at once:

  1. I believe X
  2. I cannot prove X
  3. X cannot be disproven
  4. Y, which may directly contradict X, is also a possibility, but I deem it less probable than X
  5. Points 2 and 3 may or may not also apply to Y, and Z, and A, and B
  6. I choose to retain my belief in X, knowing that points 2 - 5 still apply and may, in fact, apply infinitely regressively.

Faith is not for the feeble of mind yet must only be held with a poverty of spirit (read: poverty of passion).

You can try to accuse me of strawmanning Religious Faith, but I was raised in a devout religious environment and personally experienced how it works. Different strains of Christianity have their particular spins on it, but there’s a lot of commonality in at least how it’s scripturally defined and commonly practiced.

Bringing up little-f faith is a red herring with respect to critiques of religious epistemology. Obviously, humans go about our days dealing with uncertainty and have to rely on heuristics and gut feelings. The motte/bailey between “regular faith” and “Religious Faith as an epistemology” is obnoxious and frequently invoked to shield religious beliefs from direct critique. It’s probably necessary to taboo the word “faith” altogether to avoid these kinds of issues.

So too is "blind faith" unhelpful to discuss, in that it is typically something everyone can agree is bad and that religious types will deny they are doing.

It is certainly the case that Faith or any other label for “faulty epistemology” can exist in non-religious contexts. Many ideologies rely on credulity and insufficiently examined claims to survive. In fact, science is hard and scientists fail regularly to do it well (and can be wrong even when doing it as well as they could). That’s why it’s so critical that science is iterative, with no special authorities or sources of knowledge. Human nature is not by default well-calibrated for consistent scientific reasoning.

Religious epistemology does not have standards of evidence that satisfy science, or even other secular frameworks, such as law. It does tend to have special authorities and sources of knowledge. Religion can iterate and change, but it tends to be haphazard and so rarely results in more consensus on any given religious concept or interpretations of god’s will—even within one religion.

Religious beliefs have to be justified via a special religious epistemology because they cannot withstand scrutiny from an actually effective and consistent epistemology. It’s simply special pleading and inconsistent standards backed by tradition.

When you say:

Faith is believing in something that can neither be proven nor disproven with existing methods.

Religious faith applies this to transcendental concepts.

You’re making a few major mistakes. One is that “existing methods” is basically “god of the gaps” and it ought to be embarrassing to invoke.

Two, “proven nor disproven” is to frame things wrongly. If good evidence sufficient to justify a belief probabilistically can’t be obtained, then saying “well you can’t disprove it so I can maintain my belief” is not a logical stance. I don’t need to disprove there’s an incorporeal dragon in your garage to dismiss it as extraordinarily unlikely.

Three, historically (and in many cases to this day), Religious Faith is not merely applied to whatever “transcendental concepts” are. Religious Faith has retreated enormously as science has progressed, because science actually worked no matter what your religion is. E.g. no need to worry so much about casting out devils as medical science improved.

Relatedly, “nonoverlapping magisteria” really doesn’t get you very far because science has this pesky habit of intruding. For example, Christians typically have strong doctrinal and personal beliefs about souls and prayer. Unfortunately, “souls” do not exist, unless they are somehow neither matter nor energy. Same situation as ghosts and other such phenomena. The mind is what the brain does, which we can demonstrate in a myriad of ways. Similarly, “prayer” as a way to communicate with deity or to seek causal impact or special knowledge is consistently shown to just not be a thing. Same situation as mediums and fortune tellers.

The trick that worked for me was examining other religious beliefs and finding them sorely lacking (as encouraged by my religion). Eventually, those critical tools of logic and reason came for my own religious beliefs.

Religious epistemology does not have standards of evidence that satisfy science, or even other secular frameworks, such as law.

I agree. And I am happy about this. I don't think it makes sense to apply the rules of one domain to the evaluation of another. We don't evaluate basketball players with the rules of baseball.

It does tend to have special authorities and sources of knowledge.

Sure does. And I get worried when those authorities interject into other domains. For instance, Pope Francis is a communist and, therefore, I don't really like his political takes.

Religion can iterate and change

Some of them can, some of them choose not to. Much of the time, when a big enough change occurs, some subgroup splinters off.

but it tends to be haphazard and so rarely results in more consensus on any given religious concept or interpretations of god’s will—even within one religion.

Building consensus in matters pertaining to the Lord of The Universe? That's how you get a crusade going. Forget consensus, we're looking for Truth (and not in your fiddle-faddle science and law concepts of truth. Space travel and infinite energy? Boring).

Religious beliefs have to be justified via a special religious epistemology because they cannot withstand scrutiny from an actually effective and consistent epistemology. It’s simply special pleading and inconsistent standards backed by tradition.

Again, I see this as essentially saying "Basketball players keep violating the rules of baseball! How dare they!"

You’re making a few major mistakes. One is that “existing methods” is basically “god of the gaps” and it ought to be embarrassing to invoke.

I'm too dumb and unread to know what you're saying here. Please clarify.

Two, “proven nor disproven” is to frame things wrongly. If good evidence sufficient to justify a belief probabilistically can’t be obtained, then saying “well you can’t disprove it so I can maintain my belief” is not a logical stance.

So, if it ain't Bayesian it ain't right? You say I'm framing things wrongly ... but I think you just propose a different frame than mine and then make a value judgement about the "rightness" of my framing. Could you maybe try to make an argument for why your framing is a better overall approach to the subject at hand?

I don’t need to disprove there’s an incorporeal dragon in your garage to dismiss it as extraordinarily unlikely.

Editor's pen here: "I don't need to disprove to dismiss." This is correct. Dismissing is a personal choice and I respect it.

Religious Faith has retreated enormously as science has progressed, because science actually worked no matter what your religion is.

"Worked" .... in which domains? What's the scientific take on the concept of justice?

Unfortunately, “souls” do not exist, unless they are somehow neither matter nor energy.

Say it with me .... Physical ...

The mind is what the brain does

... Materialism.

I understand the attraction of materialist philosophy. If that's your firmly held position, we're just not going to agree and that's fine. I would love your thoughts on the idea of personal responsibility, however.

Similarly, “prayer” as a way to communicate with deity or to seek causal impact or special knowledge is consistently shown to just not be a thing.

So you don't believe I can communicate with a thing you don't believe in. I am shocked, shocked!

The trick that worked for me was examining other religious beliefs and finding them sorely lacking (as encouraged by my religion). Eventually, those critical tools of logic and reason came for my own religious beliefs.

So you debugged the program from within it, eh? Pure rationalism triumphs.

I can't buy it. One thing even the most religious and most ... good at logic people ... agree on is that humans are emotional beings prone to all sorts of self-serving cognitive failures. I'll admit that my belief in the Magical Sky Man is kooky, but it somehow seems a better premise than "I solved my own brain with my brain".

Comparing one sport to another is still within the domain of sports. The rules are different in any particular case, but it’s not a fundamentally different category where say the laws of physics or other fundamental facts about reality change.

Science deals with reality as we can understand it. Religion seems to not do that so much. The fact that religions don’t tend to come to consensus on much of anything over time is pretty strong evidence there is no underlying system of discovering truth.

You’re leaning hard into nonoverlapping magisteria. It’s not very trad but it is common.

If you read an article on “god of the gaps” you should be able to see your point about “existing methods” is doing the same. Applied backwards, it makes religious believers seem naive. So too using it now.

You’re bringing up Bayes when there’s no need to. It’s elementary logic that you shouldn’t look to “believe what can’t be disproven” vs. “believe what there is evidence for.” The possibility space of the former is infinite; the latter is constrained by reality if you have good standards of evidence. “What must I believe” vs. “What can I get away with believing.”

Not sure how familiar you are with various forms of trad religious legal systems, but I’ll take secular legal systems informed by modern concepts of science and reason. Secular philosophy is rich on the questions of justice and personal responsibility, and scientific principles and findings influence most of us who care to think about such things.

Methodological naturalism is true because it works. Anyone can use it, even the religious. Anyone can run and observe experiments that show the mind-brain connection, and the lack of evidence for any concept of a soul.

I think you’re failing to understand the model here. I don’t need to believe in your god or anyone’s beforehand for you to demonstrate solid evidence something strange is happening via prayer. If your god stretches forth its hand to affect the material world, as so many claim it does, then where is the evidence? Trying to philosophize about the limitations of materialism is irrelevant unless your god never comes into that domain.

You also have the causation backwards: I disbelieve in god because people claim so much about eg prayer but can show so little evidence to back those claims. But even if I had started from a null position, the burden is on the claim being made. Why is the omnipotent creator of the universe such a shy fellow and why do his believers talk him up so much with so little hard evidence brought to bear?

Nowhere did I claim I solved my own brain with my brain. I used tools to disprove certain ideologies, then to disprove the one I had been raised with. The facts were relatively easy; the emotions and conditioning were far more challenging.

Faith is believing in something that can neither be proven nor disproven with existing methods.

I think the idea of faith as firm belief without evidence is a power grab by fundamentalist clergymen who do not want to be questioned on their axioms. My idea of faith is something entirely different. When the Marines say semper fidelis (always faithful), they are not talking about firm belief in the Marine Corps without evidence; they are talking about doing their duty in relation to the Corps.

To illustrate, imagine (God forbid) my wife is in a coma, it is unknown whether she is brain dead, and it is unknown whether she can perceive what is going on around her. I have two choices: (1) I can say, well, she may well already be gone, so I am going to leave her alone in the bed indefinitely, or (2) I can say she might be in there; if she is, she wants to hear from me, so I will go sit by her and hold her hand and talk to her until the situation is resolved for better or worse. That is faith in the sense of faithfulness (Greek pistis, Latin fidelis) in the relationship -- even a relationship with someone who may or may not be in there (or out there), as long as there is hope.

I agree with your sentiment wholeheartedly even if it looks like there is some distance between us semantically.

I would also say that your (again, God forbid) wife-in-coma scenario reveals what I believe to be the fact that all humans have a natural impulse towards what we would term faith. It may be utterly a- or even anti-religious, and it's often poorly developed and formalized, but the innateness of that desire remains. I think it has to be, in order to propagate the species. There are certainly times when things look forlorn and all available data might point to hitting your own off button to unalive yourself. You need either a strong intuitive volition to not do that (faith) ... or the mental acuity of Mr. Big Brain himself, Sam Harris, to jiu-jitsu rationalize your way into it.

Focusing on the “trust” aspect instead of the “belief” aspect of the word “faith” is not helpful in an epistemological discussion.

To trust in a deity, one believes it exists.

Those Marines doing their duty are not doing so with an epistemology that requires them to believe in anything without regular boring evidence.

Similarly, people do recover from comas. No special beliefs required.

In contrast, hoping/trusting/believing in something like a deity that may or may not be out there is in a different category of belief. How do you know?

I think the idea of faith as firm belief without evidence is a power grab by fundamentalist clergymen who do not want to be questioned on their axioms.

I guess it’s too bad the faithful can’t come to a shared understanding on how it is they ought to develop their beliefs regarding deity and theology.

Is this site fundamentalist in your view? (I assume you think Evangelicals count as such.)

https://www.christianity.com/jesus/following-jesus/repentance-faith-and-salvation/what-does-faith-mean.html?amp=1

These?

https://www.archspm.org/faith-and-discipleship/catholic-faith/what-is-faith-how-does-it-tie-in-to-what-we-believe-as-catholics/#

https://www.churchofjesuschrist.org/study/manual/true-to-the-faith/faith?lang=eng

I don't think there's anything miraculous about the fact that as we've observed the wondrous variety of natural phenomena, it's been possible to pick out a few aspects that can be reliably, approximately explained with reference to simple mathematical rules. It's just selection bias that we hype these singular aspects of nature where it does work well. The vast majority of our observational data has resisted lossless compression -- it's only reasonably predictable through extensive particular knowledge, if at all. Various non-physics authors have drawn attention to this as the "Unreasonable Ineffectiveness of Mathematics" in their domains of study. I get suspicious every time somebody holds up the law of gravitation as a "representative" outcome of scientific inquiry.

But what in the world makes you think that the source code of the universe is so neat and simple

Because the universe empirically works. It doesn’t appear to be particularly random, and where it is, that randomness disappears as appropriate. There doesn’t seem to be enough room in the universe to support infinite complexity - where would it go and to what end? If the complexity disappears at macroscopic scale, what could it actually be doing? Bounded complexity and a few centuries of progress point towards understandability.

I’d agree there is no reason to extrapolate theorems towards zero or beyond a single decade of supporting evidence. For your specific questions about gravity, I’d reply: no, -, very high.

I think you overstate your thesis here.

I'm routinely pointing out to the resident empiricists that Empiricism requires belief in the logical consistency of the universe and generally our ability to observe enough to guide us to an accurate model. These, as you point out, are purely articles of faith and, considered by themselves, truly miraculous. Kant certainly seemed to think that God was required to ground reasonable belief, but this is a conviction, not a necessity.

It is perfectly consistent for empiricists to retreat to a secure motte here: That indeed, Science does not make any claim to the Truth, and is only a practical mechanism to model nature. Naturalists are fond of embracing this solely convenient mode for Science, and will (rightly in my opinion) point out that similar behaviors can be observed everywhere in nature and with different tradeoffs that make different methods of modeling comparable but not strictly ordered in utility.

Positivists do not have this luxury of course, but then again, Positivism has been soundly discredited, as you know, in part because of this.

If God and other metaphysical groundings are to be brought in, it should not be in so cavalier a manner as to merely notice that logical consistency itself is axiomatic.

Sophisticated atheists have been arguing for quite a while that, when we reach axioms, we may very well believe in the axioms for themselves instead of invoking the convenience of a single grounding, be it a Prime Mover, or Spirit, or what have you.

You know, Yudkowsky still writes new things, as deeply unfashionable as it is around these parts to praise him.

So I present 'Empiricism!' as Anti-Epistemology. It's certainly relevant, especially to philosophy of science and your own post.

But if I had to speak for myself, the Scientific Method is just a bounded, modestly idiot-proofed form of Bayesian reasoning, generated after centuries of adjusting for human stupidity and adversarial incentives. When you look at it that way, there's no reason to have to argue ad nauseam about what counts as significant and well-established. You have priors and posteriors, and rules to work with. Shut up and multiply.
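For concreteness, "shut up and multiply" cashes out to a single application of Bayes' rule to a binary hypothesis. A minimal sketch, with hypothetical numbers chosen purely for illustration:

```python
# One Bayesian update on a binary hypothesis H given evidence E:
#   P(H | E) = P(H) * P(E | H) / [P(H) * P(E | H) + P(~H) * P(E | ~H)]
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior P(H | E) from a prior and the two likelihoods."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# Start at 50% credence; observe a result 9x more likely if H is true
# than if it is false.
posterior = bayes_update(0.5, 0.9, 0.1)
print(round(posterior, 3))  # 0.9
```

Repeated independent pieces of evidence just multiply likelihood ratios into the odds, which is where the "multiply" comes from.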

the Scientific Method is just a bounded, modestly idiot-proofed form of Bayesian reasoning.

I do not see anything Bayesian about the scientific method. When I pick up the text for the lab component of a college course in physics or chemistry and look to see whether any priors, conditional probabilities, or posteriors are written down in it, I predict that the median number of Bayesian inferences I will find over the course of 15 experiments is zero. Here is one such text: https://www.lehman.edu/faculty/kabat/PHY166.pdf (I selected it because it was the top hit in my Google search that had a full PDF, but if you think I'm cherry-picking you are welcome to try a different one). There is no Bayesian reasoning in that text, nor do I recall ever seeing any in the half dozen lab science courses I took in high school and college. I think the same will be true if you look, not at an undergraduate course, but in a physics or chemistry journal.

But if what physicists are really doing is a special case of Bayesian inference, I find it peculiar that they do not seem to know what they are doing, because they sure don't talk about it that way. So I'm curious what makes you think they are. It is a pretty important question to me, because if you can show me how typical forms of experimental reasoning in the physical sciences are Bayesian, or in any way probabilistic or statistical, that would disprove the miraculous aspect of its success.

What humans are doing is Bayesian reasoning, at least if you subscribe to the Predictive Processing model of cognition, as I (and Scott, amongst others), tentatively endorse.

People, in general, are perfectly capable of modifying their beliefs in a Bayesian manner without remotely holding the idealized version of Bayes' rule in their skulls, at least above the level of groups of neurons for whom it comes naturally.

In physics, it is implicit. You will find all kinds of mention of how unlikely it would be for certain varieties of observations to be seen were it not for X model that pre-emptively expects them to be the case (or at least doesn't conflict with them in hindsight).

You do not need them to write it out, any more than they need to invoke the tenets of ZFC when they add two numbers together. But then again:

When I pick up the text for the lab component of a college course in physics or chemistry, look to see if there are any priors, conditional probabilities, or posteriors, written down in it

What is a p-value? It's not a urine dipstick test I can tell you. Still can't read most papers without tripping over one. Don't ask me if they're using it in frequentist or Bayesian terms, but there's a conditional probability for you.

(Now, I can assure you that while Bayesian reasoning is not foreign to medical doctors, at least the ones who do actual research, the majority of doctors on the ground would recoil from that simple equation, or simply be confused at first sight. Doesn't mean they aren't using it, either explicitly or implicitly. The same goes for physics.)

What is a p-value? It's not a urine dipstick test I can tell you. Still can't read most papers without tripping over one. Don't ask me if they're using it in frequentist or Bayesian terms, but there's a conditional probability for you.

It is not generally true that "you can't read most papers without tripping over one [p-value]". There is a thread of truth to this in medicine and the social sciences, but not in the physical sciences. More importantly, I think the duality you are looking for is parametric vs. Bayesian, not frequentist vs. Bayesian. The tool of p-values is part of parametric statistics, which is the main alternative to Bayesian statistics. If you see a paper with a p-value, it means they are not using Bayesian updating -- so the thing you keep tripping over is evidence against your thesis.

Finally, p-values, while they may not be urine dipsticks, are also not conditional probabilities. A conditional probability is the probability of A given B, where A and B are events in a probability space. A p-value, on the other hand, is the probability of an event A in a probability space conditioned on the value of a fixed-but-unknown parameter of that probability space. That is why parametric statistics does not use Bayes rule.
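The distinction can be made concrete with a toy coin-flipping example (my illustration, not from the thread): the p-value computation holds the parameter fixed at a hypothesized value and never invokes a prior, while the Bayesian computation puts a distribution over the parameter itself and applies Bayes' rule to it.

```python
from math import comb

def binomial_p_value(k, n, p0=0.5):
    """One-sided p-value P(X >= k): the parameter is FIXED at p0; no prior, no Bayes."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

def posterior_prob_biased(k, n, grid_size=1000):
    """P(p > 0.5 | data) under a uniform prior on p, via a simple grid approximation."""
    ps = [(i + 0.5) / grid_size for i in range(grid_size)]
    likes = [p**k * (1 - p)**(n - k) for p in ps]  # binomial likelihood (constant dropped)
    total = sum(likes)
    return sum(l for p, l in zip(ps, likes) if p > 0.5) / total

# 14 heads in 20 flips:
print(round(binomial_p_value(14, 20), 4))      # tail probability computed under p = 0.5
print(round(posterior_prob_biased(14, 20), 4)) # posterior probability the coin favors heads
```

The first function conditions an event on a parameter value; the second treats the parameter as a random quantity and updates a prior over it. Only the second is Bayesian updating.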

Correct me if I am wrong, self_made_human, but it seems to me that the unstated premise of your position is this: if someone holds an uncertain belief, and then they see something, and they revise their degree of certainty based on what they saw, and if they are acting rationally, then they must be doing Bayesian updating. Do you affirm that?

I think a lot of people fall into the trap of thinking probabilities are the only rational way of representing and reasoning with uncertain information because, unless they take an AI class, it is the only method covered in a typical undergraduate curriculum. This leaves them with the impression that "probability" means degree of belief, "probability theory" means the logic of reasoning about degrees of belief, and that the problem has been settled of the right way to do such reasoning. If all of that were true, and if Bayes rule were the only way to update beliefs using probability theory, then the unstated premise above would be correct. The problems are that (1) none of that is true, and (2) even when we use probability theory to update our beliefs, we are not always using Bayes rule.

Probability theory is actually a specific set of axioms that constitutes one particular way of reasoning about degrees of belief. There are well-developed alternatives to probability theory -- including certainty factors (as used in Mycin: https://en.wikipedia.org/wiki/Mycin), Dempster-Shafer evidence theory, backpropagation (as used in large language models such as ChatGPT), and many others, which are often more effective than probability theory for particular applications -- none of which use Bayes formula or can even incidentally be described by it. Moreover, even among belief-updating methods that do use probability theory, the most frequently used approach in the scientific literature is parametric statistics -- which (as I point out in a separate reply) does not use Bayesian updating.
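As a concrete instance of a non-probabilistic calculus, the Mycin-style combination rule for certainty factors can be sketched in a few lines (this follows the published CF combination rule, not any actual code from Mycin):

```python
# Mycin-style certainty factors: degrees of belief in [-1, 1] that are
# explicitly NOT probabilities and are not combined by Bayes rule.
def combine(cf1, cf2):
    if cf1 >= 0 and cf2 >= 0:
        # Two supporting pieces of evidence reinforce each other.
        return cf1 + cf2 * (1 - cf1)
    if cf1 <= 0 and cf2 <= 0:
        # Two disconfirming pieces of evidence reinforce downward.
        return cf1 + cf2 * (1 + cf1)
    # Mixed signs: conflicting evidence partially cancels.
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

print(round(combine(0.6, 0.5), 3))   # 0.8
print(round(combine(0.6, -0.4), 3))  # 0.333
```

Note that no prior, likelihood, or normalization appears anywhere; the update rule is simply a different algebra of belief from the probabilistic one.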

If you claim that physicists, for example, routinely use Bayesian updating, and you claim to hold that belief for a good reason, then you should be able to give evidence that they are thinking in terms of conditional probabilities (satisfying the axioms of probability) and updating them by Bayes equation -- which is a much more specific claim than that they merely change their degrees of belief after making observations in an effective manner.

Can one shut up and multiply their way to a problem of evil? Can you, like, multiply a quaternion by an elliptic curve, and it somehow pops out in there somewhere?

Oh yes, quaternions are an example of a necessary evil, or so programmers trying to work in 3D tell me. Whatever it takes to avoid gimbal lock must be necessarily good. I'm sure the NSA might call elliptic curves evil, depending on whether or not they had a chance to put a backdoor or two in cryptographic standards.

Don't be facetious. Or at least remember your own demands to "speak plainly". You need unprincipled* definitions of what counts as "evil" (or good) before you can start. Once you have a working definition, based on whatever moral intuitions you spawned with, then yes, you can do moral calculus and figure out an optimum, but your ability to do so as a computationally bounded entity will always be limited. That doesn't mean you can't try. Though you don't need quaternions or elliptic curves; addition, subtraction, or multiplication should suffice. Certainly mere multiplication of massive numbers of matrices seems sufficient to create intelligence by itself, which is a necessity for discerning good from bad.

*Feel free to dispute this when you can argue Socrates into the ground.

I rather like being playfully jocular or humorous. It really makes it plain and clear what's going on. Let me see if I can understand what you're saying. One can just pull some unprincipled definition of what counts as "evil" (or good)... any definition, literally does not matter. Like, your entire moral system could just be, "Ponzi schemes are good, actually," and that's it. That's all you've got. Then, we don't even need the bad old methods of calculation that required quaternions or elliptic curves; we can just add, subtract, and multiply. I'm not actually sure what to do in the next step. Is there another form of Bayes' rule that I can use to make progress? Maybe an inscrutable matrix form of it? I can't find one on Wikipedia. Maybe there's a NeurIPS paper you can link me to?

I'm not opposed to shutting up and multiplying, and I'm pretty good at coding. I just need to know what expression I'm supposed to use in my code. I'm tryna get to some way of commenting on the problem of evil.

This question is, in a strange way, sort of related to my own work that is squarely within my domain of expertise. There is a long history in the literature of a theoretical construct that is sort of related, in a way, to the actual thing we want to know. I observed that there are basically zero, AFAICT, papers out there that actually use the theoretical thing to go on and compute the thing that we actually want to know. It just doesn't seem to be a thing that you can actually do. So decades of papers just get to the point of the theoretical construct, and then stop. There is no actual coding of the thing we actually want - the thing that is actually useful - the thing that was the entire point of the investigation in the first place. It seems to be basically not possible to actually just shut up and compute it. And so the best paper of my career came at the problem from a completely different direction, saying that if we go a different route, we can have the thing that we want, with most of the properties that we were hoping to have. It has spawned a mini-literature of folks building on it now, since they're actually able to shut up and multiply now. So, by all means, let's figure out how to shut up and multiply our way from, "Ponzi schemes are good, actually," to a problem of evil. But it needs to be something that I can code, since that is the premise we're starting with.

I'm calling this kind of thing facetious. It's doubly laughable when you've been so humorless with me before, so I don't care to indulge your witticisms.

As far as I'm concerned, morality is arbitrary and inherently subjective; there's as much reason to think otherwise as there is to suspect a missing integer between 2 and 3, despite many people fervently wishing otherwise. I've discussed my thoughts on it at length before, and you're welcome to trawl through things if you want to dig it up.

And indeed, arbitrary means arbitrary. We thankfully don't have many people around who think Ponzi schemes are good (as opposed to people who think they're good because they're duped and don't understand them, a whole different kettle of fish). That is a fact about evolution, and the pressures exerted on it by game-theoretical concerns, and about us being extremely conserved and morally similar to each other compared to the entire set of possible minds and their moralities. Even monkeys and dogs care about "fairness" and dislike perceived injustice and hostility to the ingroup. Hence it's no surprise that human morality is so similar, and violent disagreements tend to happen at the edge cases where the concordance breaks down.

Your morality is what your morality is. Attempting to ground it will get you nowhere.

Thankfully, consequentialism, especially utilitarianism, can be computed explicitly. Look at GiveWell or the EA movement for a good effort with the latter.

So, by all means, let's figure out how to shut up and multiply our way from, "Ponzi schemes are good, actually," to a problem of evil. But it needs to be something that I can code, since that is the premise we're starting with.

Ah! I can make it really simple for you, without even getting into first principles:

  1. Find a paper on building the Transformer architecture/RWKV/Mamba or whatever is in vogue these days. I presume NeurIPS will suffice.

  2. Feed it a massive corpus of text. The scrapable internet is a good start.

  3. Stir the pot. (This is where the matrix multiplications come in, for those curious)

  4. Behold, an LLM with an intuitive understanding of morality.

(You may insert step 3.5 of RLHF if you're so inclined; it's not strictly necessary.)

Tada, taking mere matrix multiplication and your ability to code, you have created an intelligence with knowledge of morality, and can modify it further with RLHF if you're feeling safety-ist or want to twist it to arbitrary ends. All done with a computer! In code!

Easy enough for you I presume? I can show you about twenty proofs of concept.

As far as I'm concerned morality is arbitrary and inherently subjective

See, this is why I sort of don't believe you when you say that we should just shut up and multiply. It doesn't seem like that's really what you think. If it's, "Shut up and multiply, but oh by the way, you definitely can't multiply," then it really means, "Shut up."

Thankfully, consequentialism, especially utilitarianism, can be computed explicitly. Look at GiveWell or the EA movement for a good effort with the latter.

So, do you think this, or do you think that it's arbitrary, inherently subjective, and that we can't multiply?

Ah! I can make it really simple for you, without even getting into first principles:
  1. Find a paper on building the Transformer architecture/RWKV/Mamba or whatever is in vogue these days. I presume NeurIPS will suffice.

  2. Feed it a massive corpus of text. The scrapable internet is a good start.

  3. Stir the pot. (This is where the matrix multiplications come in, for those curious)

  4. Behold, an LLM with an intuitive understanding of morality.

Ok, so I guess now we have a third option. I could, 1) Shut up and multiply, but not multiply because that doesn't work, so just shut up, 2) Adopt consequentialism/utilitarianism and try to fire up a calculator, or 3) Just download some weights, hell, Llama-7b is enough, right? That'll then appropriately and accurately get me to a correct problem of evil?

Which one is the answer, because they're all different answers, and I suspect they'll give me different results. What does an empiricist as anti-epistemologist do with three different empirical answers to the same question?

See, this is why I sort of don't believe you when you say that we should just shut up and multiply. It doesn't seem like that's really what you think. If it's, "Shut up and multiply, but oh by the way, you definitely can't multiply," then it really means, "Shut up."

What on earth gives you that takeaway? It was never on the menu.

You can perform utilitarian calculus. GiveWell certainly adds, subtracts, multiplies, and divides to figure out the most cost-effective, hence in their lights most moral, means of saving or extending human life.

In practice, everyone is a consequentialist. Even the Deontologists who deny it; they just frontload it. Otherwise they would have no choice but to halt and catch fire whenever two Kantian imperatives conflict, and for all the ill I can speak of them, they're not that dysfunctional.

That'll then appropriately and accurately get me to a correct problem of evil?

Buddy. I clearly stated that morality is fundamentally arbitrary/subjective/observer dependent and as far as I can tell, there is no good reason to think that's not the case barring wishful thinking.

There is no "correct problem of evil". That is a malformed question built on faulty assumptions.

If you're a consequentialist, especially one of the more well-defined strains of utilitarian, you can very much shut up and multiply from base principles, like "saving human lives is good", with further adjustments like discounting by quality of life. Hence QALYs and DALYs. For the record, I am neither a utilitarian in the Benthamian sense nor an EA.
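The QALY-style "shut up and multiply" being described really is just arithmetic. A toy sketch with invented costs and effects (not GiveWell's actual figures) looks like this:

```python
# Invented numbers purely for illustration -- not GiveWell's estimates.
interventions = {
    "bednet distribution": {"cost_usd": 5.0, "qalys_gained": 0.005},
    "cataract surgery":    {"cost_usd": 500.0, "qalys_gained": 0.25},
}

# Cost per quality-adjusted life year: lower is better in this framing.
for name, d in interventions.items():
    cost_per_qaly = d["cost_usd"] / d["qalys_gained"]
    print(f"{name}: ${cost_per_qaly:,.0f} per QALY")
```

Note that the base principles ("saving human lives is good", quality discounting) enter as the choice of the QALY unit itself; the multiplication only starts once that axiological frame is fixed, which is the commenter's point.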

If you're not a consequentialist, well, you actually are, deep down, just to be a functioning human, but there is no right or wrong here that is independent of a predefined frame of reference. In other words, an observer.

If, somehow you're actually an LLM, then you've achieved a working understanding of gestalt human morality simply by being fed an enormous amount of text and then doing multiplication (of matrices). It is obviously doable. A randomly instantiated neural net will predictably achieve it.

See, this is why I sort of don't believe you when you say that we should just shut up and multiply. It doesn't seem like that's really what you think. If it's, "Shut up and multiply, but oh by the way, you definitely can't multiply," then it really means, "Shut up."

What on earth gives you that takeaway? It was never on the menu.

Because you're still saying stuff like:

I clearly stated that morality is fundamentally arbitrary/subjective/observer dependent and as far as I can tell, there is no good reason to think that's not the case barring wishful thinking.

That does not sound like something I can just multiply to get. If so, you could also just multiply, and we'd get the same thing. Because that's the bit about empiricism. You know, the article that you linked.

Whereas, again, this:

You can perform utilitarian calculus

is option 2. That's not "fundamentally arbitrary/subjective/observer dependent". It's a different option.

If, somehow you're actually an LLM, then you've achieved a working understanding of gestalt human morality simply by being fed an enormous amount of text and then doing multiplication (of matrices). It is obviously doable. A randomly instantiated neural net will predictably achieve it.

And this is a really weird option 3. So, yeah, I think you've again given me all three options. They're going to give me different answers. Which option is the right one? What does an empiricist as anti-epistemologist do with three different empirical answers to the same question?

There is no right or wrong option. To restate my point, that is a fundamentally confused question unless you begin with an axiomatic moral framing.

You cannot acquire morality from first principles. You can only execute on it once you have somehow acquired it, including, yes, with basic and complicated mathematics.


I understand why no finite amount of evidence can give you a statistical confidence of 1, but you go on to say that

there is no statistical law that would justify belief in the law of universal gravitation with even one tenth of one percent of one percent confidence, based on any finite number of observations.

Is this just because gravitation is claimed to be "universal" e.g. for all we know, gravity could suddenly change to work differently tomorrow, or work differently as soon as we leave the solar system?

it is a miracle that the scientific method works

Is it? Maybe since I live in this world, I am corrupted by it and I can't imagine it any differently. But: I cannot imagine a world where the scientific method doesn't work.

I think the Sun rises every morning because so far it has, but even if it didn't rise every morning, there would be hidden order to it. Maybe it rises every other day. Maybe on some mornings it rises, and on other mornings it doesn't - maybe I never learn to predict whether the Sun rises on a particular morning, just like how we can't really predict the weather, or which way a leaf blows in the wind. But if I spend decades failing to predict the Sun's rise, then tomorrow I expect it to be difficult to predict. If the Sun did alternate between periods of "rising every day for 10 days in a row" and then "a period of complete unpredictability," I've still summarized it with some compression, so I'm not totally ignorant.

I suppose a world that doesn't have this hidden order would essentially have to be free of cause-and-effect. In that world, I'm not sure how I could exist as a lawful being within it. Maybe there's an anthropic argument here?

Overall, your post seems to be a weaker form of what a lot of philosophical skeptics claim. Skeptics say things like "you can't know things with 100% confidence" and your post seems to just zero in on "the laws of physics, the source code of the universe." I'll reply to you the same way I reply to philosophical skeptics, which is: while it would be nice to know what is True, I'd rather send rockets to the moon anyways.

Is this just because gravitation is claimed to be "universal" e.g. for all we know, gravity could suddenly change to work differently tomorrow, or work differently as soon as we leave the solar system?

Yes, it is because of the claim of universality, but this is a different issue than skepticism about induction and causality a la Hume, or the laws of nature turning on a dime. It could be that even yesterday, there were unobserved exceptions to any physical law we think we know. In fact, the point of my argument is that we have no (non-miraculous) reason to doubt that there were.

Is it? Maybe since I live in this world, I am corrupted by it and I can't imagine it any differently. But: I cannot imagine a world where the scientific method doesn't work.

What I claimed is that we have no non-miraculous reason to believe that the scientific methods works, for purposes of inferring universal generalizations, even in this world.

Well, yes. Thinking our generalizations are universal would be equivalent to saying, "Science knows everything; we will not be amending our theories" which is not really how it works as far as I know? It seems anti-inductive to me, in fact, as so far science has only ever been wrong! So in the future I expect it to stay wrong! Obligatory link to a classic by Asimov.

I do wonder why people would be so obsessed with "Laws of nature," as you seem to be calling it the "Source code" of the universe. It seems (to me) more apt to describe scientific theories as working with some of the universe's internal APIs than working directly with source code. Still, there's a lot we can do with APIs.

It could be that even yesterday, there were unobserved exceptions to any physical law we think we know. In fact, the point of my argument is that we have no (non-miraculous) reason to doubt that there were.

I don't understand how this is different from skepticism in general. Like if I believe that apple pies can't spontaneously appear or disappear, by your reasoning do I have any non miraculous reason to believe that?

I don't understand how this is different from skepticism in general. Like if I believe that apple pies can't spontaneously appear or disappear, by your reasoning do I have any non miraculous reason to believe that?

It is different from more aggressive forms of skepticism in that I take for granted that the universe is governed by unchanging laws and that inductive reasoning is valid in theory. The principle of abductive inference says, in effect, if I cannot produce a counterexample, there probably are no counterexamples. This requires a certain level of facially hubristic confidence in the power of your mind, relative to the complexity of the system under study -- even if that form of reasoning would work on that same system for a sufficiently intelligent agent.

I must admit, though, that the law of conservation of apple pies strikes me as pretty non-miraculous. I will think that over and get back to you.

I'm interested but not sure I understand your argument.

If inductive reasoning is valid why can't we go from "all observed masses follow Newton's law" to "therefore all masses follow Newton's law."?

Simply because there could be an object that doesn't?

I mean yes, there could be (in fact, we know there are), but assuming I don't know that Newton's Law fails, that I've only ever seen otherwise, why am I not justified in believing it?

This is a good question.

If inductive reasoning is valid why can't we go from "all observed masses follow Newton's law" to "therefore all masses follow Newton's law."?

I think this puts the burden of proof in a strange place. The question is always why should we be able to make the inference, and according to what articulable rule of inference. But I will pick up the burden of proof and try to explain why we can't make that inference from "all observed P are Q" to "all P are Q", using the Raven Paradox.

Imagine that I see a few crows and note that they are all black, and I form the hypothesis that all crows are black. I begin to seriously pursue the matter by looking for crows, counting them, and noting their color. How many crows would I need to see, all of which are black, before I can conclude that all crows are black, or, more conservatively, that probably (more than 50% likely) all crows are black? Pick a number you think is reasonable. I'll say a hundred thousand; that sounds conservative.

Now the following is a theorem of first order logic: (for all x, P(x) => Q(x)) <=> (for all x, -Q(x) => -P(x)). Or to instantiate the symbols, all crows are black is equivalent to everything that is not black is not a crow. One way to see that that is a theorem is to see that whichever form you consider, a counterexample would consist of a crow that is not black.
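That equivalence is small enough to verify mechanically. A brute-force check over all truth assignments (a standard exercise, not anything specific to this thread):

```python
from itertools import product

def implies(a, b):
    # Material conditional: a => b is false only when a is true, b false.
    return (not a) or b

# (P => Q) and (not Q => not P) agree on every valuation of P and Q,
# so the contrapositive is logically equivalent to the original.
for p, q in product([False, True], repeat=2):
    assert implies(p, q) == implies(not q, not p)
print("contrapositive equivalent on all 4 valuations")
```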

But now the alternative formulation gives me an idea. It's not that easy to find crows, but it's really easy to find things that aren't black. Now there are about 150 million blades of grass in an acre of land, so I can go into my 1/8 acre back yard and find about 19 million non-black things (namely, blades of grass) that are not crows. That's waaaaay over what seemed like a reasonable threshold to establish that probably, everything that is not black is not a crow, which is logically equivalent to all crows are black. Hypothesis confirmed!

But seriously, can I prove that probably most crows are black -- let alone that definitely all crows are black -- by looking at blades of grass in my back yard? Of course not. So that shows that this reasoning is not valid, even if some forms of inductive reasoning are:

If inductive reasoning is valid why can't we go from "all observed masses follow Newton's law" to "therefore all masses follow Newton's law."?

I won't spoil the fun by resolving the paradox for you. Unless you want me to.

Looking at blades of grass won't help you because you have prior knowledge that blades of grass aren't crows, and actually looking at them provides you with no additional evidence that is not subsumed by your existing knowledge.

If you started picking random things in the universe without prior knowledge of whether they are crows, and then it turned out that they were all non-black non-crows, that would be evidence. It would be very weak evidence since the universe is filled with lots and lots of things, but if you kept doing it you'd be gathering more and more evidence and if you somehow managed to look at every object in the universe and they were all non-black non-crows (or black crows), you would indeed have proven the idea.

Looking at blades of grass won't help you because you have prior knowledge that blades of grass aren't crows, and actually looking at them provides you with no additional evidence that is not subsumed by your existing knowledge. If you started picking random things in the universe without prior knowledge of whether they are crows, and then it turned out that they were all non-black non-crows, that would be evidence.

Thanks for the statistically literate post. So please tell me,

If you started picking random things in the universe without prior knowledge of whether they are crows, and then it turned out that they were all non-black non-crows, that would be evidence.

by what rule of inference? If you say Bayes, it would be nice if you sketch your priors and your sampling method, to lend some plausibility to the answer.

That already describes my priors and sampling method.


crows

Sure, I get the crows, and have an opinion on it too, but I thought you were making a point about justification for physical laws uniquely?

Is there something that singles out the laws of physics as uniquely unjustifiable, or are you simply saying that you can't prove a physical law the same way you can't prove all crows are black?

Is there something that singles out the laws of physics as uniquely unjustifiable

This applies to all universal generalizations over any set with large numbers of members we cannot directly test. The first critical part of my top level post is this:

What you will find [in a statistics book] are principles that allow you to conclude from a certain number N of observations, that with confidence c, the proportion of positive cases is z, where c < 1 and z < 1. But there is no finite number of observations that would justify, with any nonzero confidence, that any law held universally, without exception (that is, z can never be 1 for any finite number of observations, no matter how small the desired confidence c is, unless c = 0).
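That z < 1 claim can be made concrete with a standard calculation (my sketch, using the exact binomial confidence bound, not anything from the post): if all N observed crows are black, the one-sided lower confidence bound L on the black proportion solves L**N = alpha, so L = alpha**(1/N), which is below 1 for every finite N:

```python
# Exact (Clopper-Pearson style) lower confidence bound when every one of
# N observations is positive: solve L**N = alpha for L.
alpha = 0.05  # 95% confidence
for n in (100, 100_000, 10**9):
    lower = alpha ** (1 / n)
    print(f"N = {n:>13,}: lower bound = {lower:.10f}")  # always < 1
```

The bound creeps toward 1 as N grows but never reaches it, which is exactly the c < 1, z < 1 situation the quoted passage describes.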

So, statistical arguments cannot establish universal generalizations; nothing unique to physics about that. The second critical part is what I said in my first reply to your first comment:

The principle of abductive inference says, in effect, if I cannot produce a counterexample, there probably are no counterexamples. This requires a certain level of facially hubristic confidence in the power of your mind, relative to the complexity of the system under study -- even if that form of reasoning would work on that same system when deployed by a sufficiently intelligent agent.

There is an old joke that is relevant to the application of the abductive inference principle [credit to Kan Kannan, my doctoral advisor]: I tried whiskey and coke, rum and coke, gin and coke, tequila and coke, and vodka and coke, and got drunk every time. Must be the coke! Maybe nobody would be that dim in real life, but the principle is real. When we are doing experiments to gather evidence for a universal principle (coke and anything gets you drunk), we might be too dim-witted to look where the counterexamples actually are.

Here is a real-world example. I once assigned a homework problem to write a function in Python that would compute the greatest common divisor of any two integers a and b, and test it on 5 inputs to see if it worked. One student evidently copied the pseudocode found on Wikipedia (which is fine; real life is open book and open google), and submitted this program:

def gcd(a, b):
    while b != 0:
        t = b
        b = a % b
        a = t
    return a

and these 5 test cases:

gcd(5,10) = 5
gcd(8,7) = 1
gcd(9,21) = 3
gcd(8,8) = 8
gcd(1000,2000) = 1000

He tested big numbers and little ones, first argument smaller than the second, second argument smaller than the first, both arguments the same, one a multiple of the other, and relatively prime arguments (having no common factors other than 1), and got correct answers in every case. So, in some ways it is a highly varied test suite -- but he probably could have written ten thousand test cases and still never found that the function is incorrect, because he systematically failed to think about negative numbers in the test suite, just like he did in his code (it gives the wrong answer for gcd(-10,-5)). In one way of looking at things, negative numbers are atypical (in that we don't bump into them as often in ordinary life), and many people wouldn't think to test them; but from an objective way of looking at things, he systematically ignored half of the number line, despite straining to come up with a highly varied test suite. Must be the coke!
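For completeness, a minimal fix (my own patch, not the student's): normalize the sign so the same Euclidean loop is correct on the half of the number line the test suite ignored:

```python
def gcd(a, b):
    # Same Euclidean algorithm as the student's version...
    while b != 0:
        a, b = b, a % b
    # ...but return a nonnegative result, so negative inputs work too.
    return abs(a)

print(gcd(-10, -5))  # 5, where the original returns -5
print(gcd(9, 21))    # 3, unchanged on the old test cases
```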

The point of the joke, and the example, is to illustrate how, when analyzing a complex system with nuanced twists and turns, we might not have enough ingenuity to look where the counterexamples to our hypothesis really are. But what counts as a "complex system with nuanced twists and turns" depends on the complexity of the system under investigation, relative to the mental acuity of the investigator. So, what right do we have to expect that our little brains are up to the task of finding the "bugs" in our hypotheses about the laws of nature, when they are just barely (sometimes) capable of finding the bugs in a six-line program that is wrong for fully half of its possible inputs? If the source code of the universe is that simple, relative to the power of the little meat computers between our ears, it would be a miracle.

I get that when the sample set is unbounded or unknown, we cannot attach a hard number to our confidence in a hypothesis about that set, but I don't see how the principle of abductive inference isn't a statistical argument. Isn't it just some kind of logic similar to a Bayesian update? You have some hypothesis, and every time you fail to find a counterexample, that is evidence for the hypothesis. Isn't it just as flawed when dealing with a potentially infinite number of crows?

Also, isn't a fundamental difference between crows and physics that we expect there to be universal laws? That is, we have no reason to believe there is a universal law governing the color of crows, but we do believe there are laws for how objects behave, right?

I take for granted that the universe is governed by unchanging laws

Next, I would suggest that the mathematical foundations of physics give reason to believe in universal laws. That is, the laws of physics are deeply related, and not as arbitrary as I think you imagine.

It's kind of like if you said, "All Euclidean triangles have internal angles summing to 180 degrees," and I said, "Well, I can imagine a triangle whose angles sum to 181 degrees." I might think that I can imagine a Euclidean triangle with 181 degrees, but really I can't.

If we were pulling Euclidean polygons out of a box, I think you would be justified in saying "all triangles in the box have internal angles summing to 180 degrees". This would have started out as an empirical observation: we would have pulled polygon after polygon out of the box, counted the sides, and measured the angles, and noticed that the ones with 3 sides always summed to 180 degrees. However, eventually someone would have discovered the mathematics that justifies saying all triangles have 180 degrees.

However, at the same time, it could be true that some of what we thought were triangles, weren't. Maybe they had a microscopic fourth side.

If we are pulling triangles out of the box, then they all have 180 degrees, and we will never see one that doesn't. But maybe they aren't triangles. And it turns out that the laws of physics are very, very much like this.


Like if I believe that apple pies can't spontaneously appear or disappear, by your reasoning do I have any non miraculous reason to believe that?

I think this issue turns out to be pretty deep. Note, first, that apple pie is not a natural kind in physics, and is not of a character that it ever could or would become a natural kind in the domain of physics. That is, you will not find any mention of "apple pie" in a physics text that is not interchangeable with, say, "blueberry pie". For example, there could be a problem that says "Suppose an apple pie weighs 2 kilograms, and falls from a height of twelve meters in a vacuum..." -- but in this case, the apple pie is interchangeable with any other common sense object that might weigh 2 kilograms, and is just there to make the problem more fun than if it were a falling rock, or a falling stick. On the other hand, if we changed kilograms to pounds, or "in a vacuum" to "in a pressure of one atmosphere", that would change the problem physically. So, to restate, apple pie is not a concept that is mentioned in any law of physics, nor a concept of the sort that would ever be mentioned in a law of physics.

In that light, an apple pie of all things popping into existence is categorically more unlikely, a priori, than the sorts of things that are explicitly ruled out by the laws of physics. Even a 2 kg object (in particular, of all weights) is not a natural kind in physics. The laws that actually prohibit apple pies from materializing and disintegrating -- viz., the law of conservation of matter and energy -- could, in theory, be violated in myriad ways that do not involve apple pies in particular, or flying teapots in particular, or objects that weigh 2 kg in particular. And I do stand by my argument in the case of the law of conservation of matter and energy.

I still wouldn't claim to have gotten to the bottom of it (of what makes something a candidate to be a natural kind in physics, that is), but I do think that my argument is only supposed to apply to propositions that are actually candidates to be laws of the physical sciences, and the Law of Conservation of Apple Pies, for whatever reason, does not have that property.

The so-called "scientific method" is, I think, very poorly understood.

Or you misunderstand everyone else's understanding of the scientific method.

The most common answers to these questions are "yes"

This shows you are talking to people with a layman's understanding of science, or people with more education who are speaking colloquially.

Anyone who has more than a high school education of physics should answer "No", if taking you literally.

Your whole post is predicated on the misconception that the things in science we have named "laws" are thought to be the rules of the universe; they aren't. Newton's law of universal gravitation isn't even universal: it fails in our own solar system, since it doesn't correctly describe the orbit of Mercury.

that any law held universally, without exception. And isn't that exactly what laws of nature are supposed to do?

No.

For Pete's sake it is called the law of universal gravitation, and it begins with the universal quantifier every

Don't take its name so literally.