
The scientific method rests on faith in God and Man.

The so-called "scientific method" is, I think, rather poorly understood. For example, let us consider one of the best-known laws of nature, often simply referred to as the Law of Gravity:

Newton's Law of Universal Gravitation: Every object in the universe attracts every other object toward it with a force proportional to the product of their masses, divided by the square of the distance between their centers of mass.

Now here is a series of questions for you, which I often ask audiences when I give lectures on the philosophy of science:

  1. Do you believe Newton's Law of Universal Gravitation is true?
  2. If so, how sure are you that it is true?
  3. Why do you believe it, with that degree of certainty?

The most common answers to these questions are "yes", "very sure", and "because it has been extensively experimentally verified." Those answers sound reasonable to any child of the Enlightenment -- but I submit, on the contrary, that this set of answers has no objective basis whatsoever. To begin with, let us ask: how many confirming experiments do you think would have to have been done to qualify as "extensive experimental verification"? I would ask that you, the reader, actually pick a number as a rough, round guess.

Whatever number N you picked, I now challenge you to state the rule of inference that allows you to conclude, from N uniform observations, that a given effect always follows from a given alleged cause. If you dust off your stats book and thumb through it, you will find no such rule of inference there. What you will find are principles that allow you to conclude, from a certain number N of observations, that with confidence c the proportion of positive cases is z, where c < 1 and z < 1. But there is no finite number of observations that would justify, with any nonzero confidence, the claim that a law holds universally, without exception (that is, z can never be 1 for any finite number of observations, no matter how small the desired confidence c is, unless c = 0). And isn't that exactly what laws of nature are supposed to do? For Pete's sake, it is called the law of universal gravitation, and it begins with the universal quantifier every (both of which may have seemed pretty innocuous up until now).
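
The claim about c and z above can be made concrete with a small sketch (the function name here is my own, for illustration): after N trials with zero failures, the exact binomial one-sided upper confidence bound on the failure rate is strictly positive for every finite N, so the certified proportion z of positive cases never reaches 1.

```python
def upper_bound_failure_rate(n_trials: int, confidence: float) -> float:
    """One-sided upper confidence bound on the failure probability,
    given n_trials uniform observations with zero failures.
    Solves (1 - p)^n = 1 - confidence for p (exact binomial)."""
    alpha = 1.0 - confidence
    return 1.0 - alpha ** (1.0 / n_trials)

# No matter how many confirming observations we pile up, the bound
# stays above zero -- z < 1 for every finite N.
for n in (100, 10_000, 1_000_000):
    print(n, upper_bound_failure_rate(n, confidence=0.95))
```

For large N the bound is roughly 3/N (the classical "rule of three"): a million flawless observations license, at best, the claim that failures are rarer than about three in a million -- never that they are impossible.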

Let me repeat myself for clarity: I am not saying that there is no statistical law that would allow you to conclude the law with absolute certainty; absolute certainty is not even on the table. I am saying that there is no statistical law that would justify belief in the law of universal gravitation with even one tenth of one percent of one percent confidence, based on any finite number of observations. My point is that the laws of the physical sciences -- laws like the ideal gas law, the law of gravity, Ohm's law, etc. -- are not based on statistical reasoning and could never be based on statistical reasoning, if they are supposed, with any confidence whatsoever, to hold universally.

So, if the scientific method is not based on the laws of statistics, what is it based on? In fact it is based on the

Principle of Abductive Inference: Given a general principle as a hypothesis, if we have tried to experimentally disprove the hypothesis and found no disconfirming results, then we may infer that it is likely to be true -- with confidence justified by the ingenuity and diligence that has been exercised in attempting to disprove it.

In layman's terms, if we have tried to find and/or manufacture counterexamples to a hypothesis, extensively and cleverly, and found none, then we should be surprised if we then find a counterexample by accident. That is the essence of the scientific method that underpins most of the corpus of the physical sciences. Note that it is not statistical in nature. The methods of statistics are very different, in that they rest on theorems that justify confidence in those methods, under assumptions corresponding to the premises of the theorems. There is no such theorem for the Principle of Abductive Inference -- nor will there ever be, because, in fact, for reasons I will explain below, it is a miracle that the scientific method works (if it works).

Why would it take a miracle for the scientific method to work? Remember that the confidence with which we are entitled to infer a natural law is a function of the capability and diligence we have exercised in trying to disprove it. Thus, to conclude a general law with some moderate degree of confidence (say, 75%), we must have done due diligence in trying to disprove it, to the degree necessary to justify that level of confidence, given the complexity of the system under study. But what in the world entitles us to think that the source code of the universe is so neat and simple, and its human denizens so smart, that we are capable of the diligence that is due?

For an illuminating analogy, consider that software testing is a process of experimentation that is closely analogous to scientific experimentation. In the case of software testing, the hypothesis being tested -- the general law that we are attempting to disconfirm -- is that a given program satisfies its specification for all inputs. Now do you suppose that we could effectively debug Microsoft Office, or gain justified confidence in its correctness with respect to one item of its specification, by letting a weasel crawl around on the keyboard while the software is running, and observing the results? Of course not: the program is far too complex, its behavior too nuanced, and the weasel too dimwitted (no offense to weasels) for that. Now, do you expect the source code of the Universe itself to be simpler and friendlier to the human brain than the source code of MS Office is to the brain of a weasel? That would be a miraculous thing to expect, for the following reason: a priori, the complexity of that source code could be arbitrarily large. It could be a googolplex lines of spaghetti code -- and that would be an infinitesimally small level of complexity, given the realm of possible complexities -- namely, the right-hand side of the number line.
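
The weasel's predicament can be put in numbers with a back-of-the-envelope sketch (the figures and the function name are hypothetical, chosen only to illustrate the shape of the problem): if a bug fires on some tiny fraction of a program's input space, the chance that N random probes all miss it is (1 - f)^N.

```python
def miss_probability(bug_fraction: float, n_tests: int) -> float:
    """Probability that n_tests uniformly random inputs all miss a bug
    that triggers on bug_fraction of the input space."""
    return (1.0 - bug_fraction) ** n_tests

# A bug triggered by one input in a trillion, probed by a million
# random keystrokes: the weasel almost surely never finds it.
print(miss_probability(bug_fraction=1e-12, n_tests=1_000_000))
```

Undirected probing only works when the tester's reach is commensurate with the system's complexity -- which is exactly the question at issue for humans and the universe.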

In this light, if the human brain is better equipped to discover the laws of nature than a weasel is to confidently establish the correctness of an item in the spec of MS Office, it would be a stunning coincidence. That is looking at it from the side of the a priori expected complexity of the problem, compared to any finite being's ability to solve it. But there is another side to look from, which is the side of the distribution of intelligence levels of the potential problem-solvers themselves. Obviously, a paramecium, for example, is not equipped to discover the laws of physics. Nor is an octopus, nor a turtle, nor a panther, nor an orangutan. In the spectrum of natural intelligences we know of, it just so happens that there is exactly one kind of creature that just barely has the capacity to uncover the laws of nature. It is as if some cosmic Dungeon Master were optimizing the problem from both sides, by making the source code of the universe just simple enough that the smartest beings within it (that we know of) were just barely capable of solving the puzzle. That is just the goldilocks situation that good DMs try to achieve with their puzzles: not so hard they can't be solved, not so easy that the players can't take pride in solving them.

There is a salient counterargument I must respond to. It might be argued that, while it is a priori unlikely that any finite being would be capable of profitably employing the scientific method in a randomly constructed universe, in hindsight of the scientific method having worked for us in this particular universe, we are now entitled, a posteriori, to embrace the Principle of Abductive Inference as a reliable method. My response is that we have no objective reason whatsoever to believe the scientific method has worked in hindsight -- at least not for the purpose of discovering universal laws of nature! I will grant that we have had pretty good luck with science-based engineering in the tiny little speck of the universe observable to us. I will even grant that this justifies the continued use of engineering for practical purposes with relative confidence -- under the laws of statistics, so long as, say, one anomaly per hundred thousand hours of use is an acceptable risk. But this gives no objective reason whatsoever (again, under the laws of statistics) to believe that any of the alleged "laws of nature" we talk about is actually a universal law. That is to say, if you believe, with even one percent confidence, that we ever have, or ever will, uncover a single line of the source code of the universe -- a single law of Nature that holds without exception -- then you, my friend, believe in miracles. There is no reason to expect the scientific method to work, and good reason to expect it not to work -- unless the human mind was designed to be able to uncover and understand the laws of nature, by Someone who knew exactly how complex they are.


A well thought-out post! However, I reject your Principle of Abductive Inference. The essence of science is falsification. Experiments cannot verify a hypothesis (it always remains just our best guess), but they can contradict and thus falsify a hypothesis. The hypothesis "all swans are white" cannot be verified by any number of white swans (because there may always be a non-white swan out there), but it is contradicted by the observation of a single black swan. Of course, the experiment itself is also just a best guess (maybe the swan is just painted black?). All knowledge is guesswork. However, the logical relationship of falsification holds (the hypothesis is logically contradicted by the experiment), while inductive inference is not logically sound (no amount of verification can "ground", "prove" or whatnot that the hypothesis is true).

For further reading along these lines, I recommend "The Logic of Scientific Discovery" by Karl Popper, or this shorter and more modern article: https://www.science.org/doi/10.1126/science.284.5420.1625

To answer your three questions:

  1. Yes, I believe Newton's Law of Universal Gravitation is true.

  2. How sure am I that it is true? Psychologically, very sure. Logically and rationally speaking, not at all, it's just a guess.

  3. Why do I believe it, and with that degree of certainty? I believe it because it has passed tests that other competing hypotheses have failed. This does not prove it to be true (with any degree of certainty), as you rightly point out, but given we accept the results of the tests, it makes it preferable to the competing hypotheses that fail those tests, because they are logically contradicted by those tests. So it's our best guess because its competitors have been eliminated by experiments, but it is not certain or probable or verified in any way.

Really, you are very close to my position on this, except you want experiments to do more than they can do, and are struggling to find a way for them to do what they cannot, namely provide justification/inference/certainty/likelihood for hypotheses. Experiments can contradict and thus falsify hypotheses, but they cannot justify them. Relinquish the demand for justification, and the logical situation is clean and sound: we make guesses, discard those guesses that don't stand up to experiments, and tentatively accept those that do.

Thanks for the reply.

I agree that my argument does not militate against your stated belief [Newton's L.O.G. is (merely) our best guess for how gravity works]. But I submit that you have good reason to believe something stronger than your stated belief, that in your heart you do believe something stronger than that (as you indicate: "Psychologically, very sure."), that you probably act as if you believe something stronger than that, and that you plan to deliberately continue acting that way. So, either you are acting irrationally (which I doubt), or something is missing from your formal statement (which I submit is probably the case).

For example, imagine you are told that a certain missile defense system uses a model of rocket flight that is based on Newton's law of gravity. Do you think it would be rational to posit that the system is unsafe, and to take costly action to avoid relying on it, specifically on the grounds that Newton's law of gravity is "just a guess" in which we are not entitled to a smidgeon of confidence? Now, by comparison, imagine that a system protecting your safety were based on the latest theory of a less well studied domain (say, quantum computing); in that case you might be justifiably concerned, and, if so, more likely to take costly actions to avoid relying on that system -- even though the theory it rests on, like Newton's law of gravity, might be our current best guess about the relevant domain.

Here is the point: (1) we have best guesses in many different domains of inquiry; (2) we have more confidence in our best guesses in some domains than we do in others, and (3) it is prima facie instrumentally rational to act on those differences. So, if our stated beliefs are to be consistent with our actions, which we fully intend to continue taking while regarding ourselves as rational, then we should be able to say something stronger than that the law of gravity is merely our best guess in the relevant domain. If we find ourselves unable to justify saying anything stronger, then we have important epistemological work to do.

For further reading along these lines, I recommend "The Logic of Scientific Discovery" by Karl Popper,

As I recall, Popper held that repeated, failed attempts to disprove a hypothesis count as evidence for its truth (though never certain evidence). Am I mistaken?

The first thing I should clarify is that I think that scientific hypotheses, despite evidence never being able to elevate them above the status of a guess, can be true, really, absolutely true. If we guess right! So if you say aliens exist and I say they don't, we are both guessing (but not randomly: we are motivated, but not justified, by our other background beliefs). But either aliens exist or they don't. So despite both of us just guessing, one of us is right and has hit upon the truth, the absolute truth. So while Newton's L.O.G. is just a guess from an epistemological standpoint, I am also tentatively accepting it as true. I claim it really is true, and I act upon that belief, although my belief in that is just a guess. Does that satisfy what you felt was missing from my position?

As for your question on the missile defense system example. So let's say I'm choosing between two courses of action based on two different scientific hypotheses. If one of those hypotheses has passed its empirical tests and the other hasn't, the logical situation is very clear: logic and reason dictate that I reject the hypothesis that has been falsified by the tests, since the tests logically contradict the hypothesis. The hypothesis that has passed its tests I can tentatively accept as true, and I prefer the course of action based on that hypothesis. If both hypotheses have passed all their tests, I would try to conceive of a test that distinguishes between them (a test that one fails but the other doesn't). If this is not possible, then the logical situation is also clear, however: if both hypotheses have passed all their tests, the evidence tells us exactly nothing about which one we should accept -- we have to decide what to believe.

And this is a crucial aspect of my position: rationality and logic cannot tell us what to believe: we have to make that decision. Reason can, however, tell us what not to believe: we should not believe contradictory things, or in this case hypotheses that are contradicted by test results we accept. Rationality does not provide justifications that tell us what to believe. Rationality is the method, namely the method of critical evaluation and, when possible, empirical testing, which serves to eliminate some of our ideas, hopefully leaving us with true ones. Yes, it'd be great if we could be justified in believing what we believe, but we can't. So we are left with conjectures that we attempt to separate from error by criticism and empirical testing, using logic and reason, with the goal of believing true things. We are rational, in the sense that we use reason and logic to criticize our ideas and hopefully eliminate errors, and our goal is the truth -- we aim at having true beliefs. But we can never know that our beliefs are true; we can only guess at the truth, and use reason as best we can to eliminate the guesses that are untrue.

Does this answer your questions? Feel free to ask more if I've been unclear. There are various complications I didn't want to go into (like differences in the severity of empirical tests) for the sake of clarity.

As I recall, Popper held that repeated, failed attempts to disprove a hypothesis count as evidence for its truth (though never certain evidence). Am I mistaken?

You are mistaken, but it's a common mistake. In Popper's and my view, corroborating evidence does nothing, but contradicting evidence falsifies (although also without any degree of certainty).

So if you say aliens exist and I say they don't, we are both guessing (but not randomly: we are motivated, but not justified, by our other background beliefs). But either aliens exist or they don't. So despite both of us just guessing, one of us is right and has hit upon the truth, the absolute truth. So while Newton's L.O.G. is just a guess from an epistemological standpoint, I am also tentatively accepting it as true.

The fact that you have guessed right, or that you may have guessed right, does not entail that you are rationally licensed to embrace the proposition (I think you agree with this). For example, if a tarot card reader told me that I was going to get a job offer today, and I believed her and acted on it by taking out a car loan, and if she turned out to be right by sheer luck, my action would still be irrational.

To clarify my position in this light, I never said that the physical laws we have in our corpus are all false, or anything of that sort. I said that we are not entitled to any rational confidence in them -- just as I am not entitled to any rational confidence in a tarot card reading (unless I am mistaken about that practice), even though they may sometimes be right as well -- except to the extent we also believe in miracles.

Success rates matter.

If tarot reading worked as consistently as physics or math, then boy would that be something.

(Now social sciences, well…)

Science as a method frequently involves guessing and dumb luck and accidental discovery. But then the point is systematically testing findings and examining new evidence and ideas. Tarot reading doesn’t have iterative improvement going on.

Success rates matter.

The success rate of science in enabling improvements to our material lives is pretty good. The success rate of science in yielding justifiable nonzero confidence in universal natural laws may be zero. Can you defend the proposition that it is not? It would be a compelling refutation of my argument if someone were to give a single universal natural law of the physical world -- take your pick -- and give an objective argument why we should have greater than zero confidence in its literal truth. Now that I think about it, that is the straightforward path to refuting my argument, and it is notable that no one has attempted to take it.

A word of advice if you proceed: don't waste your time trying to use Bayesian reasoning; you will not get a nonzero posterior unless you have a nonzero prior, and that would be begging the question. And don't bother trying to use parametric statistics, because no finite number of observations will get you there.
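
The point about priors can be seen in a toy calculation (illustrative only, not a model of any real inference): with a prior of exactly zero, Bayes' rule returns a posterior of exactly zero, no matter how strong or how numerous the confirming observations.

```python
def bayes_update(prior: float, likelihood: float, evidence: float) -> float:
    """Posterior probability of hypothesis H after observing data D:
    P(H|D) = P(D|H) * P(H) / P(D)."""
    return likelihood * prior / evidence

# A million strongly confirming observations, starting from a zero prior:
posterior = 0.0
for _ in range(1_000_000):
    posterior = bayes_update(posterior, likelihood=0.99, evidence=0.5)
print(posterior)  # 0.0
```

The multiplication by the prior is the whole story: nonzero posterior confidence has to be smuggled in at the start, which is why the Bayesian route begs the question here.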

yielding justifiable nonzero confidence in universal natural laws may be zero

I’m failing to understand why this is a bar any epistemology needs to clear.

Science as a method verifiably works at improving our material lives because it produces sufficiently accurate information. The utility is the payoff, but the correlation to reality is what enables it.

if someone were to give a single universal natural law of the physical world -- take your pick -- and give an objective argument why we should have greater than zero confidence in its literal truth.

Where does math fit here under “physical world”?

The thing you seem to be doing is putting forth a standard no epistemology can satisfy. It’s not like pure math and logic don’t have identified paradoxes and limitations. Just ask Bertrand Russell.

How about the finding that nothing with mass can exceed the speed of light? This is something backed by math and logic, as well as experimentation. If it were otherwise physics would break, is my layman’s understanding anyway.

Is that sufficiently “universal”?

There are a lot of “universal” rules in physics, so long as you stay at the atomic level. (The quantum domain also has its rules, but they don’t break the atomic ones altogether.)

How about the finding that nothing with mass can exceed the speed of light? This is something backed by math and logic, as well as experimentation. If it were otherwise physics would break, is my layman’s understanding anyway... Is that sufficiently “universal”?

It sure is. Thanks for taking me up on the offer.

I am looking for objective evidence of the theory, Nullius in verba [Latin: No one's words (will be trusted)]. If you claim something is a theorem, show me the proof. If you claim something is experimentally verified, describe the experimental design and its results. What we have here is an appeal to authority claiming that the theory is "backed by math and logic" or that "physics would break" if it were untrue, omnes in verbo [all on the word (of authority)].

I would not be so demanding that I ask anyone to perform experiments, or even look up experimental data in literature, for the purpose of making a "Motte" post. A plausible (but concrete) story of what such evidence would look like -- in evidence of any theory of your choice -- would be enough to rebut my argument.

An appeal to authority is warranted here: rebutting your argument doesn't actually hinge on the truth of the theory; it hinges on whether it is possible for experimental evidence to justify a belief in the correspondence of a theory and reality. If it is, there are cases where the logic of the theory enforces universality.

To wit, taking Newton's law as an example (and supposing we only knew classical mechanics), would we be justified in saying that the masses we observe behave as per his theory?

I'm not saying universally, merely the things we've observed locally.

If so, it turns out there are other cases, where if we are justified in believing the theory, the theory says things about the universe as a whole.

If you don't believe we can go from experimental evidence to justified belief in theory, then we have bigger problems.


Well I’m a layman at physics, so I’d suggest finding someone who can lay out the math, theory, and experimentation that shows it is impossible for any object with mass to travel faster than the speed of light.

My layman’s understanding is that the fundamental properties of spacetime, mass, and energy as we understand them via Special Relativity make it impossible.

Here’s a bunch of physics nerds describing how it would violate causality:

https://physics.stackexchange.com/questions/671516/proof-for-impossibility-of-ftl-signals


I’m failing to understand why this is a bar any epistemology needs to clear... science as a method verifiably works at improving our material lives because it produces sufficiently accurate information. The utility is the payoff, but the correlation to reality is what enables it.

I did not say that any epistemology needed to clear that bar. If your position is that science is a collection of useful fictions, and that discerning the (literally true) laws of nature falls outside the scope of its business, then your position is immune to my argument. For myself, I am a little more romantic about the goals of science.

You’re applying a rigid categorization of “fact or fiction” to an area where the practicality of “all models are wrong; some are useful” is the typical approach.

You’re calling for perfection or it’s fiction, when science has been building knowledge bit by bit. Things can have shades of gray.

Obviously, understanding the Ultimate Nature of Reality and Its Universal Laws is a fine goal, but the way to get there is almost certainly a pretty messy process.


@NelsonRushton: As I recall, Popper held that repeated, failed attempts to disprove a hypothesis count as evidence for its truth (though never certain evidence). Am I mistaken?

@squeecoo: You are mistaken, but it's a common mistake. In Popper's and my view, corroborating evidence does nothing, but contradicting evidence falsifies (although also without any degree of certainty).

Seeing as we recall the text differently, I was probing for a source there (other than yourself). I am not convinced that I was mistaken. Popper defines corroboration as a diligent attempt to disprove a hypothesis:

So long as a theory withstands detailed and severe tests and is not superseded by another theory in the course of scientific progress, we may say that it has ‘proved its mettle’ or that it is ‘corroborated’ [Popper, "The Logic of Scientific Discovery", p. 10]

He goes on to say that the degree of corroboration, which he views as the merit of the theory, increases with the number of non-disconfirming experiments:

When trying to appraise the degree of corroboration of a theory we may reason somewhat as follows. Its degree of corroboration will increase with the number of its corroborating instances. [Popper, "The Logic of Scientific Discovery", p. 268]

If there is a difference between what Popper said, and what I said he said, it would be that I used the word "truth". Fair enough, but so did you:

@squeecoo: I think that the mission of science is to discover the actual, literal truth.

and I do not see how the following claim could be correct, in light of the quotes above: "In Popper's view,... corroborating evidence does nothing". [emphasis added]

You said that Popper thinks corroboration (failed attempts to falsify a hypothesis) counts as evidence for its truth. Instead, Popper says that theories cannot be verified. The first sentence of the chapter you quote is:

Theories are not verifiable, but they can be ‘corroborated’. [Popper, "The Logic of Scientific Discovery", p. 248]

In the footnote soon after:

I introduced the terms ‘corroboration’ (‘Bewährung’) and especially ‘degree of corroboration’ (‘Grad der Bewährung’, ‘Bewährungsgrad’) in my book because I wanted a neutral term to describe the degree to which a hypothesis has stood up to severe tests, and thus ‘proved its mettle’. By ‘neutral’ I mean a term not prejudging the issue whether, by standing up to tests, the hypothesis becomes ‘more probable’ [Popper, "The Logic of Scientific Discovery", p. 249]

And finally, here's Popper stating the difference between psychological questions of one's state of mind (that one can be "very certain") and epistemological questions of the state of the evidence, where evidence cannot verify hypotheses.

Like inductive logic in general, the theory of the probability of hypotheses seems to have arisen through a confusion of psychological with logical questions. Admittedly, our subjective feelings of conviction are of different intensities, and the degree of confidence with which we await the fulfilment of a prediction and the further corroboration of a hypothesis is likely to depend, among other things, upon the way in which this hypothesis has stood up to tests so far—upon its past corroboration. But that these psychological questions do not belong to epistemology or methodology is pretty well acknowledged even by the believers in probability logic. [Popper, "The Logic of Scientific Discovery", p. 252]

So corroboration is a measure of how well-tested a theory is, and the severity of the tests it has undergone. But corroboration does not provide evidence for the truth of the hypothesis. Here's a quote from Popper, "Objective Knowledge", 21f:

From a rational point of view we should not "rely" on any theory, for no theory has been shown to be true, or can be shown to be true. ... in spite of the "rationality" of choosing the best-tested theory as a basis of action, this choice is not "rational" in the sense that it is based upon good reasons for expecting that it will in practice be a successful choice: there can be no good reasons in this sense, and this is precisely Hume's result.

I like my Popper but I hate looking for quotes - I'm much more interested in the substance of the discussion we're having and the view I've outlined as a response to yours.

Thanks for the researched response. I think I finally understand the disagreement now.

@NelsonRushton: As I recall, Popper held that repeated, failed attempts to disprove a hypothesis count as evidence for its truth (though never certain evidence). Am I mistaken?

As you point out, Popper does not regard repeated experiments as progressively raising our confidence in the probability that the theory is true; his notion of the merit of a theory is much more nuanced than "probability of truth". So that is where my statement differs from his view; I am convinced now that I was mistaken and thank you for pointing it out.

@squeecoo: In Popper's and my view, corroborating evidence does nothing, but contradicting evidence falsifies (although also without any degree of certainty).

But I believe you are also mistaken, and your view differs from Popper's in a more profound way. If you open an electronic copy of Popper's book (https://philotextes.info/spip/IMG/pdf/popper-logic-scientific-discovery.pdf), hit ctrl-f, and search for "degree of corroboration", you will find that that phrase occurs 84 times -- about once every five pages for the length of the book. So, while his notion of merit is not defined in terms of truth or probability of truth, he does hold that repeated, diligent, failed attempts to disprove a theory tend to progressively confirm its merit (or, to use his word, its "mettle") -- which is a far cry from doing nothing. For Popper, non-disconfirming experiments do something (viz., "corroborate"), and a greater number of such experiments do more of that thing:

Its [the theory's] degree of corroboration will increase with the number of its corroborating instances. [Popper, "The Logic of Scientific Discovery", p. 268]

If I read you correctly, you seem to believe that there should be no difference in our willingness to act on a theory after one rigorous non-disconfirming experiment, versus 1000 of them by 1000 different researchers using different methods and bringing different perspectives and skill sets to the table (say, Newton's law of gravity vs. some new law of quantum computing). Do I read you incorrectly (or did you perhaps misspeak)?

@squeecoo: I think that quantum computing has been only weakly tested and I'm not willing to bet on it working for my missile defense system.

Ok that is a relief to hear, but it is not consistent with your other statement above (corroborating evidence does nothing), so it seems you misspoke.

But I believe you are also mistaken, and your view differs from Popper's in a more profound way. If you open an electronic copy of Popper's book (https://philotextes.info/spip/IMG/pdf/popper-logic-scientific-discovery.pdf), hit ctrl-f, and search for "degree of corroboration" you will find that that phrase occurs 84 times -- about once every five pages for the length of the book. So, while his notion of merit is not defined in terms of truth or probability of truth, he does hold that repeated, diligent, failed attempts to disprove a theory tend to progressively confirm its merit (or to use his word, its "mettle") -- which is a far cry from doing nothing.

Sure, Popper is developing the idea of degree of corroboration in that book, so he mentions it a lot. But no degree of corroboration can change the epistemic status of a theory, which always remains a conjecture. Like I said, it's a common mistake, and Popper shares some of the blame for it by speaking about "preference" in the context of corroboration, which sounds a lot like justification, or like we "rationally ought" to believe the better tested theory as if it had a greater likelihood of being true, or something like that. Popper did a lot to muddy the waters here. But corroboration is a measure of the state of the critical discussion, and not in any way a measure of the justification, reliability, probability, etc. of a theory. With regard to the epistemic status of a theory being adjusted by evidence, which is what is relevant to our discussion, corroboration does nothing. Here's Popper saying it outright, in Objective Knowledge 1972 (1979 revised edition), p. 18:

By the degree of corroboration of a theory I mean a concise report evaluating the state (at a certain time t) of the critical discussion of a theory, with respect to the way it solves its problems; its degree of testability; the severity of the tests it has undergone; and the way it has stood up to these tests. Corroboration (or degree of corroboration) is thus an evaluating report of past performance. Like preference, it is essentially comparative: in general, one can only say that the theory A has a higher (or lower) degree of corroboration than a competing theory B, in the light of the critical discussion, which includes testing, up to some time t. Being a report of past performance only, it has to do with a situation which may lead us to prefer some theories to others. But it says nothing whatever about future performance, or about the "reliability" of a theory.

As for the missile example:

@squeecoo: I think that quantum computing has been only weakly tested and I'm not willing to bet on it working for my missile defense system.

Ok, that is a relief to hear, but it is not consistent with your other statement above (that corroborating evidence does nothing), so it seems you misspoke.

This would be my conjecture, motivated in part by how poorly tested quantum computing is, but not justified or "based" on that. It's my best guess that has taken into consideration the evaluation of the state of the critical discussion on quantum computing (how well corroborated it is), but is not justified by it and remains a guess/conjecture. We can certainly take the degree of corroboration into consideration when deciding what to believe, but it can never elevate our beliefs beyond the status of conjecture, and it is in this epistemological sense that corroborating evidence does nothing.

But it says nothing whatever about future performance, or about the "reliability" of a theory.

I think I see now why I, like many people, misread Popper. Frankly, I think the position he expresses here is so egg-headed that I did not anticipate it. He implicitly conditions future performance (aka reliability) on justified confidence in general, literal truth, and so winds up concluding that theories of the physical world have only two levels of reliability: known false, and other. This position hamstrings his theory of corroboration with respect to establishing a rational basis for action -- and that moves him to the bottom of my reading list for philosophy of science. It's not that his work has no intellectual merit (it's all very interesting); it's just that I have better things to do, because I am interested in science as a rational basis for discriminating between alternative courses of action, and in philosophy of science as an articulated theory of the rules of evidence for doing so.

It appears that Popper (1) accepts the essence of my argument in the original post, but (2) doesn't believe in miracles -- which commits him to his position on reliability and future-performance, and also makes his theory of corroboration impotent as a basis for rational action. I share his view of (1) but not (2).

For clarity, do you agree with Popper on this (that corroboration says nothing whatever about the future performance of a theory)?

Yup, you got it. There's no establishing a rational basis for action, it cannot be done. You have done a good job articulating some of the obstacles to this in your original post. We can, however, still use reason and logic in the method of eliminating errors in the pursuit of truth. That's Popper's insight.

A small note: there is no "known false" category. Falsification is not justified either, it is as conjectural as anything else. So yes, justification doesn't work, and there is no rational basis to be had. But we can still engage in the rational pursuit of truth, in the sense of using reason and experience to temper our conjectures about the world.

As for your future reading, go with your interests, of course, but I can still recommend this short article articulating this position: https://www.science.org/doi/10.1126/science.284.5420.1625

The beauty and clarity of Popper's view is relinquishing justification and the search for a "basis", which reason and rationality are not capable of providing, but still maintaining rationality, empiricism, and the pursuit of truth. It's worth keeping in mind at least, as a possible different path that eschews the use of justification and "good reasons" but retains the use of reason and truth as the aim of science. If ever you stop believing in miracles, you need not despair of reason just yet, give Popper's view a shot first :)

I'll leave you with a final Popper quote:

And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: ‘How can we hope to detect and eliminate error?’ The question of the sources of our knowledge, like so many authoritarian questions, is a genetic one. It asks for the origin of our knowledge, in the belief that knowledge may legitimize itself by its pedigree. The nobility of the racially pure knowledge, the untainted knowledge, the knowledge which derives from the highest authority, if possible from God: these are the (often unconscious) metaphysical ideas behind the question. My modified question, ‘How can we hope to detect error?’ may be said to derive from the view that such pure, untainted and certain sources do not exist, and that questions of origin or of purity should not be confounded with questions of validity, or of truth. …. The proper answer to my question ‘How can we hope to detect and eliminate error?’ is, I believe, ‘By criticizing the theories or guesses of others and – if we can train ourselves to do so – by criticizing our own theories or guesses.’ …. So my answer to the questions ‘How do you know? What is the source or the basis of your assertion? What observations have led you to it?’ would be: ‘I do not know: my assertion was merely a guess. Never mind the source, or the sources, from which it may spring – there are many possible sources, and I may not be aware of half of them; and origins or pedigrees have in any case little bearing upon truth. But if you are interested in the problem which I tried to solve by my tentative assertion, you may help me by criticizing it as severely as you can; and if you can design some experimental test which you think might refute my assertion, I shall gladly, and to the best of my powers, help you to refute it.

So while Newton's L.O.G. is just a guess from an epistemological standpoint, I am also tentatively accepting it as true. I claim it really is true, and I act upon that belief, although my belief in that is just a guess. Does that satisfy what you felt was missing from my position?

The difference I was trying to elucidate with the missile defense system example was a difference in the degree of confidence you would have between two theories A and B, both of which have been tested, neither of which has been disconfirmed, but one of which has been tested more thoroughly (or, for whatever reason, you have more confidence in). The crucial issue is a difference in degrees of confidence (or what Popper called degree of corroboration) between two hypotheses, neither of which has been falsified.
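To make the 1-test vs. 1000-tests contrast concrete, here is a minimal sketch of one way an inductivist could assign graded confidence to a hypothesis as non-disconfirming observations accumulate: Laplace's rule of succession, which assumes a uniform prior over the unknown success rate and independent trials. These assumptions (and the whole inductive framing) are exactly what Popper rejects, so this is an illustration of the opposing position, not of anything in Popper:

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Laplace's rule of succession: the posterior probability that the
    next trial succeeds, after observing `successes` out of `trials`,
    starting from a uniform prior on the success rate."""
    return Fraction(successes + 1, trials + 2)

# One passed test vs. a thousand passed tests, none failed:
p_after_1 = rule_of_succession(1, 1)          # 2/3
p_after_1000 = rule_of_succession(1000, 1000)  # 1001/1002
```

On this model the two situations warrant very different degrees of confidence (about 0.67 vs. about 0.999), yet neither probability ever reaches 1 for any finite number of trials -- which mirrors the point in the original post that no finite evidence can establish a universal law with certainty.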

The hypothesis that has passed its tests I can tentatively accept as true, and I prefer the course of action based on that hypothesis. If both hypotheses have passed all their tests, I would try to conceive of a test that distinguishes between them.

This is not the situation I was describing. In the hypothetical, the two laws are in different domains (gravity vs. quantum computing), possibly for different purposes (say, missile defence vs. airplane autopilot) and one is better established (or better corroborated) than the other.

Like I said, if both theories A and B have passed all their tests, the evidence says nothing about them. We are free to tentatively accept them as true. We don't have to, though - my guess might be that quantum computing theory is not true, or it might be that I think that quantum computing has been only weakly tested and I'm not willing to bet on it working for my missile defense system. That's fine, but that is the part where I conjecture/guess at the truth. We don't disagree about my mental process; it's just that I think it's conjectural and not warranted by the evidence. The evidence can't tell me what to think, which bet to make, or which hypothesis to prefer; the evidence can only contradict a hypothesis and thus force me to reject it if I accept the evidence as true. Everything else is me making my best guess. I'm free to describe my mental state as "very confident" in that process, but that describes my state of mind, not the state of the evidence.

if both theories A and B have passed all their tests, the evidence says nothing about them. We are free to tentatively accept them as true. We don't have to, though - my guess might be that quantum computing theory is not true, or it might be that I think that quantum computing has been only weakly tested and I'm not willing to bet on it working for my missile defense system.

I think I am beginning to understand your position better. So, here is my question. Do you think that the preference for acting on a better-tested theory over acting on a worse-tested theory is an arbitrary, subjective preference -- like some people like chocolate, some people like vanilla, different strokes? I assert that it is only rational to be more willing to act on a better tested theory.

We are free to tentatively accept them as true. We don't have to, though.

When did anybody ever have to accept a theory? By "have to" do you mean "rationally ought to"? If "rationally ought to" is what you mean, then, as I said, I disagree.

I think I am beginning to understand your position better. So, here is my question. Do you think that the preference for acting on a better-tested theory over acting on a worse-tested theory is an arbitrary, subjective preference -- like some people like chocolate, some people like vanilla, different strokes? I assert that it is only rational to be more willing to act on a better tested theory. When did anybody ever have to accept a theory? By "have to" do you mean "rationally ought to"? If "rationally ought to" is what you mean, then, as I said, I disagree.

Questions of subjective/objective are always tricky, and I can answer this question on several different levels. Those who think rationality can lead to justified beliefs think that justification and evidence can make it so that we objectively rationally ought to believe a justified theory, as you say. Popper and I reject this. Theories (or beliefs in general) cannot be justified. At all. However, if we are interested in finding the truth (and this is also a subjective goal, one might be more interested in, say, propaganda), we should try to eliminate any erroneous beliefs that we have, and our tool for this is rational criticism and experiments. So we should try to deploy these tools as much as we can if we are interested in the truth, and we thus want our theories to be as severely tested as possible. No matter how well-tested, however, our theories remain conjectures tempered by rational criticism.

We are also not mandated by reason (in Popper's view of science) to prefer the better-tested theory. It's not the case that we rationally ought to accept the better tested theory. We could for example be super stoked about a poorly tested theory in preference to a better tested one - but the thing to do then is to try and come up with stronger tests of our preferred poorly tested theory, since in the search for truth we should try to test our theories as strongly as possible in order to eliminate error. This is subjective in the sense that our preference for a theory is our decision, but it's not like a preference for an ice cream flavor - we deploy rational evaluation and empirical experiments to the best of our ability in order to try to guess at the truth and eliminate errors, which we do not do in our ice cream preferences. This use of the rational method of criticism in the search for truth is what makes the difference and what makes our decision rational in the sense of using critical reasoning, although this provides no objective justification for our decision and it does not tell us what we rationally ought to believe.

Those who think rationality can lead to justified beliefs think that justification and evidence can make it so that we objectively rationally ought to believe a justified theory

There is a nuance to my position that this glosses over. In my view, scientific epistemology is not just a matter of ought vs. ought not; it is a matter of rationally obligatory degrees of preference for better tested theories, on a continuum. However, when one theory is better tested than another on this continuum, and on some occasion we have to choose between the two, then we rationally ought to trust the better tested theory on that occasion.

This is subjective in the sense that our preference for a theory is our decision, but it's not like a preference for an ice cream flavor

If I understand your position correctly, it is an awful lot like the preference among ice cream flavors. Let's say you have to choose from chocolate, vanilla, and strawberry -- but you know the strawberry is poisoned. So strawberry is not a viable choice, but the choice between chocolate and vanilla remains wholly subjective. Similarly, (in your view as I understand it) when choosing among alternative theories to act on, the choice among those theories that have not been disconfirmed is a subjective preference as much as chocolate vs. vanilla.

For example, suppose a person has a choice between action A and action B, and that their goal in making that choice is to maximize the likelihood that they will continue living. Action A maximizes their chance of surviving if a certain viable (tested, not disconfirmed) theory is true, and B maximizes their chance of surviving if a certain other viable theory, in another domain, is true. They know one of those theories is substantially better confirmed than the other by every relevant criterion (say, the law of gravity vs. the most recent discovery in quantum computing). I say there is only one rational action in that scenario (trust the better tested theory). Do you say the same or different?

My position is that no actions or beliefs are "rational" in this sense, of being justified or mandated by reason. Actions or beliefs can be rational in the sense that we have deployed the method of rational criticism (and, if possible, empirical testing) in order to eliminate errors, with no justification/warrant/likelihood/etc. being involved at any point. So the contents of a belief don't determine its rationality (reason doesn't tell you what to believe), but the methods we have used in order to try to find errors in that belief can be rational. A choice can be rational if we've employed critical thinking in making it, and this is the only sense in which decisions can be rational, since justification is not possible.

In comparison to ice cream preference, yes, both are arbitrary in the sense we have to judge for ourselves (we are the arbiters of) what to believe/which icecream to like. But we generally don't employ critical discussion and experimentation in our ice cream choices, although we certainly can. Again, it's the methods of critical analysis and experimentation that are rational, and a decision can made with deliberation and with the use of reason, in contrast to a preference for ice cream which usually does not involve this. But the beliefs or actions themselves can never be rational in the sense of justified, warranted, mandated by reason, etc.

As for your "law of gravity vs. the most recent discovery in quantum computing" example, it's slightly confusing to me. Does option B that uses quantum computing go against the law of gravity? If so, I would reject it, since I believe the law of gravity to be true (tentatively, without justification). Or does option B use both the law of gravity and quantum computing? In that case I'm not really choosing between gravity and quantum computing, but whether to additionally also use quantum computing in my plan, in which case how well-tested quantum computing is compared with gravity is not really relevant, since I'm using gravity as well.

In general, my view of the preference for the better-tested theory (and my reading of Popper's opinion here) is that this is soft, rule-of-thumb methodological advice, but not a "rationally ought" rule. Since we want to test our theories as severely as possible in order to hopefully eliminate error, all else being equal we should prefer the better tested theory - but not in the sense of "rationally ought" but in the sense of "let's test as much as possible". But all else is rarely equal, and "better tested" is not an exact calculation. So it is sort of like the advice "it's a good idea to castle your king in chess". Yes, that's good advice, but it's not necessarily always the best choice, and you are not "irrational" for deciding not to castle. A clearer formulation of this advice has been advanced by Miller, Popper's former student, who formulates this stuff much more dryly than Popper but in a way more suited to the style of modern analytical philosophy (Out of Error, p. 124):

Prefer the practical proposal that best survives critical scrutiny is more transparent and more obviously sound advice than Act on the best-tested theory, which is often not real advice at all. What must not be admitted is the suggestion that a proposal that has been subjected to critical scrutiny, and has survived it, thereby qualifies as a better proposal than one that has not been subjected to critical scrutiny. That would convict deductivism not only of inductivism but of clairvoyance, and even inductivists and justificationists can be expected to resist a claim at once so far-seeing and so reactionary. Even the advice Prefer the practical proposal that best survives critical scrutiny is defective in this respect. Since subjecting a proposal to criticism is itself a practical action of a kind, it cannot, on pain of infinite regress, always be ill advised to try something yet untried. It is not of course being suggested that it is a mistake to prefer or to adopt the best-criticized proposal, only that it need not be a mistake not to do so. At this point considerations of utility often intervene. The correct advice is, as usual, negative: Refrain from any practical proposal that does not survive critical scrutiny as well as others do. Observe that someone who rejects this advice will at once be vulnerable to critical attack.
