
NelsonRushton

1 follower   follows 0 users   joined 2024 March 18 00:39:23 UTC

Doctorate in mathematics, specializing in probability theory, from the University of Georgia. Masters in AI from the University of Georgia. 15 years as a computer science professor at Texas Tech. Now I work as a logician for an AI startup. Married with one son. He's an awesome little dude.

I identify as an Evangelical Christian, but many Evangelicals would say that I am a deist mystic, and that I am going to Hell. Spiritually, the difference between me and Jordan Peterson is that I believe in miracles. The difference between me and Thomas Paine (an actual deist mystic) is that I believe the Bible is a message to us from the Holy Spirit, and the difference between me and Billy Graham is that I think there is noise in the signal.


				

User ID: 2940

this post is too smug by half.

Can you please give some examples from the text of the smugness and shady thinking, perhaps with comments, to make this more concrete?

Is this just because gravitation is claimed to be "universal" e.g. for all we know, gravity could suddenly change to work differently tomorrow, or work differently as soon as we leave the solar system?

Yes, it is because of the claim of universality, but this is a different issue than skepticism about induction and causality a la Hume, or the laws of nature turning on a dime. It could be that even yesterday, there were unobserved exceptions to any physical law we think we know. In fact, the point of my argument is that we have no (non-miraculous) reason to doubt that there were.

Is it? Maybe since I live in this world, I am corrupted by it and I can't imagine it any differently. But: I cannot imagine a world where the scientific method doesn't work.

What I claimed is that we have no non-miraculous reason to believe that the scientific method works, for purposes of inferring universal generalizations, even in this world.

Incidentally, I think this is the deepest and most informed comment in the thread so far.

Sure, the things we call the "laws of nature" may not be the true causal description of the universe at some level. What matters is that the universe acts as if they were universally true, as best we can tell.

This may be the view of many scientists who think about the epistemology of science if you pin them down (their motte!), but I think if you talk to people walking down the street, they think we are in the business of discovering natural laws that are actually true. I suspect that when we are not pinned down, scientists like to think that we are searching for truth ourselves (our bailey!), and it seems like the phrase "May not be the true causal description... at some level [emphasis added]" hedges against giving up that bailey. As I recall, the word for not-true is "false", unqualified by levels.

If you would affirm that science has no hope of attaining even tentative knowledge of natural laws that are literally true -- but instead that its mission is purely to discover useful (but presumptively fictitious) models of the physical world -- then that position is consistent with my argument, with or without miracles. From the post, I am perhaps a little more than halfway confident you would affirm that, but I am not sure, and I'd like to know.

I don't understand how this is different from skepticism in general. Like if I believe that apple pies can't spontaneously appear or disappear, by your reasoning do I have any non miraculous reason to believe that?

It is different from more aggressive forms of skepticism in that I take for granted that the universe is governed by unchanging laws and that inductive reasoning is valid in theory. The principle of abductive inference says, in effect, if I cannot produce a counterexample, there probably are no counterexamples. This requires a certain level of facially hubristic confidence in the power of your mind, relative to the complexity of the system under study -- even if that form of reasoning would work on that same system for a sufficiently intelligent agent.

I must admit, though, that the law of conservation of apple pies strikes me as pretty non-miraculous. I will think that over and get back to you.

Good info. Another factor that exacerbates this problem is exchange rates. If you look at the GDP of China, it is 17.7 trillion nominal, but 33 trillion measured in PPP (link: https://en.wikipedia.org/wiki/China). That is to say, if you take the total output of the Chinese economy in yuan, and convert the yuan to dollars at market exchange rates (a guy grows a bag of oranges and sells them to his neighbor for 15 yuan = 2.1 dollars), you'd get 17.7 trillion dollars -- but if you took the actual stuff that they make and sold it in the US for dollars (the guy grows the same bag of oranges and sells it to you for 4.2 dollars), you'd get 33 trillion. Why the difference? Because the dollar is, still for now, the world's preferred reserve currency; ergo, people want dollars more than they want yuan; dollars have a sole source (the US Govt), and so a dollar buys a lot of oranges, and a lot of everything else, on the world market.

This is great for people in the US who buy imported manufactured goods -- but is not so great for people who work manufacturing jobs in the US. If it costs a US auto worker $500 a month to feed his family, it only costs the competing Chinese laborer $268 (dollar equivalent) to buy the same amount of food. The same goes for other necessities across the board, so the Chinese worker can comfortably work for just over half of what the American worker does (in dollars) and enjoy the same standard of living. Pretty hard for American car companies to compete under those conditions.
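To make the arithmetic concrete, here is a small Python sketch of the orange example. It is a hypothetical illustration using only the numbers above; the exchange rate is the one those numbers imply:

```python
# Hypothetical sketch of the nominal-vs-PPP gap, using only the numbers from
# the orange example; the exchange rate is the one those numbers imply.
yuan_price = 15.0          # what the grower's neighbor pays, in yuan
usd_at_market_rate = 2.1   # the same 15 yuan converted at market exchange rates
usd_price_in_us = 4.2      # what the identical bag of oranges fetches in the US

implied_fx = yuan_price / usd_at_market_rate       # ~7.14 yuan per dollar
ppp_factor = usd_price_in_us / usd_at_market_rate  # 2.0: each market-rate dollar
                                                   # of Chinese output buys about
                                                   # twice the "stuff"

nominal_gdp = 17.7  # China's GDP in trillions of USD at market exchange rates
print(f"implied exchange rate: {implied_fx:.2f} yuan/USD")
print(f"PPP GDP implied by the example: {nominal_gdp * ppp_factor:.1f} trillion")
```

The example's round factor of two slightly overshoots the measured ratio (33 / 17.7 is about 1.86), since the orange prices are rounded for illustration.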

the Scientific Method is just a bounded, modestly idiot-proofed form of Bayesian reasoning.

I do not see anything Bayesian about the scientific method. When I pick up the text for the lab component of a college course in physics or chemistry, and look to see if there are any priors, conditional probabilities, or posteriors written down in it, I predict that the median number of Bayesian inferences I will find over the course of 15 experiments is zero. Here is one such text: https://www.lehman.edu/faculty/kabat/PHY166.pdf (I selected it because it was the top hit in my Google search that had a full PDF, but if you think I'm cherry-picking, you are welcome to try a different one). There is no Bayesian reasoning in that text, nor do I recall ever seeing any in the half dozen lab science courses I took in high school and college. I think the same will be true if you look, not at an undergraduate course, but in a physics or chemistry journal.

But if what physicists are really doing is a special case of Bayesian inference, I find it peculiar that they do not seem to know what they are doing, because they sure don't talk about it that way. So I'm curious what makes you think they are. It is a pretty important question to me, because if you can show me that typical forms of experimental reasoning in the physical sciences are Bayesian, or in any way probabilistic or statistical, that would disprove the miraculous aspect of their success.

It's not going away unless we eventually make some groundbreaking metaphysical discovery, and it's always going to be an inherent Achilles' heel of Positivism in particular and Empiricism in general.

I'm not dabbing on Empiricism. I am an empiricist. I'm dabbing on cold blooded materialism in conjunction with claims of the possibility of knowing universal natural laws.

Correct me if I am wrong, self_made_human, but it seems to me that the unstated premise of your position is this: if someone holds an uncertain belief, and then they see something, and they revise their degree of certainty based on what they saw, and if they are acting rationally, then they must be doing Bayesian updating. Do you affirm that?

I think a lot of people fall into the trap of thinking probabilities are the only rational way of representing and reasoning with uncertain information because, unless they take an AI class, it is the only method covered in a typical undergraduate curriculum. This leaves them with the impression that "probability" means degree of belief, that "probability theory" means the logic of reasoning about degrees of belief, and that the problem of the right way to do such reasoning has been settled. If all of that were true, and if Bayes' rule were the only way to update beliefs using probability theory, then the unstated premise above would be correct. The problems are that (1) none of that is true, and (2) even when we use probability theory to update our beliefs, we are not always using Bayes' rule.

Probability theory is actually a specific set of axioms that constitutes one particular way of reasoning about degrees of belief. There are well-developed alternatives to probability theory -- including certainty factors (as used in Mycin: https://en.wikipedia.org/wiki/Mycin), Dempster-Shafer evidence theory, backpropagation (as used in large language models such as ChatGPT), and many others, which are often more effective than probability theory for particular applications -- none of which use Bayes' formula or can even be incidentally described by it. Moreover, even among belief-updating methods that do use probability theory, the most frequently used approach in scientific literature is parametric statistics -- which (as I point out in a separate reply) does not use Bayesian updating.

If you claim that physicists, for example, routinely use Bayesian updating, and you claim to hold that belief for a good reason, then you should be able to give evidence that they are thinking in terms of conditional probabilities (satisfying the axioms of probability) and updating them by Bayes equation -- which is a much more specific claim than that they merely change their degrees of belief after making observations in an effective manner.
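For concreteness, here is a minimal instance of Bayesian updating -- the specific procedure at issue, as opposed to merely "revising one's beliefs after seeing evidence." The numbers are invented for illustration:

```python
# A minimal instance of Bayesian updating. All numbers are invented.
prior = 0.5            # P(H): prior degree of belief in hypothesis H
p_e_given_h = 0.9      # P(E|H): probability of the observation if H is true
p_e_given_not_h = 0.2  # P(E|~H): probability of the observation if H is false

# Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior))
print(f"posterior P(H|E) = {posterior:.3f}")  # 0.818
```

To substantiate the claim that physicists routinely do this, one would need to find the analogues of `prior`, `p_e_given_h`, and `posterior` in their actual reasoning.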

What is a p-value? It's not a urine dipstick test I can tell you. Still can't read most papers without tripping over one. Don't ask me if they're using it in frequentist or Bayesian terms, but there's a conditional probability for you.

It is not generally true that "you can't read most papers without tripping over one [p-value]". There is a shred of truth to this in medicine and the social sciences, but not in the physical sciences. More importantly, I think the duality you are looking for is parametric vs. Bayesian, not frequentist vs. Bayesian. The tool of p-values is part of parametric statistics, which is the main alternative to Bayesian statistics. If you see a paper with a p-value, it means they are not using Bayesian updating -- so the thing you keep tripping over is evidence against your thesis.

Finally, p-values, while they may not be urine dipsticks, are also not conditional probabilities. A conditional probability is the probability of A given B, where A and B are events in a probability space. A p-value, on the other hand, is the probability of an event A in a probability space conditioned on the value of a fixed-but-unknown parameter of that probability space. That is why parametric statistics does not use Bayes rule.
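To make the contrast concrete, here is a p-value computed by hand for an invented coin experiment (16 heads in 20 flips -- made up for illustration, not from any paper). The probability is conditioned on a fixed value of the parameter, the null hypothesis theta = 0.5, and no prior over theta appears anywhere:

```python
from math import comb

# One-sided p-value for an invented experiment: 16 heads in 20 coin flips.
# The probability is conditioned on a FIXED parameter value (theta = 0.5,
# the null hypothesis); no prior distribution over theta is involved.
n, k = 20, 16
theta = 0.5

# p-value: P(X >= k) under Binomial(n, theta)
p_value = sum(comb(n, j) * theta**j * (1 - theta)**(n - j)
              for j in range(k, n + 1))
print(f"p = {p_value:.4f}")  # 0.0059
```

Note that nothing in this calculation updates a degree of belief via Bayes' rule; theta is treated as fixed but unknown throughout.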

Thanks for the reply.

I agree that my argument does not militate against your stated belief [Newton's L.O.G. is (merely) our best guess for how gravity works]. But I submit that you have good reason to believe something stronger than your stated belief, that in your heart you do believe something stronger than that (as you indicate: "Psychologically, very sure."), that you probably act as if you believe something stronger than that, and that you plan to deliberately continue acting that way. So, either you are acting irrationally (which I doubt), or something is missing from your formal statement (which I submit is probably the case).

For example, imagine you are told that a certain missile defense system uses a model of rocket flight that is based on Newton's law of gravity. Do you think it would be rational to posit that the system is unsafe, and to take costly action to avoid relying on it, specifically on the grounds that Newton's law of gravity is a "just a guess" in which we are not entitled to a smidgeon of confidence? Now, by comparison, imagine that a system protecting your safety were based on the latest theory of a less well studied domain (say, quantum computing); in that case you might be justifiably concerned about that, and, if so, be more likely to take costly actions to avoid relying on that system -- even though the theory it rests on, like Newton's law of gravity, might be our current best guess about the relevant domain.

Here is the point: (1) we have best guesses in many different domains of inquiry; (2) we have more confidence in our best guesses in some domains than we do in others, and (3) it is prima facie instrumentally rational to act on those differences. So, if our stated beliefs are to be consistent with our actions, which we fully intend to continue taking while regarding ourselves as rational, then we should be able to say something stronger than that the law of gravity is merely our best guess in the relevant domain. If we find ourselves unable to justify saying anything stronger, then we have important epistemological work to do.

For further reading along these lines, I recommend "The Logic of Scientific Discovery" by Karl Popper,

As I recall, Popper held that repeated, failed attempts to disprove a hypothesis count as evidence for its truth (though never certain evidence). Am I mistaken?

the difference between one-hundred percent philosophical certainty and something merely being true enough that you can put a satellite into orbit with it isn't a meaningful distinction to most people... I'll be honest, it's not a terribly meaningful distinction to me either.

To be clear, my central claim has nothing to do with 100% certainty. It has to do with nonzero certainty of full generality. The difference is the difference between being 100% sure that at least one apple in the barrel is rotten (full confidence in nonzero generality), and being 1% sure that all of the apples in the barrel are rotten (low, but nonzero confidence in full generality).

Faith is believing in something that can neither be proven nor disproven with existing methods.

I think the idea of faith as firm belief without evidence is a power grab by fundamentalist clergymen who do not want to be questioned on their axioms. My idea of faith is something entirely different. When the Marines say semper fidelis (always faithful), they are not talking about firm belief in the Marine Corps without evidence; they are talking about doing their duty in relation to the Corps.

To illustrate, imagine (God forbid) my wife is in a coma, it is unknown whether she is brain dead, and it is unknown whether she can perceive what is going on around her. I have two choices: (1) I can say, well, she may well already be gone, so I am going to leave her alone in the bed indefinitely, or (2) I can say she might be in there; if she is, she wants to hear from me, so I will go sit by her and hold her hand and talk to her until the situation is resolved for better or worse. That is faith in the sense of faithfulness (Greek pistis, Latin fidelis) in the relationship -- even a relationship with someone who may or may not be in there (or out there), as long as there is hope.

Like if I believe that apple pies can't spontaneously appear or disappear, by your reasoning do I have any non miraculous reason to believe that?

I think this issue turns out to be pretty deep. Note, first, that apple pie is not a natural kind in physics, and is not of a character that it ever could or would become a natural kind in the domain of physics. That is, you will not find any mention of "apple pie" in a physics text that is not interchangeable with, say, "blueberry pie". For example, there could be a problem that says "Suppose an apple pie weighs 2 kilograms, and falls from a height of twelve meters in a vacuum..." -- but in this case, the apple pie is interchangeable with any other common-sense object that might weigh 2 kilograms, and is just there to make the problem more fun than if it were a falling rock, or a falling stick. On the other hand, if we changed kilograms to pounds, or "in a vacuum" to "in a pressure of one atmosphere", that would change the problem physically. So, to restate, apple pie is not a concept that is mentioned in any law of physics, nor a concept of the sort that would ever be mentioned in a law of physics.

In that light, an apple pie of all things popping into existence is categorically more unlikely, a priori, than the sorts of things that are explicitly ruled out by the laws of physics. Even a 2 kg object (in particular, of all weights) is not a natural kind in physics. The laws that actually prohibit apple pies from materializing and disintegrating -- viz. the law of conservation of matter and energy -- could, in theory, be violated in myriad ways that do not involve apple pies in particular, or flying teapots in particular, or objects that weigh 2 kg in particular. And I do stand by my argument in the case of the law of conservation of matter and energy.

I still wouldn't claim to have gotten to the bottom of it (of what makes something a candidate to be a natural kind in physics, that is), but I do think that my argument is only supposed to apply to propositions that are actually candidates to be laws of the physical sciences, and the Law of Conservation of Apple Pies, for whatever reason, does not have that property.

This is a reply jointly to several comments, so I will put it as a new semi-top-level post. Several of the responses, including (what I consider) the most thoughtful ones, from @sqeecoo and @Gillitrut, point in the direction that the mission of science is not to discover natural laws that are literally true, but to produce useful fictions -- stories about the world that we are better off believing and acting on. That position, if you really believe it, is immune from my argument. But if you take that position, and at the same time embrace the study of science, then you cannot, at the same time, argue against theism on the grounds that it is literally false.

Science is stories about the world that we are better off acting on. This phrasing seems better to me. In this way, can't I argue against theism (whatever you mean by that) by saying "acting on theism doesn't make us better off"?

Yes, feel free. But not (under the premises I described) on the grounds that there is no objective evidence that God actually exists (since that is also true of universal gravitation).

Is this like a hypocrisy claim? That since science isn't literally true it would be hypocritical to criticize theism for not being literally true?

Yes, that's what I'm saying.

If there is some causally inert god or gods out there, who do not interact with our reality in an empirically testable way, I am not that concerned with their existence.

God's pronouns are He/Him. (For the sarcasm-impaired, that's a joke)

This is a good question.

If inductive reasoning is valid why can't we go from "all observed masses follow Newton's law" to "therefore all masses follow Newton's law."?

I think this puts the burden of proof in a strange place. The question is always why we should be able to make the inference, and according to what articulable rule of inference. But I will pick up the burden of proof and try to explain why we can't make the inference from "all observed P are Q" to "all P are Q", using the Raven Paradox.

Imagine that I see a few crows and note that they are all black, and I form the hypothesis that all crows are black. I begin to seriously pursue the matter by looking for crows, counting them, and noting their color. How many crows would I need to see, all of which are black, before I can conclude that all crows are black, or, more conservatively, that probably (more than 50% likely) all crows are black? Pick a number you think is reasonable. I'll say a hundred thousand; that sounds conservative.

Now the following is a theorem of first order logic: (for all x, P(x) => Q(x)) <=> (for all x, -Q(x) => -P(x)). Or to instantiate the symbols, all crows are black is equivalent to everything that is not black is not a crow. One way to see that that is a theorem is to see that whichever form you consider, a counterexample would consist of a crow that is not black.
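The equivalence can also be verified mechanically. Here is a brute-force check in Python over all four truth assignments:

```python
# Brute-force check of the contrapositive equivalence used above:
# (P => Q) is equivalent to (not Q => not P), over all truth assignments.
def implies(a, b):
    # Material implication: P => Q is false only when P is true and Q is false.
    return (not a) or b

for P in (False, True):
    for Q in (False, True):
        assert implies(P, Q) == implies(not Q, not P)
print("P => Q and not Q => not P agree on all four assignments")
```

The only assignment on which either side is false is P true, Q false -- i.e., a crow that is not black, matching the counterexample argument above.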

But now the alternative formulation gives me an idea. It's not that easy to find crows, but it's really easy to find things that aren't black. Now there are about 150 million blades of grass in an acre of land, so I can go into my 1/8 acre back yard and find about 19 million non-black things (namely, blades of grass) that are not crows. That's waaaaay over what seemed like a reasonable threshold to establish that probably, everything that is not black is not a crow, which is logically equivalent to all crows are black. Hypothesis confirmed!

But seriously, can I prove that probably most crows are black -- let alone that definitely all crows are black -- by looking at blades of grass in my back yard? Of course not. So that shows that this reasoning is not valid, even if some forms of inductive reasoning are:

If inductive reasoning is valid why can't we go from "all observed masses follow Newton's law" to "therefore all masses follow Newton's law."?

I won't spoil the fun by resolving the paradox for you. Unless you want me to.

Is there something that singles out the laws of physics as uniquely unjustifiable?

This applies to all universal generalizations over any set with large numbers of members we cannot directly test. The first critical part of my top level post is this:

What you will find [in a statistics book] are principles that allow you to conclude from a certain number N of observations, that with confidence c, the proportion of positive cases is z, where c < 1 and z < 1. But there is no finite number of observations that would justify, with any nonzero confidence, that any law held universally, without exception (that is, z can never be 1 for any finite number of observations, no matter how small the desired confidence c is, unless c = 0).
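One way to put a number on the quoted point is the classical "rule of three": if N independent observations all conform to a hypothesized law, a 95% upper confidence bound on the rate of exceptions is still roughly 3/N -- positive for every finite N, shrinking toward zero but never reaching it:

```python
# Sketch of the quoted point via the classical "rule of three": after N
# observations with zero exceptions, ~3/N is an approximate 95% upper
# confidence bound on the exception rate. It is positive for every finite N.
for n in (100, 10_000, 1_000_000):
    upper_bound = 3 / n
    print(f"N = {n:>9,}: exceptions could still occur at a rate of ~{upper_bound:.6f}")
```

No finite N drives the bound to zero, which is the statistical face of the claim that z can never be 1.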

So, statistical arguments cannot establish universal generalizations; nothing unique to physics about that. The second critical part is what I said in my first reply to your first comment:

The principle of abductive inference says, in effect, if I cannot produce a counterexample, there probably are no counterexamples. This requires a certain level of facially hubristic confidence in the power of your mind, relative to the complexity of the system under study -- even if that form of reasoning would work on that same system when deployed by a sufficiently intelligent agent.

There is an old joke that is relevant to the application of the abductive inference principle [credit to Kan Kannan, my doctoral advisor]: I tried whiskey and coke, rum and coke, gin and coke, tequila and coke, and vodka and coke, and got drunk every time. Must be the coke! Maybe nobody would be that dim in real life, but the principle is real. When we are doing experiments to gather evidence for a universal principle (coke and anything gets you drunk), we might be too dim-witted to look where the counterexamples actually are.

Here is a real-world example. I once assigned a homework problem to write a function in Python that would compute the greatest common divisor of any two integers a and b, and test it on 5 inputs to see if it worked. One student evidently copied the pseudocode found on Wikipedia (which is fine; real life is open book and open Google), and submitted this program:

def gcd(a, b):
    while b != 0:
        t = b
        b = a % b
        a = t
    return a

and these 5 test cases:

gcd(5,10) = 5
gcd(8,7) = 1
gcd(9,21) = 3
gcd(8,8) = 8
gcd(1000,2000) = 1000

He tested big numbers and little ones, the first argument smaller than the second, the second argument smaller than the first, both arguments the same, one a multiple of the other, and the two relatively prime (having no common factors other than 1), and got correct answers in every case. So, in some ways it is a highly varied test suite -- but he probably could have written ten thousand test cases and still never found that the function is incorrect, because he systematically failed to think about negative numbers in the test suite, just as he did in his code (it gives the wrong answer for gcd(-10, -5)). In one way of looking at things, negative numbers are atypical (in that we don't bump into them as often in ordinary life), and many people wouldn't think to test them; but from an objective way of looking at things, he systematically ignored half of the number line, despite straining to come up with a highly varied test suite. Must be the coke!
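For contrast, a corrected version is a one-line fix -- shown here as a sketch, following the convention (as in Python's math.gcd) that the result is nonnegative:

```python
# A corrected version of the student's function (a sketch, not his code).
# Taking absolute values up front handles the negative inputs his test suite
# missed, following the convention -- as in Python's math.gcd -- that the
# greatest common divisor is reported as a nonnegative number.
def gcd(a, b):
    a, b = abs(a), abs(b)
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(-10, -5))  # 5, where the uncorrected version returns -5
```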

The point of the joke, and the example, is to illustrate how, when analyzing complex systems with nuanced twists and turns, we might not have enough ingenuity to look where the counterexamples to our hypothesis really are. But what counts as a "complex system with nuanced twists and turns" depends on the complexity of the system under investigation, relative to the mental acuity of the investigator. So, what right do we have to expect that our little brains are up to the task of finding the "bugs" in our hypotheses about the laws of nature, when they are just barely (sometimes) capable of finding the bugs in a six-line program that is wrong for fully half of its possible inputs? If the source code of the universe is that simple, relative to the power of the little meat computers between our ears, it would be a miracle.

So while Newton's L.O.G. is just a guess from an epistemological standpoint, I am also tentatively accepting it as true. I claim it really is true, and I act upon that belief, although my belief in that is just a guess. Does that satisfy what you felt was missing from my position?

The difference I was trying to elucidate with the missile defense system example was a difference in the degree of confidence you would have between two theories A and B, both of which have been tested, neither of which has been disconfirmed, but one of which has been tested more thoroughly (or, for whatever reason, you have more confidence in). The crucial issue is a difference in degrees of confidence (or what Popper called degree of corroboration) between two hypotheses, neither of which has been falsified.

The hypothesis that has passed its tests I can tentatively accept as true, and I prefer the course of action based on that hypothesis. If both hypotheses have passed all their tests, I would try to conceive of a test that distinguishes between them

This is not the situation I was describing. In the hypothetical, the two laws are in different domains (gravity vs. quantum computing), possibly for different purposes (say, missile defense vs. airplane autopilot), and one is better established (or better corroborated) than the other.

I don't see how the principle of abductive inference isn't a statistical argument.

Good question. To answer it, we have to have a concrete picture of what statistical arguments really are, and not just a vague intuition that says "make observations and allow them to change your beliefs" (see also this post: https://www.themotte.org/post/907/the-scientific-method-rests-on-faith/195677?context=8#context).

Statistical arguments are based, first and foremost, on random samples, and this is a premise of the theorems that justify statistical methods. Abductive inference is not based on random samples. On the contrary, abduction is based on decidedly nonrandom samples, chosen in a deliberate search for counterexamples. In a random sample, you must pick with your eyes closed or the test is no good, and sample size is crucial; in abduction, you must cherry-pick as the devil's advocate, trying to disprove the hypothesis, or the test is no good. This means you must be an effective enough advocate to have a good chance of finding counterexamples if they actually exist -- which is why abductive inference is not objective evidence, but rests on an article of faith in the capabilities of the reasoner, as an effective advocate, to disprove the hypothesis in case it is false.
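The contrast can be illustrated with the buggy gcd function from the earlier example. In this sketch (the sampling ranges are my own invention), random sampling from the region the tester happened to imagine never exposes the bug, while a single devil's-advocate probe of the neglected region finds it immediately:

```python
import random

# The student's buggy gcd from the earlier example (fails on negative inputs).
def buggy_gcd(a, b):
    while b != 0:
        a, b = b, a % b
    return a

random.seed(0)

# "Random sampling" -- but only over the region the tester thought to look at
# (nonnegative inputs), so the bug never surfaces:
found_bug = any(buggy_gcd(random.randint(0, 10**6), random.randint(1, 10**6)) < 0
                for _ in range(10_000))
print("bug found by 10,000 random nonnegative tests:", found_bug)  # False

# One devil's-advocate probe of the ignored half of the number line:
print("gcd(-10, -5) =", buggy_gcd(-10, -5))  # -5: bug exposed on the first try
```

The random suite is large but blind; the adversarial probe is tiny but aimed, which is the whole point of the distinction above.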

@NelsonRushton: As I recall, Popper held that repeated, failed attempts to disprove a hypothesis count as evidence for its truth (though never certain evidence). Am I mistaken?

@sqeecoo: You are mistaken, but it's a common mistake. In Popper's and my view, corroborating evidence does nothing, but contradicting evidence falsifies (although also without any degree of certainty).

Seeing as we recall the text differently, I was probing for a source there (other than yourself). I am not convinced that I was mistaken. Popper defines corroboration as a diligent attempt to disprove a hypothesis:

So long as a theory withstands detailed and severe tests and is not superseded by another theory in the course of scientific progress, we may say that it has ‘proved its mettle’ or that it is ‘corroborated’ [Popper, "The Logic of Scientific Discovery", p. 10]

He goes on to say that the degree of corroboration, which he views as the merit of the theory, increases with the number of non-disconfirming experiments:

When trying to appraise the degree of corroboration of a theory we may reason somewhat as follows. Its degree of corroboration will increase with the number of its corroborating instances. [Popper, "The Logic of Scientific Discovery", p. 268]

If there is a difference between what Popper said, and what I said he said, it would be that I used the word "truth". Fair enough, but so did you:

@sqeecoo: I think that the mission of science is to discover the actual, literal truth.

and I do not see how the following claim could be correct, in light of the quotes above: "In Popper's view,... corroborating evidence does nothing". [emphasis added]

Thanks for the researched response. I think I finally understand the disagreement now.

@NelsonRushton: As I recall, Popper held that repeated, failed attempts to disprove a hypothesis count as evidence for its truth (though never certain evidence). Am I mistaken?

As you point out, Popper does not regard repeated experiments as progressively raising our confidence in the probability that the theory is true; his notion of the merit of a theory is much more nuanced than "probability of truth". So that is where my statement differs from his view; I am convinced now that I was mistaken and thank you for pointing it out.

@sqeecoo: In Popper's and my view, corroborating evidence does nothing, but contradicting evidence falsifies (although also without any degree of certainty).

But I believe you are also mistaken, and your view differs from Popper's in a more profound way. If you open an electronic copy of Popper's book (https://philotextes.info/spip/IMG/pdf/popper-logic-scientific-discovery.pdf), hit ctrl-f, and search for "degree of corroboration", you will find that that phrase occurs 84 times -- about once every five pages for the length of the book. So, while his notion of merit is not defined in terms of truth or probability of truth, he does hold that repeated, diligent, failed attempts to disprove a theory tend to progressively confirm its merit (or to use his word, its "mettle") -- which is a far cry from doing nothing. For Popper, non-disconfirming experiments do something (viz., "corroborate"), and a greater number of such experiments does more of that thing:

Its [the theory's] degree of corroboration will increase with the number of its corroborating instances. [Popper, "The Logic of Scientific Discovery", p. 268]

If I read you correctly, you seem to believe that there should be no difference in our willingness to act on a theory after one rigorous non-disconfirming experiment versus 1,000 of them by 1,000 different researchers using different methods and bringing different perspectives and skill sets to the table (say, Newton's law of gravity vs. some new law of quantum computing). Do I read you incorrectly (or did you perhaps misspeak)?

@squeecoo: I think that quantum computing has been only weakly tested and I'm not willing to bet on it working for my missile defense system.

OK, that is a relief to hear, but it is not consistent with your other statement above ("corroborating evidence does nothing"), so it seems you misspoke.

@squeecoo: if both theories A and B have passed all their tests, the evidence says nothing about them. We are free to tentatively accept them as true. We don't have to, though - my guess might be that quantum computing theory is not true, or it might be that I think that quantum computing has been only weakly tested and I'm not willing to bet on it working for my missile defense system.

I think I am beginning to understand your position better. So, here is my question: do you think that the preference for acting on a better-tested theory over a worse-tested one is an arbitrary, subjective preference? Like, some people like chocolate, some people like vanilla, different strokes? I assert that it is only rational to be more willing to act on a better-tested theory.

We are free to tentatively accept them as true. We don't have to, though.

When did anybody ever have to accept a theory? By "have to", do you mean "rationally ought to"? If "rationally ought to" is what you mean, then, as I said, I disagree.

Why can't I let the other shoe drop and say that "theism is literally false" is also a story about the world that we are better off believing and acting on?

As far as I'm concerned, you are welcome to make that argument; be my guest. I just said that, under certain premises, you couldn't rationally make the other one.

So if you say aliens exist and I say they don't, we are both guessing (but not randomly: we are motivated, but not justified, by our other background beliefs). But either aliens exist or they don't. So despite both of us just guessing, one of us is right and has hit upon the truth, the absolute truth. So while Newton's L.O.G. is just a guess from an epistemological standpoint, I am also tentatively accepting it as true.

The fact that you have guessed right, or that you may have guessed right, does not entail that you are rationally licensed to embrace the proposition (I think you agree with this). For example, if a tarot card reader told me that I was going to get a job offer today, and I believed her and acted on it by taking out a car loan, and if she turned out to be right by sheer luck, my action would still be irrational.

To clarify my position in this light, I never said that the physical laws in our corpus are all false, or anything of that sort. I said that we are not entitled to any rational confidence in them -- just as I am not entitled to any rational confidence in a tarot card reading (unless I am mistaken about that practice), even though such readings may sometimes be right as well -- except to the extent that we also believe in miracles.