NelsonRushton

1 follower   follows 0 users   joined 2024 March 18 00:39:23 UTC

Doctorate in mathematics, specializing in probability theory, from the University of Georgia. Masters in AI from the University of Georgia. 15 years as a computer science professor at Texas Tech. Now I work as a logician for an AI startup. Married with one son. He's an awesome little dude.

I identify as an Evangelical Christian, but many Evangelicals would say that I am a deist mystic, and that I am going to Hell. Spiritually, the difference between me and Jordan Peterson is that I believe in miracles. The difference between me and Thomas Paine (an actual deist mystic) is that I believe the Bible is a message to us from the Holy Spirit, and the difference between me and Billy Graham is that I think there is noise in the signal.


				

User ID: 2940

Is there something that singles out the laws of physics as uniquely unjustifiable

This applies to all universal generalizations over any set with a large number of members that we cannot directly test. The first critical part of my top level post is this:

What you will find [in a statistics book] are principles that allow you to conclude from a certain number N of observations, that with confidence c, the proportion of positive cases is z, where c < 1 and z < 1. But there is no finite number of observations that would justify, with any nonzero confidence, that any law held universally, without exception (that is, z can never be 1 for any finite number of observations, no matter how small the desired confidence c is, unless c = 0).

So, statistical arguments cannot establish universal generalizations; nothing unique to physics about that. The second critical part is what I said in my first reply to your first comment:

The principle of abductive inference says, in effect, if I cannot produce a counterexample, there probably are no counterexamples. This requires a certain level of facially hubristic confidence in the power of your mind, relative to the complexity of the system under study -- even if that form of reasoning would work on that same system when deployed by a sufficiently intelligent agent.

There is an old joke that is relevant to the application of the abductive inference principle [credit to Kan Kannan, my doctoral advisor]: I tried whiskey and coke, rum and coke, gin and coke, tequila and coke, and vodka and coke, and got drunk every time. Must be the coke! Maybe nobody would be that dim in real life, but the principle is real. When we are doing experiments to gather evidence for a universal principle (coke and anything gets you drunk), we might be too dim-witted to look where the counterexamples actually are.

Here is a real-world example. I once assigned a homework problem to write a function in Python that would compute the greatest common divisor of any two integers a and b, and test it on 5 inputs to see if it worked. One student evidently copied the pseudocode found on Wikipedia (which is fine; real life is open book and open Google), and submitted this program:

def gcd(a, b):
    while b != 0:
        t = b
        b = a % b
        a = t
    return a

and these 5 test cases:

gcd(5,10) = 5
gcd(8,7) = 1
gcd(9,21) = 3
gcd(8,8) = 8
gcd(1000,2000) = 1000

He tested big numbers and little ones, first argument smaller than the second, second argument smaller than the first, both arguments the same, one a multiple of the other, and the two relatively prime (having no common factors other than 1), and got correct answers in every case. So in some ways it is a highly varied test suite -- but he probably could have written ten thousand test cases and still never found that the function is incorrect, because he systematically failed to think about negative numbers in the test suite, just as he had in his code (it gives the wrong answer for gcd(-10,-5)). In one way of looking at things, negative numbers are atypical (in that we don't bump into them as often in ordinary life), and many people wouldn't think to test them; but looked at objectively, he systematically ignored half of the number line, despite straining to come up with a highly varied test suite. Must be the coke!
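To make the failure mode concrete, here is a minimal sketch (the student's function reproduced from above, plus his five test values) showing that the whole suite passes while a single case drawn from the neglected half of the number line exposes the bug. The use of assert and the comparison with math.gcd are just for illustration; nothing beyond the function above is assumed.

import math

def gcd(a, b):          # the student's function, reproduced verbatim
    while b != 0:
        t = b
        b = a % b
        a = t
    return a

# The student's five "varied" test cases: all non-negative, all pass.
assert gcd(5, 10) == 5
assert gcd(8, 7) == 1
assert gcd(9, 21) == 3
assert gcd(8, 8) == 8
assert gcd(1000, 2000) == 1000

# One case from the ignored half of the number line:
print(gcd(-10, -5))        # prints -5; the greatest common divisor is 5
print(math.gcd(-10, -5))   # prints 5, for comparison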

The point of the joke, and the example, is to illustrate how, when analyzing a complex system with nuanced twists and turns, we might not have enough ingenuity to look where the counterexamples to our hypothesis really are. But what counts as a "complex system with nuanced twists and turns" depends on the complexity of the system under investigation, relative to the mental acuity of the investigator. So, what right do we have to expect that our little brains are up to the task of finding the "bugs" in our hypotheses about the laws of nature, when they are just barely (sometimes) capable of finding the bugs in a six-line program that is wrong for fully half of its possible inputs? If the source code of the universe is that simple, relative to the power of the little meat computers between our ears, it would be a miracle.

Rule utilitarianism sets rules that protect individual liberty as a bulwark against oppression and as a safety valve.

It only does this in the context of valid arguments that protecting individual liberty is in fact such a bulwark/safety-valve, and I don't believe such arguments exist. It is very tempting to think they exist, because I agree with their conclusions, but I do not believe that is how people actually defend those principles in practice. For example, ...

In my mind, the US constitution is a good representation of rule utilitarianism.

My response to this has a lot in common with my response to @coffee_enjoyer above [https://www.themotte.org/post/966/why-rule-utilitarianism-fails-as-a/205363?context=8#context]. I love the US constitution, but I do not think it has much to do with rule utilitarianism. Most provisions of the American Constitution and Bill of Rights are borrowed almost wholesale from the English Constitution, English Petition of Right, and English Bill of Rights that came just before them in the same tradition. Where there was a discussion of which changes to make,

  1. when the argument was, we should do this rather than that because the calculated consequences of this are better than the calculated consequences of that, I submit that is political science or social engineering, not utilitarian ethics.
  2. when the argument was, we should do this rather than that because that wrongfully infringes on our rights as Englishmen, I submit that argument was based in sacred tradition, not utilitarian ethics, and
  3. when the argument was, we should do this rather than that because that wrongfully infringes on our self-evident natural human rights, the argument was based in deontology.

the net upvotes tell the story of which way TheMotte leans ideologically.

It is a little sad, for The Motte, that it can be assumed people upvote arguments whose conclusions they agree with (as opposed to meritorious arguments on all sides).

Can we make a "universal law" about the angles of all three sided polygons in the infinite box?

I can't think of a statistical rule that would justify it. Can you?

Those who think rationality can lead to justified beliefs think that justification and evidence can make it so that we objectively rationally ought to believe a justified theory

There is a nuance to my position that this glosses over. In my view, scientific epistemology is not just a matter of ought vs. ought not; it is a matter of rationally obligatory degrees of preference for better tested theories, on a continuum. However, when one theory is better tested than another on this continuum, and on some occasion we have to choose between the two, then we rationally ought to trust the better tested theory on that occasion.

This is subjective in the sense that our preference for a theory is our decision, but it's not like a preference for an ice cream flavor

If I understand your position correctly, it is an awful lot like the preference among ice cream flavors. Let's say you have to choose from chocolate, vanilla, and strawberry -- but you know the strawberry is poisoned. So strawberry is not a viable choice, but the choice between chocolate and vanilla remains wholly subjective. Similarly, (in your view as I understand it) when choosing among alternative theories to act on, the choice among those theories that have not been disconfirmed is a subjective preference as much as chocolate vs. vanilla.

For example, suppose a person has a choice between action A and action B, and that their goal in making that choice is to maximize the likelihood that they will continue living. Action A maximizes their chance of surviving if a certain viable (tested, not disconfirmed) theory is true, and B maximizes their chance of surviving if a certain other viable theory, in another domain, is true. They know one of those theories is substantially better confirmed than the other by every relevant criterion (say, the law of gravity vs. the most recent discovery in quantum computing). I say there is only one rational action in that scenario (trust the better tested theory). Do you say the same or different?

@NelsonRushton: As I recall, Popper held that repeated, failed attempts to disprove a hypothesis count as evidence for its truth (though never certain evidence). Am I mistaken?

@sqeecoo: You are mistaken, but it's a common mistake. In Popper's and my view, corroborating evidence does nothing, but contradicting evidence falsifies (although also without any degree of certainty).

Seeing as we recall the text differently, I was probing there for a source (other than yourself). I am not convinced that I was mistaken. Popper defines corroboration in terms of a theory's withstanding detailed and severe attempts to disprove it:

So long as a theory withstands detailed and severe tests and is not superseded by another theory in the course of scientific progress, we may say that it has ‘proved its mettle’ or that it is ‘corroborated’ [Popper, "The Logic of Scientific Discovery", p. 10]

He goes on to say that the degree of corroboration, which he views as the merit of the theory, increases with the number of non-disconfirming experiments:

When trying to appraise the degree of corroboration of a theory we may reason somewhat as follows. Its degree of corroboration will increase with the number of its corroborating instances. [Popper, "The Logic of Scientific Discovery", p. 268]

If there is a difference between what Popper said, and what I said he said, it would be that I used the word "truth". Fair enough, but so did you:

@squeeco: I think that the mission of science is to discover the actual, literal truth.

and I do not see how the following claim could be correct, in light of the quotes above: "In Popper's view,... corroborating evidence does nothing". [emphasis added]

I don't see how the principle of abductive inference isn't a statistical argument.

Good question. To answer it, we have to have a concrete picture of what statistical arguments really are, and not just a vague intuition that says "make observations and allow them to change your beliefs" -- see also this post: https://www.themotte.org/post/907/the-scientific-method-rests-on-faith/195677?context=8#context.

Statistical arguments are based, first and foremost, on random samples, and this is a premise of the theorems that justify statistical methods. Abductive inference is not based on random samples. On the contrary, it is based on decidedly nonrandom samples, chosen in a deliberate search for counterexamples. In a random sample, you must pick with your eyes closed or the test is no good, and sample size is crucial; in abduction, you must cherry pick as the devil's advocate, trying to disprove the hypothesis, or the test is no good. This means you must be an effective enough advocate to have a good chance of finding counterexamples if they actually exist -- which is why abductive inference is not objective evidence, but rests on an article of faith in the capabilities of the reasoner as an effective advocate to disprove the hypothesis in case it is false.
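For a toy illustration of that contrast, here is a sketch of the whiskey-and-coke joke as a testing problem (the drink list and the 10,000-sample figure are made up for the purpose; this is not meant as a serious statistical model):

import random

# Hypothesis under test: "anything with coke gets you drunk."
spirits = ["whiskey", "rum", "gin", "tequila", "vodka"]

def got_drunk(drink):
    # ground truth, known to us but not to the experimenter: it's the alcohol
    return any(s in drink for s in spirits)

# The "statistical" habit: sample at random from the drinks we happen to order,
# every one of which pairs a spirit with coke. No number of such samples can
# falsify the hypothesis, because the counterexamples are not in the
# distribution we are drawing from.
random.seed(0)
usual_orders = [random.choice(spirits) + " and coke" for _ in range(10_000)]
print(all(got_drunk(d) for d in usual_orders))   # True -- "must be the coke!"

# The abductive habit: cherry pick, as the devil's advocate, the one case
# that would refute the hypothesis if it is false.
print(got_drunk("coke"))                         # False -- refuted by a single chosen case

The point, of course, is not the code but the sampling discipline: eyes closed and counting in the first case, deliberately adversarial in the second.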

There are two definitions of woke on the table; there is the dictionary definition and there is what people refer to in practice as "woke". These are not the same and I am referring explicitly to the Oxford dictionary definition, which does not reference leftism in any way. Hitler definitely espoused a message of wokeness in the dictionary sense, casting the Jews, Slavs, industrialists as historical class exploiters and using this as a pretext for seizing various assets on behalf of the Volk (folks; people). A case can be made that the Ayatollahs were/are woke as well. I don't consider Hirohito a "mass murdering tyrant" because he was beloved by his people and didn't directly kill them.

One issue I have here is that I am not sure if three generations down the road - when this background morality of "normality" disappears - we actually won't end up eating babies as the new normal.

I believe that is where we are headed, and where we have been headed since the Enlightenment.

Thomas Jefferson wrote that the doctrine of equal, negative human rights under natural law was self-evident. Taken literally, this would mean that any mentally competent person who considered the matter would find it to be true -- like the axiom that m+(n+1) = (m+n)+1 for any two natural numbers m and n. Clearly the doctrine of human rights is not self-evident in this sense -- unless Plato, Aristotle and Socrates were morons after all.

For many years I charitably assumed that Jefferson meant that the doctrine of equal human rights defined us as a people. But now, after further reading, I believe that I was too charitable in my assessment of Jefferson, because I idolized him as a founding father. He actually failed to realize that the doctrine of equal human rights was not self-evident at all, but was part of his heritage as an Englishman and a nominal Christian.

The carrying on and handing down of our traditions takes effort, quite a lot of effort really. To the extent that we accept the Enlightenment liberal view that our moral traditions will take care of themselves, because they are spontaneously evident to any mentally competent person, we will not expend that effort -- and the consequence will be generational moral rot, slowly at first and then quickly. We are seeing this unfold before our eyes.

Standard Econ and political science in the Western tradition has long been effectively rule utilitarian.

Utilitarianism is a stance for reaching moral conclusions, not conclusions of cause and effect. I do not believe economists or political scientists are much in the business of making assertions of this sort in their academic work -- though you can prove me wrong by citing cases where they do.

@NelsonRushton: It only does this in the context of valid arguments that protecting individual liberty is in fact such a bulwark/safety-valve, and I don't believe such arguments exist.

@SwordOfOccom: I am flabbergasted by this since I’m basically just mirroring the logic the Founding Fathers used to create a system that allowed a lot of liberty to lower the risk of tyranny and internal strife.

To explain your flabbergastedness, can you reproduce, or quote, or outline one of the arguments you are talking about? Then we can talk about whether it does what I say it doesn't do.

Good writeup

I appreciate you saying so.

I suspect that people are drawn to rule utilitarianism because it resolves a certain bind they find themselves in. Let's suppose that I have a landlord who is an all-around scumbag, and I don't like him. Suppose I know that, if he needed a life-saving medical procedure that cost, say, $10,000, and asked me to help out, I would say no. So his life is worth less to me than $10,000. On the other hand, if I had an opportunity to do him in and take $10,000 in the bargain, and get away clean like Raskolnikov, I would not do it. I think people with certain worldviews feel obliged to articulate an explanation of why that is not irrational, and within those same worldviews, they don't have much to cling to in formulating the explanation. Dostoyevsky's explanation would probably strike them as unscientific. They don't realize that if they keep that up, their grandson might actually do it.

Or maybe they need a pretext for pointing a finger at the people they feel are doing wrong, and a way of saying something more authoritative-sounding than "boo!" that, again, flies within their worldview.

@NelsonRushton: But of all ways to square with it, to arbitrarily pick one of those alleged sins and lift it up as an abomination on Biblical grounds, while discounting or ignoring the rest, and then to use that capricious choice to justify hating another person,... Yet, as a group characteristic, that is what Evangelicals [my people] have historically done, and to some degree continue to do, in large numbers by comparison with the general population

@Felagund: Is this really a depiction of what is going on typically?

Note that I didn't say "is" and I didn't say "typically"; I said historically in disproportionate numbers. Indeed, I don't see it as much as I used to -- but, then again, I don't hang out with as many old rednecks as I used to. Here is one anecdote. In 1997 a gay nightclub (the "Otherside Lounge") was bombed in Atlanta, Georgia; 5 people were injured, one critically, though no one died. A nominally Christian group calling itself the "Army of God" claimed responsibility. That much is not indicative; there are whackos who identify as everything and their existence in small numbers doesn't necessarily reflect on anything. What is more notable is that I heard someone who was not (viewed as) a whacko, on his regular radio show, minimize and nearly excuse the bombing on the grounds that it targeted gays. Before you read the next paragraph, I invite you to guess whether the speaker was (a) a leftist pundit, or (b) a Christian pastor.

Of course he was a Christian pastor. His words as I remember were, "You may have heard that a gay bar was bombed in Atlanta recently. Well, I wouldn't worry about that too much. God bombed Sodom and Gomorrah." This was 1997 in Athens, Georgia (1 hour from Atlanta). It was not a hot mic moment; it was apparently his planned public remark on the event, which he expected to be assented to en masse by likeminded brethren. Now that was a tail event (that is, strange and unlikely); it surprised me to hear it, and even a person my age (56) from the deep South could have gone their whole life without hearing anything that bad from someone in a position of public authority. But what is more important is that, given that somebody did say it, I think any reasonable person who has been around that block would guess (b) rather than (a) -- because we know which group is more likely to have that kind of tail event, and the tail is indicative of milder tendencies of the same sort in larger numbers, of which I saw many.

So it's at least plausible to me that some of the commands in Acts 15 are intended to be for the sake of peace and people's consciences, but I'm not entirely certain.

It's plausible, but I don't think the Christian rednecks who despise gays in the name of God have thought it out far enough to get off the hook; I don't think any Biblical argument justifies the actual level of focus they put on sexual deviance as a sin relative to others that would be rationally subject to the same argument, and I don't think their animus is targeted wholly at the acts rather than the actors. (Nonetheless, those people would be voting with me on almost every living political issue of today -- and if there is ever another civil war in America we will be on the same side. In fact, if it comes to a shooting war, I wouldn't be surprised if they are about the only ones on that side that actually fight.)

Laws are conventionally divided into three sorts: moral laws, which apply universally (e.g. Thou shalt not murder); ceremonial laws, which were for Israel as a church, roughly, and so no longer apply post-Christ (e.g. food laws); and civil laws, which were for Israel as a government (e.g. cities of refuge).

I think the word "Conventionally" here appeals to a vague and precarious authority. I know that there are Hebrew words for the three sorts of laws, and that the idea of giving them different levels of force in modern times goes back at least to Aquinas -- but his scriptural basis for it [Summa Theologica, Question 99] seems pretty thin to me, and most discussions of the distinction that I see give no scriptural basis at all. Anyway, whether it is Aquinas's argument or not, I would be curious to know if you (@Felagund) know of a Biblical argument for the distinction in force, for us today, between the three kinds of laws.

I'll note that I don't think that the prescription of putting them to death is necessary, as we are no longer living under the civil law of ancient Israel.

This suggests that you believe it was necessary and proper, in ancient Israel, to judicially stone people to death for homosexual sodomy, idol worship, sabbath breaking, adultery, premarital sex (in the case of women), etc. To be clear, is that your view?

So that you know where I am coming from, this is my view of scripture (now in my Motte bio): I identify as an Evangelical Christian, but many Evangelicals would say that I am a deist mystic, and that I am going to Hell. Spiritually, the difference between me and Jordan Peterson is that I believe in miracles. The difference between me and Thomas Paine (an actual deist mystic) is that I believe the Bible is a message to us from the Holy Spirit, and the difference between me and Billy Graham is that I believe there is noise in the signal.

The current battle lines of elite and counter elite in the west are once again drawn on a precise difference between two modes of dealing with modernity. And that difference is quite exactly the one we are talking about here, between an individual desire of transcendence, escape and a collective desire of management, control.

Management and control by what agency and to what end?

So you really must "call out" every moment of evil you see in the world or you're guilty too?

Of course not. This all-or-nothing, fall-on-your-sword straw man was the first thing the Devil ever said: "Did God actually say, 'You shall not eat of any tree in the garden'?"

Most people are just humans trying to get by, and that is alright.

It might be "alright", whatever that means, but it makes them lesser men. We (Americans) live in a relatively free, safe, and prosperous society because the founding fathers and continental soldiers answered the call of duty to a higher purpose than minding their own business. We owe them a monumental debt that we can never pay back. We can only pay it forward by living up to their legacy of duty and sacrifice.

"We do not say that a man who takes no interest in politics is a man who minds his own business; we say that he has no business here at all." [Pericles]

Besides, the best, most accurate superforecasters and people like quants absolutely pull it out and do explicit work. In their case, the effort really is worth it. You can't beat them without doing the same.

I know quants do this, but I think it is a special case. Show me a hundred randomly selected people who are making predictions they suffer consequences for getting wrong, and are succeeding, and I will show you maybe 10 (and I think that's generous) who are writing down priors and using Bayes rule. Medical research, for example, uses parametric stats overwhelmingly more than Bayes (remember all those p-values you were tripping over?), as do the physical sciences.

If the effective altruism (EA) crowd are in the habit of regularly writing down priors (not just "there exist cases"), then I must be mistaken in the spirit of my descriptive claim that nobody writes them down. On the other hand, I would not count EA as people who pay consequences for being wrong, or as a group that is doing a demonstrably good job of anything. If they aren't doing controlled experiments (which would absolutely be possible in the domain of altruism), they are just navel gazing -- and making it look like something else by throwing numbers around. I have a low opinion of EA in the first place; in fact, in the few cases where I looked at the details of the quantitative reasoning on sites like LessWrong, it was so amateurish that I wasn't sure whether to laugh or cry. So an appeal to the authority of LessWrong doesn't cut much ice with me.

I should give an example of this. Here is an EA article on the benefits of mosquito nets from Givewell.org. It is one of their leading projects (https://www.givewell.org/international/technical/programs/insecticide-treated-nets#How_cost-effective_is_it). At a glance, to an untrained eye, it looks like an impressive, rigorous study. To a trained eye, the first thing that jumps out is that it is highly misleading. The talk about "averting deaths" would make an untrained reader think that they are counting the number of "lives saved". But this is not how experts think about "saving lives", and there is a good reason for it. Let's suppose we take a certain child: at 9 AM our project saves him from a fatal incident; at 10 AM another; at 11 AM another; but at noon he dies from exactly the peril our program is designed to prevent. Yay, we just averted 3 deaths! That is the stat that Givewell is showing you. Did we save three lives? No, we saved three hours of life.

This is the way anyone with a smidgeon of actuarial expertise thinks about "saving lives" -- in terms of saving days of life, not "averting deaths", and the Givewell and LessWrong people either know that or ought to know it. If they don't know it, they are incompetent; and if they know it, then talking about "averting deaths" in their public-facing literature is deliberately deceptive, because it strongly suggests "saving lives", meaning whole lives, in the mind of the average reader. To be fair to Givewell, their method of analyzing deaths averted applies to saving someone from malaria for a full year (not just an hour), but (1) that would not be apparent to a typical donor who is not versed in actuarial science, and (2) the fact remains that you could "avert the death" of the same person nine times while they still died of malaria (the peril the program is supposed to prevent) at the age of 10. The analysis and the language around it are either incompetent or deceptive -- contrary to either one word or the other in the name of the endeavor, effective altruism.
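To put the accounting point in the simplest possible terms, here is a toy sketch (my numbers, not Givewell's, and grossly simplified) of how the two conventions can come apart:

LIFE_EXPECTANCY = 70   # made-up figure for the sketch

# Child A: the program "averts a death" every year from age 1 through 9,
# but the child still dies of malaria at age 10.
child_a_rescue_ages = list(range(1, 10))
child_a_death_age = 10

# Child B: one rescue at age 5, after which the child lives a full life.
child_b_rescue_ages = [5]
child_b_death_age = LIFE_EXPECTANCY

# Convention 1: deaths averted (the headline statistic).
deaths_averted = len(child_a_rescue_ages) + len(child_b_rescue_ages)
print(deaths_averted)   # 10

# Convention 2: years of life actually gained, assuming (for the sketch) that
# without the program each child would have died at the age of the first rescue.
years_gained = (child_a_death_age - child_a_rescue_ages[0]) \
             + (child_b_death_age - child_b_rescue_ages[0])
print(years_gained)     # 74, of which child A contributes only 9

By the first convention, child A accounts for nine tenths of the program's reported impact; by the second, he accounts for almost none of it.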

That's not a cherry picked example; it was the first thing I saw in my first five minutes of investigating "effective altruism". It soured me and I didn't look much further, but maybe I'm mistaken. Maybe you can point me to some EA projects that are truly well reasoned, that are also on the top of the heap for the EA community.

If you are thinking of it as a theorem in geometry, there are these things called "axioms", which are needed to prove the theorem, as I mentioned above. To believe the theorem is true of every triangle in the infinite box, we would have to first know that the axioms were true of every triangle in the box. And what gives you that idea?

(A) IRA terrorism is or was morally justified... Yes, in my opinion. (Also the ETA and a lot of other examples like that

This is only half of the argument, my friend. The reason (A) was given a label is because it was conjoined with (B): the IRA's tactics and objectives are morally comparable to those of Hamas. That would entail that the IRA maximizes civilian casualties on their own side tactically, targets primarily civilians on the other side, and has the death of all Englishmen as a persistent and publicly stated objective. I assume you don't assert those things but I could be mistaken.

From reading Nixonland, he documents a bunch of right wing protestors doing the same thing left wing protestors did in the 1960's. We never really hear about it though. We only hear about left wing protestors vs police or the National Guard.

How many is a bunch, and what counts as the same thing? I'm curious to see a list of these, and I challenge you to a game: you name a documented act of Republican mob violence (where most of the protesters presumably self-identified as Republicans and at least one person was injured), and I will name two Democrat acts of mob violence, etc., back and forth for as long as you can come up with them. "A dollar a ball until the loser says quit" [The Hustler].

People also tend to upvote a nice, spicy polemic

Incidentally, I don't care for the term "spicy" as a euphemism for things that are uncomfortable, or potentially expensive, or potentially dangerous to say. If someone declines to make an objectively reasonable post because it is "spicy", then maybe they just don't like spicy stuff; different strokes! On the other hand, if someone declines to make a post because it is uncomfortable (or expensive or dangerous), they are keeping their head down, or perhaps cowering, instead of speaking the truth. There are times to keep your head down, but there is never a time to deceive yourself about the fact that you are keeping your head down.

Why don't you hold your self to the same standard you hold others? You demand they prove their math, but we are supposed to believe you because you "looked diligently"?

I think I am holding everyone to the same standard, but not everyone chooses to take the path through the constraints of that standard. As I said in the original post, the principle of abductive inference, which is treated as good evidence in research in the physical sciences, says that diligent efforts to disconfirm a theory, which come up empty, are evidence for that theory. I used that rule and you are welcome to use the rule as well. I also argued that the use of that rule rests on a subjective faith in a certain miracle, which I do embrace. For a rough analogy, if I said that only people who believe in the Axiom of Choice can rationally assert the existence of non-measurable sets, I would be holding everyone to the same standard -- even though some people will embrace the axiom and the theorem and some will embrace neither.

Why don't you prove that z can never be 1 for any finite number of observations, no matter how small the desired confidence c is, unless c = 0

As far as confidence intervals go, this actually is a theorem, and does not rest on abductive reasoning. For a pretty accessible special case you can read this article: https://en.wikipedia.org/wiki/Rule_of_three_(statistics). I know statistics pretty well and I do not know of any method that gets around this limitation. This includes Bayes rule (see below). You can soundly refute my claim by showing us a statistical inference that does.
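To make the claim concrete, here is a small sketch of the bound behind the rule of three (standard confidence-interval arithmetic, nothing specific to my argument): after n observations with zero exceptions, the one-sided upper bound on the exception rate at confidence c is 1 - (1 - c)^(1/n), which is strictly positive for every finite n.

def upper_bound_on_exception_rate(n, c=0.95):
    # If the true exception rate were above this bound, a run of n
    # exception-free observations would have probability less than 1 - c.
    # The familiar "rule of three" (about 3/n) approximates it for c = 0.95.
    return 1.0 - (1.0 - c) ** (1.0 / n)

for n in (100, 10_000, 1_000_000, 10**9):
    print(n, upper_bound_on_exception_rate(n), 3 / n)

# However large n gets, the bound never reaches zero, so no finite number of
# observations yields z = 1 (no exceptions, anywhere, ever) at any nonzero confidence.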

I will point out this is another claim you've provided no proof for: It will be mathematically impossible for you to produce a nonzero posterior probability if you do not have a nonzero prior

I assumed this would be obvious to anyone who was familiar with Bayes rule in the first place, and that people who are not familiar with Bayes rule are probably not familiar with probability theory, and would not be interested in reading mathematical proofs about it -- but since you asked, here is the proof: Bayes law says that P(A | B) = P(B | A)*P(A)/P(B). Suppose the prior, P(A), is zero; then the right-hand side is zero, and so the left-hand side, which is the posterior, is also zero. This shows that if the prior is zero then the posterior is zero. Thus, if the posterior is nonzero, then the prior must be nonzero as well.
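For anyone who would rather see it numerically than algebraically, here is a trivial sketch (the likelihood and marginal values are made up; only the structure of Bayes rule matters):

def posterior(prior, likelihood, marginal):
    # Bayes rule: P(A | B) = P(B | A) * P(A) / P(B)
    return likelihood * prior / marginal

# With a zero prior, no evidence can move the posterior off zero,
# no matter how strongly the evidence favors the hypothesis.
print(posterior(prior=0.0, likelihood=0.99, marginal=0.5))   # 0.0
print(posterior(prior=0.2, likelihood=0.99, marginal=0.5))   # 0.396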

I mean, we know they are not always true, but you can certainly measure how far a planet's position deviates from that as predicted by Kepler's laws after some time.

You can measure that, and then measure it again and again and again. That comes to four measurements. But the law says that everything orbiting everything in the universe follows the same rule, and those four measurements don't support that conclusion. They don't even support the conclusion that the law continued to hold at each of the infinitely many times between when you took your measurements.

can you come up with an objective, quantifiable number of confidence for whether a coin will flip heads?

No, but I do not claim to believe that the coin will flip heads, much less that it is a universal law that it will flip heads every time. Some people do believe such things, though, about some of the laws of physics (viz., that they work every time).

Okay well in that case it's also hypocritical to criticize Cthulhu and Star Wars lore for not being literally true. Hooray, solipsism. This entire line of argument advances absolutely nothing.

If someone just jumped into this thread without reading the history, they might gather that I (or someone else) had criticized Cthulhu on the grounds of not being literally true. So for anyone who is jumping in in the middle, nothing of the sort happened.

Moreover, I would never detract from the merit of Shakespeare or Homer on the grounds that there is no evidence for the literal truth of their writings. Nor would I detract from the merit of a physics text on the grounds that there is no objective evidence that its contents are literally true. I do not think I am asking for special status for anything. I am arguing against a special status for the physical sciences, that I believe is widely attributed to them.

It essentially amounts to a theist's special request for their beliefs to be treated as intellectually serious even though they can't point to any justification... request denied until one of these arguments successfully and meaningfully distinguishes Christianity, theism, whatever, from an infinite number of bullshit things I could make up on the spot.

I agree that you should deny that request if somebody made it -- but I don't think I did (unless "whatever" casts a very wide net).

My thesis is that (1) if you hold nonzero confidence in the literal truth of a universal physical law, then you should be able to give reasons for your belief, and (2) the only rule of evidence I know of that would justify such a conclusion (abductive inference) -- and the one that is actually used in the physical sciences to establish credibility of physical theories -- rests on premises that are infinitesimally unlikely to hold in the absence of a miracle.

As for your the law of gravity vs. the most recent discovery in quantum computing example, it's slightly confusing to me. Does option B that uses quantum computing go against the law of gravity? If so, I would reject it, since I believe the law of gravity to be true (tentatively, without justification). Or does option B use both the law of gravity and quantum computing? In that case I'm not really choosing between gravity and quantum computing, but whether to additionally also use quantum computing in my plan, in which case how well-tested quantum computing is compared with gravity is not really relevant, since I'm using gravity as well.

I meant something like this: the safety of A rests on the law of gravity but not the law of quantum computing; the safety of B rests on the law of quantum computing but not the law of gravity. To make the example a little more concrete (but science fiction, requiring some suspension of disbelief), your choices are to take (1) a self-flying plane that is programmed with a model using the Law of Gravity, but no laws of quantum computing, and has been operating safely for thirty years, or (2) the new teleporter -- whose safety has been tested but not disconfirmed, and has been proven safe contingent on the latest law of quantum computing, but not the law of gravity. Your goal in the selection is to maximize the probability of your survival.

Thanks for the researched response. I think I finally understand the disagreement now.

@NelsonRushton: As I recall, Popper held that repeated, failed attempts to disprove a hypothesis count as evidence for its truth (though never certain evidence). Am I mistaken?

As you point out, Popper does not regard repeated experiments as progressively raising our confidence in the probability that the theory is true; his notion of the merit of a theory is much more nuanced than "probability of truth". So that is where my statement differs from his view; I am convinced now that I was mistaken and thank you for pointing it out.

@squeecoo: In Popper's and my view, corroborating evidence does nothing, but contradicting evidence falsifies (although also without any degree of certainty).

But I believe you are also mistaken, and your view differs from Popper's in a more profound way. If you open an electronic copy of Popper's book (https://philotextes.info/spip/IMG/pdf/popper-logic-scientific-discovery.pdf), hit ctrl-f, and search for "degree of corroboration", you will find that that phrase occurs 84 times -- about once every five pages for the length of the book. So, while his notion of merit is not defined in terms of truth or probability of truth, he does hold that repeated, diligent, failed attempts to disprove a theory tend to progressively confirm its merit (or to use his word, its "mettle") -- which is a far cry from doing nothing. For Popper, non-disconfirming experiments do something (viz., "corroborate") and a greater number of such experiments does more of that thing:

Its [the theory's] degree of corroboration will increase with the number of its corroborating instances. [Popper, "The Logic of Scientific Discovery", p. 268]

If I read you correctly, you seem to believe that there should be no difference in our willingness to act on a theory after one rigorous non-disconfirming experiment, versus 1000 of them by 1000 different researchers using different methods and bringing different perspectives and skill sets to the table (say, Newton's law of gravity vs. some new law of quantum computing). Do I read you incorrectly (or did you perhaps misspeak)?

@squeecoo: I think that quantum computing has been only weakly tested and I'm not willing to bet on it working for my missile defense system.

Ok that is a relief to hear, but it is not consistent with your other statement above (corroborating evidence does nothing), so it seems you misspoke.

the Scientific Method is just a bounded, modestly idiot-proofed form of Bayesian reasoning.

I do not see anything Bayesian about the scientific method. When I pick up the text for the lab component of a college course in physics or chemistry and look to see if there are any priors, conditional probabilities, or posteriors written down in it, I predict that the median number of Bayesian inferences I will find over the course of 15 experiments is zero. Here is one such text: https://www.lehman.edu/faculty/kabat/PHY166.pdf (I selected it because it was the top hit in my google search that had a full pdf, but if you think I'm cherry picking you are welcome to try a different one). There is no Bayesian reasoning in that text, nor do I recall ever seeing any in the half dozen lab science courses I took in high school and college. I think the same will be true if you look, not at an undergraduate course, but in a physics or chemistry journal.

But if what physicists are really doing is a special case of Bayesian inference, I find it peculiar that they do not seem to know what they are doing, because they sure don't talk about it that way. So I'm curious what makes you think they are. It is a pretty important question to me, because if you can show me how typical forms of experimental reasoning in the physical sciences are Bayesian, or in any way probabilistic or statistical, that would disprove the miraculous aspect of their success.

Incidentally, I think this is the deepest and most informed comment in the thread so far.

Sure, the things we call the "laws of nature" may not be the true causal description of the universe at some level. What matters is that the universe acts as if they were universally true, as best we can tell.

This may be the view of many scientists who think about the epistemology of science if you pin them down (their motte!), but I think if you talk to people walking down the street, they think we are in the business of discovering natural laws that are actually true. I suspect that when we are not pinned down, we scientists like to think that we are searching for truth itself (our bailey!), and it seems like the phrase "may not be the true causal description... at some level [emphasis added]" hedges against giving up that bailey. As I recall, the word for not-true is "false", unqualified by levels.

If you would affirm that science has no hope of attaining even tentative knowledge of natural laws that are literally true -- but instead that its mission is purely to discover useful (but presumptively fictitious) models of the physical world -- then that position is consistent with my argument, with or without miracles. From the post, I am perhaps a little more than halfway confident you would affirm that, but I am not sure, and I'd like to know.