sqeecoo

0 followers   follows 0 users
joined 2022 October 28 10:34:41 UTC

User ID: 1759


Your post is completely fine in my opinion. I sense way more smugness and shady thinking from that mod post.

Right. Well I'd definitely be interested in testing the teleporter, but I wouldn't risk my safety in a first test of something, so I'd choose the plane, which I believe is safe (tentatively, as my best guess upon rational deliberation that produces no justification but may eliminate errors). Like I said, choices and beliefs can only be rational in the sense of using deliberation and reason to make our best guess, and are never rational in the sense of being justified, warranted, reliable, established, or anything of that sort, as this is not possible.

My position is that no actions or beliefs are "rational" in this sense, of being justified or mandated by reason. Actions or beliefs can be rational in the sense that we have deployed the method of rational criticism (and, if possible, empirical testing) in order to eliminate errors, with no justification/warrant/likelihood/etc. being involved at any point. So the contents of a belief don't determine its rationality (reason doesn't tell you what to believe), but the methods we have used in order to try to find errors in that belief can be rational. A choice can be rational if we've employed critical thinking in making it, and this is the only sense in which decisions can be rational, since justification is not possible.

In comparison to ice cream preference, yes, both are arbitrary in the sense that we have to judge for ourselves (we are the arbiters of) what to believe/which ice cream to like. But we generally don't employ critical discussion and experimentation in our ice cream choices, although we certainly can. Again, it's the methods of critical analysis and experimentation that are rational, and a decision can be made with deliberation and with the use of reason, in contrast to a preference for ice cream, which usually does not involve this. But the beliefs or actions themselves can never be rational in the sense of justified, warranted, mandated by reason, etc.

As for your law of gravity vs. most recent discovery in quantum computing example, it's slightly confusing to me. Does option B, which uses quantum computing, go against the law of gravity? If so, I would reject it, since I believe the law of gravity to be true (tentatively, without justification). Or does option B use both the law of gravity and quantum computing? In that case I'm not really choosing between gravity and quantum computing, but deciding whether to additionally use quantum computing in my plan, in which case how well-tested quantum computing is compared with gravity is not really relevant, since I'm using gravity as well.

In general, my view of the preference for the better-tested theory (and my reading of Popper's opinion here) is that this is soft, rule-of-thumb methodological advice, but not a "rationally ought" rule. Since we want to test our theories as severely as possible in order to hopefully eliminate error, all else being equal we should prefer the better-tested theory - but not in the sense of "rationally ought", rather in the sense of "let's test as much as possible". But all else is rarely equal, and "better tested" is not an exact calculation. So it's sort of like the advice "it's a good idea to castle your king in chess". Yes, that's good advice, but it's not necessarily always the best choice, and you are not "irrational" for deciding not to castle. A clearer formulation of this advice has been advanced by Miller, Popper's former student, who formulates this stuff much more dryly than Popper but in a way more suited to the style of modern analytical philosophy (Out of Error, p. 124):

Prefer the practical proposal that best survives critical scrutiny is more transparent and more obviously sound advice than Act on the best-tested theory, which is often not real advice at all. What must not be admitted is the suggestion that a proposal that has been subjected to critical scrutiny, and has survived it, thereby qualifies as a better proposal than one that has not been subjected to critical scrutiny. That would convict deductivism not only of inductivism but of clairvoyance, and even inductivists and justificationists can be expected to resist a claim at once so far-seeing and so reactionary. Even the advice Prefer the practical proposal that best survives critical scrutiny is defective in this respect. Since subjecting a proposal to criticism is itself a practical action of a kind, it cannot, on pain of infinite regress, always be ill advised to try something yet untried. It is not of course being suggested that it is a mistake to prefer or to adopt the best-criticized proposal, only that it need not be a mistake not to do so. At this point considerations of utility often intervene. The correct advice is, as usual, negative: Refrain from any practical proposal that does not survive critical scrutiny as well as others do. Observe that someone who rejects this advice will at once be vulnerable to critical attack.

Truth in the classical sense of correspondence to reality. If I say aliens exist and you say they don't, one of us has hit upon the truth despite both of us guessing. We won't know which of the two claims is true, but one of them is true, i.e. it corresponds to reality.

What would be the truth in the "strict sense", as you put it?

You said that Popper thinks corroboration (failed attempts to falsify a hypothesis) count as evidence for its truth. Instead, Popper says that theories cannot be verified. The first sentence of the chapter you quote is:

Theories are not verifiable, but they can be ‘corroborated’. [Popper, "The Logic of Scientific Discovery", p. 248]

In the footnote soon after:

I introduced the terms ‘corroboration’ (‘Bewährung’) and especially ‘degree of corroboration’ (‘Grad der Bewährung’, ‘Bewährungsgrad’) in my book because I wanted a neutral term to describe the degree to which a hypothesis has stood up to severe tests, and thus ‘proved its mettle’. By ‘neutral’ I mean a term not prejudging the issue whether, by standing up to tests, the hypothesis becomes ‘more probable’ [Popper, "The Logic of Scientific Discovery", p. 249]

And finally, here's Popper stating the difference between psychological questions of one's state of mind (that one can be "very certain") and epistemological questions of the state of the evidence, where evidence cannot verify hypotheses.

Like inductive logic in general, the theory of the probability of hypotheses seems to have arisen through a confusion of psychological with logical questions. Admittedly, our subjective feelings of conviction are of different intensities, and the degree of confidence with which we await the fulfilment of a prediction and the further corroboration of a hypothesis is likely to depend, among other things, upon the way in which this hypothesis has stood up to tests so far—upon its past corroboration. But that these psychological questions do not belong to epistemology or methodology is pretty well acknowledged even by the believers in probability logic. [Popper, "The Logic of Scientific Discovery", p. 252]

So corroboration is a measure of how well-tested a theory is, and the severity of the tests it has undergone. But corroboration does not provide evidence for the truth of the hypothesis. Here's a quote from Popper, "Objective Knowledge", 21f:

From a rational point of view we should not "rely" on any theory, for no theory has been shown to be true, or can be shown to be true. ... in spite of the "rationality" of choosing the best-tested theory as a basis of action, this choice is not "rational" in the sense that it is based upon good reasons for expecting that it will in practice be a successful choice: there can be no good reasons in this sense, and this is precisely Hume's result.

I like my Popper but I hate looking for quotes - I'm much more interested in the substance of the discussion we're having and the view I've outlined as a response to yours.

It's definitely hard to separate negative effects of COVID from negative effects of vaccination, but here's an Israeli study on 200,000 unvaccinated people who had COVID that didn't find any increase in myocarditis in unvaccinated subjects post-COVID: https://www.mdpi.com/2077-0383/11/8/2219

So it seems likely that myocarditis is primarily vaccine-related.

As for health risks from vaccination, there's plenty of data on a clear correlation. Here are some highlights:

A Danish study on the original clinical trials that were performed to authorize the vaccines found higher overall mortality among those vaccinated with the mRNA vaccines than in the unvaccinated control group (study linked in article): https://brownstone.org/articles/have-people-been-given-the-wrong-vaccine/

Mind you, this is very early on when the circulating strain of COVID actually matched the one in the vaccines, and they offered the best protection against COVID, but vaccination STILL had a negative effect on overall mortality. Vaccination=higher mortality, indeed.

Here's a study finding a 16% increase in serious adverse events after COVID vaccination, also based on the original trials: https://www.sciencedirect.com/science/article/pii/S0264410X22010283

If you look at the VAERS database (the official US database for tracking vaccine adverse events), 75% of total reported vaccine-related deaths in the last 30 years have been in 2021 and 2022 - in other words, more vaccine deaths have been reported since the introduction of COVID vaccines than from all other vaccines over the last 30 years. Go to https://vaers.hhs.gov/data.html, and request data on death from all vaccines by year.

Now VAERS reports are far from fully reliable, but they are submitted by physicians, and the comparison with previous vaccine data is extremely negative for COVID vaccines. More vaccine deaths in the last two years than in the last 30 years combined, in the best vaccine adverse effect tracking database in the world. Wow.

As for the FDA, here's an article in the BMJ (a top medical journal) basically begging the FDA to release its COVID vaccine safety data, which it hasn't done: https://www.bmj.com/content/379/bmj.o2527?fbclid=IwAR3e8Rv7UdOUjx60Vf7CnrtZAcM7rCVxl5IRpT76ngyTokkALHVCbiO3Naw

Again, these are just the highlights - it seems pretty clear to me that there is a "noticeable health risk" from the vaccines.

I'll do another reply since I think we're still talking past each other a bit.

And yeah, it's a shame our talk is buried so deep nobody is likely to read it :D Still, I found it really fun and useful!

First, let me say I don't take it for granted that objective reality exists - I believe it does, which is a conjecture like anything else, and open to criticism and revision. Objective truth, however, would exist even if there is no objective reality: in that case, the statement "there is no objective reality" would be objectively true, and this is what I would like to believe if it is true. Popperianism (or, as it's less cultishly called, critical rationalism) requires no assumptions that are not in principle open to critical discussion and rejection, which is in this view the main method of rational inquiry.

And, if I haven't made it clear enough, I'm actually a big fan of Bayesianism. If I weren't a Popperian, I'd be a Bayesian! I'd even say it could add a lot to Popperianism: although I think the basic Popperian picture of rational inquiry is correct, the formalization of the process of critical analysis that Bayesianism could add to it could definitely be useful (although I'm not smart enough, and too confused by the many, many variants of Bayesianism, to attempt such a project myself). Overall though, some variants of Bayesianism, yours I believe included, are right about almost everything important to Popperians, especially the central point: accepting the skeptical and Humean objections to rational justification, while retaining the use of reason and evidence as the method of science. Popperians would add "and objective truth as the aim of science", on which I'm still not quite sure where you stand. The main disagreement, as I see it, is on the role of evidence, which is negative for Popper - evidence can only contradict theories - and positive for Bayesians - evidence can support theories, raising their subjective probability.

I think the discussion of whether objective reality exists and whether we can be certain of it is a bit of a sidetrack here - I completely agree with everything you said on it: we can never have direct access to objective reality (Popper would say that all our observations are "theory-laden"), and we cannot be sure that it exists, and I'm not saying I require you to demonstrate that it does to practice Bayesianism. My main point is that Bayesian calculations are unmoored from objective reality (they say nothing about it), unless you smuggle in additional induction-like assumptions that allow you to make inferences from Bayesian calculations to objective truth, in which case you run into Humean objections. And this is where I'm still uncertain of your position. You say:

So I certainly act as if my observations of reality or the advancements of science are evidence that my subjective reality aligns with (hypothetical) objective reality.

But do you think your observations are evidence that your subjective reality aligns with objective reality? If yes, how does this relationship work, and how does it avoid Humean objections? If no, like I said, that would be for me an unacceptable retreat from talking about what we are actually interested in, namely objective truth, not subjective probability. We can agree to disagree on that, that's not a problem, but I'm not totally clear what your position is on this, given that you have said things like the quote above, but have also talked about being able to convert subjective probability into truth. I'd like to understand how you think this works, from a logical standpoint. Or is your position perhaps something analogous to Hume's solution to the problem of induction (which I also disagree with) - namely that we act as if induction is rational although we are irrational in doing so, for we have no other choice? This would be saying that while strictly speaking Bayesian calculations have no direct relationship to objective truth, we act as if they do. This is what I gather from the above quote, but you've also talked about probability-to-truth conversion, so I'm still unclear on that point.

Let me attempt an analogy using the map and territory metaphor to describe how I see our positions. It's a spur-of-the-moment thing, so I apologize in advance if it misses the mark, but in that case you explaining how it does so will likely be illuminating for me.

So we are blind men in a maze (the "territory"), and trying to map it out. We are blind because we can never directly see the maze, let alone get a bird's eye view of it. Now many people, the majority even, think that we are not blind, and convince themselves that they can see the maze (that we can have justified true beliefs directly about objective reality). You and I agree that this is not possible, that our mapping of the maze is ultimately guesswork. We can't be sure there even is a maze! But we're trying to figure out how to act and what to believe. Now I think the best way to go about mapping the maze is to propose conjectures on the layout of various parts of the maze (i.e. scientific hypotheses), which will always be guesswork, and then test them out: if this map section I've proposed is correct, for instance, we should be able to walk 36 steps in this direction, and then turn left. If I attempt this and run into a wall, then my proposed map section isn't right - I gotta reject it (the hypothesis is falsified). Of course, I might have miscounted the steps, a wall may have collapsed, or any number of things might have messed up my experiment - like in the neutrino example, the experiment might be wrong, and falsification is guesswork too. But this is the role played by evidence: attempting to walk the maze, i.e. confronting our hypotheses with reality, and seeing where they clash, albeit blindly and without any justification. If my conjectural map seems to work out, if it passes the test, this says nothing additional about it corresponding to the maze. Evidence is used to contradict our guesses, not support them, in my view. And this is where we start to disagree. You think that every step you take that doesn't contradict your proposed map (all supporting evidence for the hypothesis) raises your subjective probability/expected utility/confidence in your proposed map of the labyrinth.
To which I say: ok, your confidence is increased by Bayesian calculation, but what does that tell us about the labyrinth? To me it seems you are calculating your confidence in the map, but it's the labyrinth we are interested in, and I'm not sure if and how you translate your confidence in the map into claims about the labyrinth. I just directly make claims about the labyrinth, which are guesses, and my subjective confidence in them is irrelevant - the correspondence of my guesses to the labyrinth is what matters and what I'm trying to guess correctly. If you don't claim anything about the labyrinth at all and are only talking about your confidence in the map, then I think you're missing the mark - it's the labyrinth that we are interested in.
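For concreteness, the Bayesian step we keep arguing about is just Bayes' theorem applied to a subjective prior. Here's a minimal sketch in Python, with made-up numbers for the map example (everything here is illustrative, not anyone's actual model):

```python
# Minimal sketch of one Bayesian update (illustrative numbers only).
# posterior P(H|E) = P(E|H) * P(H) / P(E), with P(E) expanded by total probability.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior credence in hypothesis H after observing evidence E."""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

# Hypothetical: prior credence 0.5 that my map section is correct;
# a successful test walk is twice as likely if the map is right.
posterior = bayes_update(prior=0.5, likelihood_if_true=0.8, likelihood_if_false=0.4)
print(posterior)  # ≈ 0.667 - confidence in the map rises
```

The calculation is perfectly well-defined; the dispute is only over what, if anything, the resulting number says about the labyrinth itself.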

Looks like we're on the same page on the overall epistemic status of scientific theories, namely that they are not justified by the evidence and always remain conjectural. That's not a knock against Bayesianism, I agree!

Bayesian updates are, as I understand it, the optimal way to update on new evidence in the light of your existing priors

The optimal way in order to do what? What would you say is the aim of science?

For me, it's the commonsense notion of truth as correspondence to reality. Of course, we cannot know or be justified in believing that our theories are true, but they can still be true guesses if they correspond to reality, and it is this reality that we are interested in.

You say that this binary notion of truth is unattainable, so what do you replace it with? Probability calculations? What do those achieve? What is their relationship to reality? There are many variants of Bayesianism, and they are often very fuzzy about this point, so I'm trying to pinpoint your position.

Popperian falsifiability is simply dysfunctional. Taken at face value, recall those experiments that suggested neutrinos move faster than light? That is evidence that neutrinos move faster than light.

A serious Popperian would immediately give up on the idea that nothing can exceed the speed of light in a vacuum. A sensible Bayesian, which humans (and thus physicists) are naturally inclined to be, would note that this evidence is severely outweighed by all the other observations we have, and while adjusting very slightly in favor of it being possible to exceed the speed of light with a massive object, still sensibly choose to devote the majority of their energy to finding flaws in the experiment.

This is not quite right. For a Popperian, accepting the results of a test is a conjecture just like anything else. We are searching for errors in our guesses by testing them against reality - if we suspect a test is wrong, we are very welcome to criticize the test, devise another test, etc. It is only if we accept a negative test result that we have to consider the theory being tested to be false, by plain old deductive logic, since it is contradicted by the test result. But a serious Popperian is quite capable of being suspicious of an experiment, and looking for flaws in it.
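The "plain old deductive logic" here is just modus tollens: a falsification is deductively valid once (and only once) the test result is conjecturally accepted:

```latex
% If theory T entails observation O, and we (conjecturally) accept
% that O did not occur, then T is false:
\[
\frac{T \rightarrow O \qquad \neg O}{\neg T}
\]
```

Note that nothing in this inference justifies $\neg O$ itself - accepting the test result remains a guess, which is why a Popperian may instead keep $T$ and challenge the experiment, as with the neutrinos.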

Nobody has a solution to infinite regress, barring "I said so". As far as I can tell, you've got to start somewhere, and Bayesianism leads to more sensible decision theories and is clean and simple.

I have no problem with starting somewhere, but I don't claim our theories can ever be anything more than a guess, since, as you seem to have agreed, they are ultimately baseless due to infinite regress. In the context of this discussion on justification and the basis of science, I'm ok with Bayesianism that only claims to be decision theory, a formalized account of how we try to temper our guesses by reason and experience with no justification or basis ever being provided, which is also the Popperian view of the epistemic status of science. Bayesianism would then be a methodology to help in our conjectural decision-making, but would never elevate our theories beyond the status of a guess, in the sense of them having some sort of justification or basis. Do we disagree here?

Given that English is an imprecise language, feel free to interpret my 99.9999% confidence that the Sun will rise tomorrow as being equivalent to "it's true the Sun will rise tomorrow".

Ok, so if I'm understanding you right, you do care about the truth of your beliefs, not just about your confidence in them. So what's the logical relationship between your calculation of confidence in a theory and the truth of that theory? What is the epistemic benefit of confidence calculation, as opposed to a Popperian conjecture? It seems to me that if you are mistaken about the truth of the belief in question (as you would be with regard to the sun rising tomorrow if you went to, say, Iceland in winter), your high calculated confidence does nothing to mitigate your mistake. You are equally wrong as a Popperian who would just say he guessed wrong, despite your high confidence. And if the belief in question is true, it's just as true for the Popperian who only claims it to be a guess, regardless of confidence calculation. So what is the epistemic benefit of the confidence calculation?

To clarify a bit more, I see two questions we are discussing. First, whether Popper's falsificationist "logic of science" is a better description/methodology of science than Bayesianism. We can set that aside for now, as it is not the focus of the topic. The second question that's relevant to the topic at hand is whether you think Bayesianism can provide some sort of justification or rational basis for claims about the truth of our beliefs that elevates them to something more than a guess. We certainly seem to agree that we can temper our guesses using logic and reason and experience, but in the Popperian view all of this is still guesswork, and never elevates the epistemic status of a theory beyond that of a guess. So tell me if and where we disagree on this :)

Yup, you got it. There's no establishing a rational basis for action, it cannot be done. You have done a good job articulating some of the obstacles to this in your original post. We can, however, still use reason and logic in the method of eliminating errors in the pursuit of truth. That's Popper's insight.

A small note: there is no "known false" category. Falsification is not justified either, it is as conjectural as anything else. So yes, justification doesn't work, and there is no rational basis to be had. But we can still engage in the rational pursuit of truth, in the sense of using reason and experience to temper our conjectures about the world.

As for your future reading, go with your interests, of course, but I can still recommend this short article articulating this position: https://www.science.org/doi/10.1126/science.284.5420.1625

The beauty and clarity of Popper's view is relinquishing justification and the search for a "basis", which reason and rationality are not capable of providing, but still maintaining rationality, empiricism, and the pursuit of truth. It's worth keeping in mind at least, as a possible different path that eschews the use of justification and "good reasons" but retains the use of reason and truth as the aim of science. If ever you stop believing in miracles, you need not despair of reason just yet, give Popper's view a shot first :)

I'll leave you with a final Popper quote:

And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: ‘How can we hope to detect and eliminate error?’ The question of the sources of our knowledge, like so many authoritarian questions, is a genetic one. It asks for the origin of our knowledge, in the belief that knowledge may legitimize itself by its pedigree. The nobility of the racially pure knowledge, the untainted knowledge, the knowledge which derives from the highest authority, if possible from God: these are the (often unconscious) metaphysical ideas behind the question. My modified question, ‘How can we hope to detect error?’ may be said to derive from the view that such pure, untainted and certain sources do not exist, and that questions of origin or of purity should not be confounded with questions of validity, or of truth. …. The proper answer to my question ‘How can we hope to detect and eliminate error?’ is I believe, ‘By criticizing the theories or guesses of others and – if we can train ourselves to do so – by criticizing our own theories or guesses.’ …. So my answer to the questions ‘How do you know? What is the source or the basis of your assertion? What observations have led you to it?’ would be: ‘I do not know: my assertion was merely a guess. Never mind the source, or the sources, from which it may spring – there are many possible sources, and I may not be aware of half of them; and origins or pedigrees have in any case little bearing upon truth. But if you are interested in the problem which I tried to solve by my tentative assertion, you may help me by criticizing it as severely as you can; and if you can design some experimental test which you think might refute my assertion, I shall gladly, and to the best of my powers, help you to refute it.

Whew, you wouldn't believe the number of times I've heard the "Popper is a positivist" claim. From Stephen Hawking, for instance. I don't mean that as an indictment of the person making the claim, really - you don't have to know everything - but of the secondary sources that taught people wrong.

Popper does claim truth for his theories, though, in the sense of theories being true through correspondence with reality, but without us being able to know whether they are true. I agree that, while interesting, verisimilitude never managed to become very clear or coherent. But his basic "logic of scientific discovery" does not rely on it.

There's an interesting bit on Kuhn in Popper's Realism and the Aim of Science, where Popper basically says he has no problem with Kuhn (or at least with a non-relativist reading of him) and that Kuhn did good work describing the scientific process, but that this doesn't really clash with Popper's views.

But I believe you are also mistaken, and your view differs from Popper's in a more profound way. If you open an electronic copy of Popper's book (https://philotextes.info/spip/IMG/pdf/popper-logic-scientific-discovery.pdf), hit ctrl-f, and search for "degree of corroboration" you will find that that phrase occurs 84 times -- about once every five pages for the length of the book. So, while his notion of merit is not defined in terms of truth or probability of truth, he does hold that repeated, diligent, failed attempts to disprove a theory tend to progressively confirm its merit (or to use his word, its "mettle") -- which is a far cry from doing nothing.

Sure, Popper is developing the idea of degree of corroboration in that book, so he mentions it a lot. But no degree of corroboration can change the epistemic status of a theory, which always remains a conjecture. Like I said, it's a common mistake, and Popper shares some of the blame for this by speaking about "preference" in the context of corroboration, which sounds a lot like justification, or as if we "rationally ought" to believe the better-tested theory because it has a greater likelihood of being true, or something like that. Popper did a lot to muddy the waters here. But corroboration is a measure of the state of the critical discussion, and not in any way a measure of the justification, reliability, probability, etc. of a theory. With regard to the epistemic status of a theory being adjusted by evidence, which is what is relevant to our discussion, corroboration does nothing. Here's Popper saying it outright, in Objective Knowledge 1972 (1979 revised edition), p. 18:

By the degree of corroboration of a theory I mean a concise report evaluating the state (at a certain time t) of the critical discussion of a theory, with respect to the way it solves its problems; its degree of testability; the severity of tests it has undergone; and the way it has stood up to these tests. Corroboration (or degree of corroboration) is thus an evaluating report of past performance. Like preference, it is essentially comparative: in general, one can only say that the theory A has a higher (or lower) degree of corroboration than a competing theory B, in the light of the critical discussion, which includes testing, up to some time t. Being a report of past performance only, it has to do with a situation which may lead us to prefer some theories to others. But it says nothing whatever about future performance, or about the "reliability" of a theory.

As for the missile example:

@squeecoo: I think that quantum computing has been only weakly tested and I'm not willing to bet on it working for my missile defense system.

Ok that is a relief to hear, but it is not consistent with your other statement above (corroborating evidence does nothing), so it seems you misspoke.

This would be my conjecture, motivated in part by how poorly tested quantum computing is, but not justified or "based" on that. It's my best guess that has taken into consideration the evaluation of the state of the critical discussion on quantum computing (how well corroborated it is), but is not justified by it and remains a guess/conjecture. We can certainly take the degree of corroboration into consideration when deciding what to believe, but it can never elevate our beliefs beyond the status of conjecture, and it is in this epistemological sense that corroborating evidence does nothing.

I think I am beginning to understand your position better. So, here is my question. Do you think that the preference for acting on a better-tested theory over acting on a worse-tested theory is an arbitrary, subjective preference? like, some people like chocolate; some people like vanilla; different strokes? I assert that it is only rational to be more willing to act on a better tested theory. When did anybody ever have to accept a theory? By have to do you mean rationally ought to? If rationally ought to is what you mean, then, as I said, I disagree.

Questions of subjective/objective are always tricky, and I can answer this question on several different levels. Those who think rationality can lead to justified beliefs think that justification and evidence can make it so that we objectively rationally ought to believe a justified theory, as you say. Popper and I reject this. Theories (or beliefs in general) cannot be justified. At all. However, if we are interested in finding the truth (and this is also a subjective goal, one might be more interested in, say, propaganda), we should try to eliminate any erroneous beliefs that we have, and our tool for this is rational criticism and experiments. So we should try to deploy these tools as much as we can if we are interested in the truth, and we thus want our theories to be as severely tested as possible. No matter how well-tested, however, our theories remain conjectures tempered by rational criticism.

We are also not mandated by reason (in Popper's view of science) to prefer the better-tested theory. It's not the case that we rationally ought to accept the better tested theory. We could for example be super stoked about a poorly tested theory in preference to a better tested one - but the thing to do then is to try and come up with stronger tests of our preferred poorly tested theory, since in the search for truth we should try to test our theories as strongly as possible in order to eliminate error. This is subjective in the sense that our preference for a theory is our decision, but it's not like a preference for an ice cream flavor - we deploy rational evaluation and empirical experiments to the best of our ability in order to try to guess at the truth and eliminate errors, which we do not do in our ice cream preferences. This use of the rational method of criticism in the search for truth is what makes the difference and what makes our decision rational in the sense of using critical reasoning, although this provides no objective justification for our decision and it does not tell us what we rationally ought to believe.

I'm not sure I can follow everything you're saying here, but I'm interested in what you find unconvincing about Popper, if you feel like expounding on it. I hope you're not implying Popper was a logical positivist :)

No adverse events have been reported from mistranslation because the mistranslation has only just now been discovered. But there's plenty to indicate mRNA vaccines are not safe.

According to VAERS, the official US vaccine monitoring program, there have been more reported vaccine-related deaths since the introduction of COVID vaccines than from all the other vaccines in the 30 previous years of monitoring combined. And while VAERS reports are not 100% accurate, they do provide a picture of relative safety, and on that relative measure COVID vaccines stand far apart from every other monitored vaccine (check total vaccine-related deaths by year for all vaccines at https://vaers.hhs.gov/data.html EDIT: Select "Search CDC Wonder", then "VAERS Data Search"; under 1., group search results by "Year Reported"; under 5., select Event category "Death"; press "Send" - the resulting table shows that 70.49% of all reported vaccine-related deaths since 1990 fall in the years 2021 and 2022).

Based on clinical trial data, overall mortality in the vaccine group and the unvaccinated control group was statistically equal for mRNA vaccines, as opposed to adenovirus vaccines, where mortality among the vaccinated was lower - indicating that for mRNA vaccines the vaccine risk offset the protective effect (https://pubmed.ncbi.nlm.nih.gov/37163200/). Put more simply, more people died among the vaccinated than among the unvaccinated in the clinical trials used to approve the mRNA vaccines.

Clinical trial data also shows an increased risk of serious adverse events for mRNA vaccines, outweighing the risk of severe COVID in younger population groups: https://www.sciencedirect.com/science/article/pii/S0264410X22010283.

And those are just a few of the data points we now have indicating that the mRNA vaccines are dangerous, so I'd say the risk has definitely borne out.

Regardless, a medical product that leads to the unplanned production of random proteins within the body would put the company producing it at extreme risk of legal action without the immunities granted to COVID vaccine manufacturers.

I understand what you are saying, although I don't think it's completely true: the VAERS form asks you to report a vaccine-related adverse event, not simply that someone died post-vaccination. Also, old people were regularly given at least flu vaccines prior to COVID, so the effect you describe of old people coincidentally dying after vaccine administration was at least partially present before COVID, so I don't think that this is a sufficient explanation for the massive increase.

Thank you for the thoughtful response! When we get to this level of analysis, I am of course willing to admit that there are many unknowns, and that the data is not sufficient for strong and clear conclusions on mRNA vaccine safety, although I would argue that there are clear indications that serious concerns exist. But it is the lack of willingness to investigate these worrying signals from the data and the blind repetition of the "safe and effective" mantra that is my main cause for concern. If you refuse to look for problems, you won't find any, right?

VAERS, the main monitoring system for vaccine safety, indicates a massive, and I mean MASSIVE, concern regarding the relative safety of COVID vaccines. I phrased my comment on VAERS carefully - it's definitely not 100% reliable, but it shows a massive relative difference in reported vaccine-related deaths since the introduction of COVID vaccines. Is this not cause for concern? Even if only 3% of the post-COVID VAERS reports are real and 97% are bogus (and the "increased awareness" argument is a huge stretch to support such a strong claim), COVID vaccines still cause roughly as many deaths per year as all other vaccines put together cause per year - instead of causing more deaths in about two years than all other vaccines caused in 30 years combined. And if VAERS is complete and utter trash, as you say, isn't that even MORE cause for concern? In that case, we have NO population-level vaccine safety monitoring system of note at all. If you refuse to look for problems, you won't find any, right?
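To make the arithmetic behind that "even if only 3% are real" figure explicit, here is a rough back-of-the-envelope sketch. Only the 70.49% share comes from the VAERS query described above; the window lengths (roughly two COVID-era years versus 30 prior years) are my own simplifying assumptions.

```python
# Back-of-the-envelope check of the "even if only 3% are real" claim.
# Only the 70.49% share is taken from the VAERS query described above;
# the era lengths are rough assumptions for illustration.

total = 1.0                 # all VAERS-reported vaccine deaths, normalized
covid_share = 0.7049        # share of reports from the COVID-vaccine era
covid_years = 2             # rough length of that era in the data
other_years = 30            # rough length of the pre-COVID monitoring window

# Keep only 3% of COVID-era reports as real; keep all pre-COVID reports.
covid_per_year = covid_share * 0.03 / covid_years
others_per_year = (total - covid_share) / other_years

print(covid_per_year >= others_per_year)  # → True
```

Under these assumptions, even discounting 97% of the COVID-era reports, the per-year reported death rate for COVID vaccines still matches or exceeds that of all other vaccines combined.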

As for the second study I brought up (https://pubmed.ncbi.nlm.nih.gov/37163200/), I agree that the clinical trials used to approve the COVID vaccines, which are the only large clinical trials that have been run on them, were not designed to assess all-cause mortality risk from the vaccines, and the sample showing no effect on overall mortality is very small, yes. So where's the follow-up? VAERS is trash, and the trials were not designed to assess overall mortality risk. If you refuse to look for problems, you won't find any, right?

Your position on the severe adverse events risk study is not entirely clear to me based on your response. It's not about pediatric populations, it's that they found a greater increase in severe vaccine-related side-effects (that land you in the hospital) than the reduction in severe COVID events compared with the control group. The COVID vaccines cause more hospitalization-level adverse events than the hospitalizations they prevent from COVID, according to that study (https://www.sciencedirect.com/science/article/pii/S0264410X22010283). The authors call for a harm-benefit analysis for mRNA COVID vaccines, which has never been done. But if you refuse to look for problems, you won't find any, right?

Finally, we have the Nature article finding that the mRNA vaccines produce random proteins. Which ones? What are their effects? Surely Pfizer and Moderna tested whether their vaccines were actually producing what they were supposed to, at some point? Or was this a total surprise, and we "could not have known at the time"? Of course, if you refuse to look for problems, you won't find any.

You requested other sources, so here's the BMJ (top medical journal) desperately calling for follow-up studies on COVID vaccine safety: https://www.bmj.com/content/379/bmj.o2527?fbclid=IwAR3e8Rv7UdOUjx60Vf7CnrtZAcM7rCVxl5IRpT76ngyTokkALHVCbiO3Naw

And I wonder how long the spike protein produced by COVID vaccines keeps being produced? Here's a study finding that it's still being produced 60 days after vaccination: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8786601/ I thought the vaccine cleared out in a week or two, as I was told? Does it keep producing (these random) proteins longer than 60 days? How long are these vaccines active in the body? Who knows! If you refuse to look for problems, you won't find any.

Here's COVID vaccines causing myocarditis (perhaps because production of random proteins by the vax causes an autoimmune response in the heart in those unlucky to have the wrong random proteins produced by the vax?): https://academic.oup.com/eurheartj/article/44/24/2234/7188747?login=false

Here's COVID vaccines causing vaginal bleeding: https://www.bmj.com/content/381/bmj-2023-074778 How? Why? Who knows! There's plenty more studies like this showing worrying signals. Modifying immune response in unknown ways for unclear reasons? Sure: https://www.medrxiv.org/content/10.1101/2023.09.29.23296354v1.full.pdf Causing seizures in children? Yup: https://www.medrxiv.org/content/10.1101/2023.10.13.23296903v1.full.pdf

Yes, you can pick apart any of these studies. They are all limited at least by being fairly small given the relative rarity of these events. None of them are proper clinical trials. But that's because these studies are the only ones that have been done. If you also dismiss population-level monitoring systems like VAERS, you can claim that there is no clear evidence, sure. If you refuse to look for problems, you won't find any. But we do now know that vaccines remain active for 60+ days and that they produce random proteins they are not supposed to (these are lab studies on how the vax works). And various data sources, flawed as they are, indicate strong safety concerns. Never mind that this should have been investigated before giving these vaccines to billions (or coercing people into taking them). The companies are shielded from liability, and politicians will point to the medical community missing or ignoring these issues and say "we could not have known" (although scientists previously considered credible tried to raise concerns, but were sidelined or ostracized). But could we not at least look carefully at the potential issues NOW, before continuing to use this technology that was never deployed in humans before?

Anyway, I hope I've offered some insight on the anti-COVID vax position here. I'll shut up now unless there's something I really need to respond to, since this is the small-scale questions thread :)

I like this discussion, it feels like we're really doing our best to understand each other's position better, as it should be.

Let's start with the easy bit.

I am confused. What is the criterion that a Popperian uses to determine what error even is, without smuggling in a bit of Bayesianism?

A Bayesian would say that even an (in hindsight) erroneous result is still evidence. But with our previous priors, we would (hopefully) decide that the majority of the probability mass is in the result being in error rather than in the hypothesis.

The only criterion for something to count as evidence for a hypothesis is if it is more likely to be seen if the hypothesis is true, and vice versa.

In other words, how are you becoming suspicious without Bayesian reasoning (however subconscious it is)?

That's right! Bayesianism is, on one level, an attempt to formalize exactly this kind of thinking. I'd call it "critical discussion" or "conjecture" or just "reasoning", you can call it Bayesian reasoning, I'm totally fine with that. I think this process is messier than Bayesianism claims, but I also think that plenty of Popperian concepts can do with some formalizing, like "degree of corroboration", "state of the critical discussion", "severity of testing in light of our background knowledge", etc. So on this level I'd say Bayesianism is a worthwhile pursuit of formalizing this kind of thinking, and we can set aside the question of how well it does this for now. I say "on this level" because there are many variants of Bayesianism, and some claim Bayesianism does much more, such as Bayesianism providing a (partial) justification and rational basis for scientific theories instead of induction, which we have agreed it cannot do, i.e. it cannot elevate our theories beyond the status of a guess.
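To make concrete what "formalizing this kind of thinking" amounts to on the Bayesian side, here is a minimal sketch of a Bayesian update. All the numbers are invented for illustration; nothing here comes from a real experiment.

```python
# Minimal Bayesian update: P(H | E) = P(E | H) * P(H) / P(E).
# All numbers are illustrative, not drawn from any real dataset.

def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Probability of hypothesis H after observing evidence E."""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

# Start with a 50/50 prior on a hypothesis, then update on three passed
# tests: each test is predicted with probability 0.9 if the hypothesis
# is true, but would still occur with probability 0.3 if it were false.
p = 0.5
for _ in range(3):
    p = posterior(p, 0.9, 0.3)

print(round(p, 3))  # → 0.964
```

Note that, on my view, this number describes the state of our confidence, not the state of the world: the Popperian point is precisely that no such calculation elevates the hypothesis beyond the status of a conjecture.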

It is with regards to truth and the relationship of Bayesian calculations to external reality that I suspect we disagree, although I'm not quite sure about this. There's definitely still some stuff to clear up here.

You use truth in the sense that something is consistently and robustly reproducible in the observed environment or universe (or more abstract spaces). Sure. I have no beef with that.

No, this is definitely not the way in which I use truth. Like I said, I use truth in the sense of correspondence to reality. Our subjective expectations, reproducibility, and observations are totally irrelevant to truth. The only thing that matters with regard to the truth of a claim is whether our claim corresponds to reality, i.e. whether things really are the way we claim they are. So the claim that moving faster than light is not possible is true only if this corresponds to reality, i.e. if it is really, objectively, impossible to move faster than light in this universe. We can do all kinds of experiments on this, but the only thing that matters with regard to the truth of this statement is whether it is actually possible to move faster than light in reality or not. Of course, we cannot know what is true, but that does not stop our claim from being true if it does in fact correspond to reality, despite it being just a guess. The experiments are attempts to eliminate our errors in our guesses about reality. And while we can never be sure (or be partially justified) in thinking what we believe is true, it can still be true, objectively, if we have guessed right, if it really is impossible to move faster than light. So we cannot be justified (fully or partially) in thinking what we believe is true, but our theories can still be true, objectively, if they correspond to reality whether we know it or not.

In my view, this is what science is interested in: how the universe really works regardless of what we think and the evidence we have. And the best we can do in figuring this out is to guess, and temper our guesses by critical reasoning and empirical testing (which is also guesswork) in order to eliminate errors in our guesses.

Bayesianism often gives up on this goal, and confines itself to only our expectations and related calculations. It thus gives up on saying anything about objective reality, and confines itself to talking about our subjective states of expectation and calculations of probability regarding those expectations. This, for me, is an unacceptable retreat from figuring out objective reality as the aim of science. I'm interested in how the world really is, I'm not interested in our subjective calculations of probability (other than if they can perhaps help in the former).

Would this describe your position though? That we cannot rationally say anything about the external, objective reality, but can calculate our subjective expectations/probabilities using Bayesian reasoning? I'm not quite sure this is your position, although there have been hints that it is and it is common among Bayesians, which is why I was asking questions about the aim of science and the relationship between Bayesian calculations and reality/truth.

Probability and truth can be converted

And I suspect this is another point of disagreement - I don't think this is right. Having certain truth would be great - if we could be certain about something being true, we could infallibly classify it as true. Awesome! However, we have agreed that this is not possible. But say we can calculate something is probable using Bayesianism. This tells us exactly nothing about whether it is actually true or not, about whether reality really works that way. Certain truth implies truth, yes, but probable truth does not imply truth. Like I said, if you are mistaken about the truth of the belief in question, i.e. if reality does not work that way, your belief being probable does nothing to mitigate your mistake. You still believe something false. If moving faster than light is possible in reality, you are wrong in believing it is not, no matter how probable your belief may be. You are equally wrong as a Popperian who would just say he guessed wrong, despite your probability calculation. And if the belief in question is true, if reality is such that moving faster than light is not possible, this belief is just as true for the Popperian who only claims it to be a guess and does not reference probability, regardless of whether your belief is probable or not according to Bayesianism. I see absolutely no logical connection, i.e. possibility of conversion, between probability and truth. Reality does not care about our probability calculations.

To sum up, I have no issues with Bayesianism as a theory of decision-making. I see science as guesswork tempered by reason and experience, aiming at objectively true theories about reality - although we can never know or be (partially or fully) justified in believing them to be true: we have to guess, and eliminate our errors by experiments, which are also guesswork. I think we may disagree with regard to truth and the aim of science, but I'm not totally clear on your position here. I also think we may disagree on the connection between probability and truth/external reality.

Thank you for the response, but it doesn't really address what I was saying. You claimed that "if the vaccines caused noticeable health risks it would be absurdly easy to see a correlation. Vaccination=higher mortality. That correlation isn't there." and that "This is one of those conspiracies that's really hard not to be condescending about because it's just so thinly supported."

I showed you that clinical trials (the best kind of data!) found a significant increase in serious adverse events after vaccination (health risk!) and increased deaths in the vaccinated group compared with controls (vaccination=higher mortality).

Your response that cites a study with different results may indicate that the situation is complex, with contradictory studies supporting different positions, but it does not show that the "correlation isn't there" - it's there in the randomized clinical trials (RCTs), which represent the best type of evidence. Your initial claim was very, very strong, and it is what I'm contesting. You did not just say that there are contradictory studies on COVID vaccine health risk, you said that the claim of health risk is so thinly supported it's hard not to be condescending about that claim, when in fact the clinical trials that were used to approve the vaccines show both a serious health risk and that vaccination=higher mortality.

The study you quoted here (https://www.cdc.gov/mmwr/volumes/70/wr/mm7043e2.htm) is not a RCT and thus immediately weaker evidence than the clinical trials, but let's look at it more closely. The study has a very surprising result, namely that vaccine recipients had lower non-COVID mortality than the unvaccinated. Unless you want to claim that the COVID vaccine magically protects against all forms of death, the study has missed something or introduced some kind of bias in their study design that skewed the results.

One explanation that immediately springs to mind when reading the study stems from this decision in the study design:

"To ensure comparable health care–seeking behavior among persons who received a COVID-19 vaccine and those who did not (unvaccinated persons), eligible unvaccinated persons were selected from among those who received ≥1 dose of influenza vaccine in the last 2 years."

So they only included unvaccinated persons who were vaccinated against the flu but not against COVID. The majority of the healthy population does not vaccinate against the flu - some healthy people do, of course, but it's primarily given to older or sick and thus vulnerable people. Among the people who were vaccinated against the flu due to being sick, but did not vaccinate against COVID, some proportion did not receive COVID vaccination because they had gotten sicker in the meantime, and thus COVID vaccination was deemed too risky. It is possible that this group was large enough (you don't need that many such people to skew the results) to lead to the result that unvaccinated people died from non-COVID causes more often than the vaccinated - because a proportion of the sample was unvaccinated due to being too sick to receive the vaccine, thus resulting in more deaths among the unvaccinated group.
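The selection effect I am describing can be shown with a toy simulation. Assume the vaccine has no effect whatsoever on non-COVID mortality, but sicker people are both more likely to have had a flu shot and more likely to be steered away from COVID vaccination. Every number below is invented; the point is only that the study's inclusion rule can manufacture the observed mortality gap on its own.

```python
import random

random.seed(0)

def simulate(n=200_000):
    """Toy model: non-COVID death depends ONLY on health, never on vaccines."""
    vax_deaths = vax_n = unvax_deaths = unvax_n = 0
    for _ in range(n):
        sick = random.random() < 0.2                      # 20% in poor health
        flu_shot = random.random() < (0.8 if sick else 0.3)
        # The sickest are advised against (or unable to get) COVID vaccination.
        covid_vax = random.random() < (0.4 if sick else 0.8)
        died = random.random() < (0.05 if sick else 0.005)
        if covid_vax:
            # Note: flu-shot status of the vaccinated is never checked,
            # mirroring the study design criticized above.
            vax_n += 1
            vax_deaths += died
        elif flu_shot:
            # The study's "unvaccinated" group: flu-vaccinated, COVID-unvaccinated.
            unvax_n += 1
            unvax_deaths += died
    return vax_deaths / vax_n, unvax_deaths / unvax_n

vax_rate, unvax_rate = simulate()
print(unvax_rate > vax_rate)  # → True: the selected "unvaccinated" die more often
```

Even though vaccination has zero causal effect on death in this model, the selected unvaccinated group shows several times the non-COVID mortality of the vaccinated group, purely because the inclusion rule concentrates sick people in it.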

Of course, this explanation is not a certainty, but the decision to only include flu vaccine recipients in the unvaccinated group, without controlling for initial health status and with no mention of ensuring that the vaccinated were also flu vaccine recipients, was very questionable. It may have skewed the results of this study, leading to the result that COVID vaccines seem to magically protect from non-COVID death. Even if this is not the explanation of this weird result, the results of this study just make no sense - something is happening that the study doesn't account for, even if the result were to be correct. I therefore rate the reliability of this study as fairly low - definitely lower than the reliability of the RCTs that show vaccination=higher mortality.

To summarize, the study you quoted here has a weird result that indicates some unknown factor at play that is skewing the results, and they made a questionable decision in study design that they did not match in both samples.

In any case, randomized controlled trials are stronger evidence, and they do indeed show vaccination=higher mortality and that there's a health risk from vaccination (16% increase in serious adverse events).

This contradicts your original very strong claim that there is no correlation indicating a health risk from COVID vaccines and that believing that is something it's hard not to be condescending about because it's so thinly supported. That claim is what I'd be interested in hearing you defend.

Bayesianism often gives up on this goal, and confines itself to only our expectations and related calculations. It thus gives up on saying anything about objective reality, and confines itself to talking about our subjective states of expectation and calculations of probability regarding those expectations. This, for me, is an unacceptable retreat from figuring out objective reality as the aim of science. I'm interested in how the world really is, I'm not interested in our subjective calculations of probability (other than if they can perhaps help in the former).

My stance is that this isn't a form of unwarranted cowardice or retreat, but a pragmatic acknowledgement that we are subjective observers forced to grapple with evidence acquired through sensoria. There is no scope for an actual bird's eye view, we must trudge through the territory and map it as we go.

I think this is our first central disagreement, then, if you've accepted what I've said is for me an "unacceptable" retreat from talking about objective reality. In my view, yes, anything we say about objective reality is guesswork, as we have agreed, but it is this objective reality that interests us - we are trying to figure out the bird's eye view, although we can never be sure (or partially justified) that we have guessed it correctly. While you, if I'm reading you right, confine yourself to talking about our subjective expectations/calculations of probability.

Thus, is science useful? Absolutely. It can never provide perfect certainty, but we can certainly operate without that, and in practice we do. We can converge ever closer to 100% confidence that our map faithfully represents the territory, and that's good enough for me.

But what does this confidence achieve - what is its relation to external reality? Our subjective calculation of probability and our confidence do not imply the objective truth of the belief in question - there is no logical connection between the two. I'm interested in whether what I believe is objectively true - whether the map actually does match the territory despite being just a guess - and not in our subjective calculation of probability, which tells us nothing about actual, objective truth. I fail to see how to convert claims of subjectively probable truth into claims about objective truth, in other words.

EDIT: Let me put it this way: A Popperian would say "I believe this is the objective truth, but I am only guessing - help me by criticizing my guess and testing it as best we can so we can reject it if it is objectively false, although this rejection will also be just a guess." A Bayesian would say "Based on the supporting evidence and my priors, I calculate a high probability for this hypothesis". At that point, they will either say nothing about the objective truth of that belief, which for me is an unacceptable retreat from talking about objective truth and reality, or they will say "therefore, this belief is objectively true". In the latter case, it is this "therefore" that I object to - I don't think it holds as it then runs into Humean objections, and thus the Bayesian calculation has not added anything to the Popperian's claim.

And this is our second major disagreement, I believe - Popper and I think that the role of evidence is to contradict scientific theories, while you think its role is to support them with regard to our subjective probability calculations. I fail to see the connection between these subjective probability calculations and the external, objective reality which I am interested in. I'm not interested in maximizing our subjective probability, or maximizing our expected utility. I'm interested in correctly guessing the objective truth, and the actual utility of our beliefs rather than their expected utility. In this, evidence plays a negative role, in my view, i.e. one of assisting in error elimination (which is of course also guesswork). Positive evidence does nothing to adjust our guesses, but negative evidence does adjust them by contradicting and thus falsifying them (if we accept the evidence as true, of course - it may well be that the error resides in a flaw in the experiment as in the neutrino example, rather than in the theory being tested; this is also guesswork).

Overall though, I think we are in agreement about quite a lot. Science is guesswork, rational in the sense of using reason and experience, but not justified or rational in the sense of being rationally established/warranted/justified, whether partially or fully. Where we disagree is that you confine yourself to talking about our subjective calculations of probability, whereas I explicitly aim at objective truth through guesswork. You think the role of evidence in this is positive - it supports the theory in question, thus raising its probability. I think this kind of support has no bearing on the objective truth I am interested in and think the role of evidence is negative - it can contradict our theories, and thus lead us (hopefully correctly, but with no guarantees, partial or otherwise) to reject those that are objectively false, retaining those that are objectively true.

If there's anything left to clarify, it may be your position on converting claims about the high Bayesian probability of a belief into claims about the objective truth of that belief, where I fail to see any connection between the two. My position is that we simply make conjectural claims about the objective truth and try to eliminate errors in them, all of which is guesswork, with the role of evidence being only to possibly contradict our guesses. In this view, Bayesian calculations may be correct under Bayesian rules, but they are superfluous in our search for objective truth, other than perhaps as an attempt at formalizing our reasoning - where they, in my opinion of course, miss the mark by failing to talk about the external objective reality we are interested in, instead focusing on our subjective confidence.

Is this a fair summary of our positions? Feel free to correct me where I have misunderstood you. If I have understood you well enough, this might be a decent place to stop, and I'll let you have the last word. Either way, I've thoroughly enjoyed our talk, and learned a lot about Bayesianism and reaching mutual understanding with a Bayesian, clarifying my understanding of my own position in the process as well. I'm of course open to further questions or criticism from you or readers like @marten if you have any.

If you demand 100% confidence that the laws of physics are "universal" and timeless, you're SOL unless you assume the conclusion in advance. But we can approach arbitrarily close, and the fact that modern technology works is testament to the fact that we can be goddamn bloody confident in them.

How can we approach arbitrarily close? As stated, this does nothing to address Hume's argument against induction, which holds equally whether you are aiming for probability or for certainty, and does not address the skeptical regress argument that every reason you can give is either based on something else or based on nothing, leading to infinite regress. I don't see how Bayesianism helps with this. Justification is not to be had, at any level of confidence or probability. Which is why you need Popper, who explained how you can maintain the use of logic and reason and maintain truth as the aim of science, while also accepting Hume's and the skeptical arguments as correct and consequently discarding justification altogether.

Another issue Bayesianism often runs into is that many variants of Bayesianism give up on truth - I'm not interested in the confidence we can assign to a theory given our priors and the evidence, I'm interested in whether the theory in question is actually true. Even if we could be justified in Bayesian calculations of probability/confidence (which we can't be), this would tell us exactly nothing about whether this probable theory is actually true, which is what we are really interested in. There is no logical connection between probable truth and truth (just because something is probably true, it need not be true), and Bayesianism often focuses on subjective calculations of probable truth and abandons actual truth as the goal of science. But if Bayesianism aims at truth rather than solely at subjective calculations of confidence unmoored from reality, if it is interested in what is true rather than just what we can be confident in, it is in no better a position to provide justification than any other epistemology.

Like I said, if both theories A and B have passed all their tests, the evidence says nothing about them. We are free to tentatively accept them as true. We don't have to, though - my guess might be that quantum computing theory is not true, or it might be that I think that quantum computing has been only weakly tested and I'm not willing to bet on it working for my missile defense system. That's fine, but that is the part where I conjecture/guess at the truth. We don't disagree about my mental process, it's just that I think it's conjectural and not warranted by the evidence - the evidence can't tell me what to think and which bet to make and which hypothesis to prefer, the evidence can only contradict a hypothesis and thus force me to reject it if I accept the evidence as true. Everything else is me making my best guess. I'm free to describe my mental state as "very confident" in that process, but that describes my state of mind, not the state of the evidence.

I'll just poke in to say that I think that the mission of science is to discover the actual, literal truth. I've hopefully made this clearer in my response in our conversation below, so I'll just refer to that instead of repeating myself here.

To add content to this post, I'd say that many epistemological perspectives do indeed give up on truth in favor of usefulness or, in some variants of Bayesianism, in favor of our probability estimates. I don't care whether a scientific hypothesis is probably true, I care whether it is actually true - and if it is true, it will also be useful.

If they believe the plane is safer than the teleporter, and their goal is to maximize safety, then by nice clean deductive logic they should choose the plane, given their premises and their acceptance (tentative, without justification) of the rules of deductive logic. The premise that the plane is safer is a conjecture, though; it is their best guess, which they have critically examined using reason, but which is not justified or warranted or something they "rationally ought" to believe. Their decision is rational in the sense that it makes use of the faculties of reason and logic to make their choice, but it is not rational in the sense of being justified or having a rational basis, as these things cannot be had.

In my view, the reasoner could make either decision rationally, as long as they critically examine it using reason and it represents their best guess after rational deliberation and critical evaluation. Their rationality is in the method they have used to make their best guess, not in the contents of their beliefs, which cannot be "rational" in the sense of being justified. The reasoner could be irrational if they don't use reason to critically evaluate their choices and instead flip a coin to make their decision, and they could be irrational if they use faulty logic, for instance by thinking: "Planes are safer than teleporters. My goal is to maximize safety. Therefore, I'll use the teleporter".

Ok, I was going for a plain language simple answer, but you obviously know your stuff. Tarski's STT in the Popper/Miller interpretation is the theory of truth I adhere to, then.