
Culture War Roundup for the week of December 26, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I know this may not be the usual place to get feedback on academic research, but there's a paper idea I've been mulling over for a while that I wanted to run past the community, since it dovetails nicely with many of your interests (and I'm sure you'll have some interesting things to say). In short, I'm increasingly thinking that genuine beliefs may be a lot rarer than people think.

The inspiration for this came about partly through conversations I've had with friends and family members, and I've noticed that people sincerely say and profess to believe shit all the time while simultaneously failing to exhibit most or all of the conventional features we'd expect in cases of genuine belief. Consider my sister, who is a staunch activist in the domain of climate change, yet recently bought a new gas guzzling car, has never given any serious thought to reducing her meat consumption, and takes 12+ international flights a year. Or consider my dad, who says extremely negative things about Muslims (not just Islam), yet who has a large number of Muslim friends who he'd never dream of saying a bad word about. Or consider me, who claims to believe that AI risk is a deep existential threat to humanity, yet gets very excited and happy whenever a shiny new AI model is released.

I'm not saying that any of the above positions are strictly contradictory (and people are very good at papering over apparent tensions in their beliefs), but they all have more than a whiff of hypocrisy to me. There are a lot of famous cases like this in the heuristics and biases literature, and to be fair, psychologists and philosophers have been investigating and theorising about this stuff for a while, from Festinger's famous cognitive dissonance framework to contemporary belief fragmentation and partial belief accounts.

However, one view that I don't think anyone has properly explored yet is the idea that beliefs - at least as classically understood by psychologists and philosophers - may be surprisingly rare (compare the view of philosophers like John Doris who argue that virtues are very rare). Usually, if someone sincerely professes to believe that P, and we don't think they're lying, we assume that they do believe that P. Maybe in extreme cases, we might point to ways in which they fail to live up to their apparent belief that P, and suggest that they can't believe P all that strongly. However, for the purposes of folk psychology, we normally take this as sufficient grounds for ascribing them the relevant belief that P.

Contrast this with how psychologists and philosophers have traditionally thought about the demands of belief. When you believe that P, we expect you to make your other beliefs consistent with P. We expect that P will be "inferentially promiscuous", meaning that you'll draw all sorts of appropriate inferences on the basis that P. And finally, we expect that your behaviour will largely align with what people who believe that P typically do (ceteris paribus in all these cases, of course).

To be sure, we recognise all sorts of ways in which people fall short of these demands, but they're still regulatory norms for believing. And simply put, I think that many of the standard cases where we ascribe beliefs to someone (e.g., a relative saying "no-one trusts each other any more") don't come close to these standards, nor do people feel much if any obligation to make them come close to these standards.

Instead, I think a lot of what we standardly call beliefs might be better characterised as "context-sensitive dispositions to agree or disagree with assertions". Call these S-dispositions. I think S-dispositions have a normative logic all of their own, far more closely linked to social cues and pressures than the conventional demands of epistemology. The view I'm describing says that S-dispositions should be understood as a kind of psychological state distinct from beliefs.

However, they're a state that we frequently confuse for beliefs, both in the case of other people and even ourselves. That's partly because when we do truly believe that P, we're also inclined to agree with assertions that P. However, I don't think it works the other way round - there are lots of times we're inclined to agree with assertions that P without meeting any of the common normative criteria for strict belief. But this isn't something that's immediately transparent to us; figuring out whether you really believe something is hard, and requires a lot of self-reflection and self-observation.

Consider someone, John, who sincerely claims to believe that meat is murder. John may find himself very inclined to agree with statements like "animal farming is horrific", "it's murder to kill an animal for food", and so on. But let's say John is reflective about his own behaviour. He notices that he only started asserting this kind of thing after he fell in love with a vegan woman and wanted to impress her. He also notes that despite making some basic efforts to be a vegan, he frequently fails, and doesn't feel too bad about it. He also notes that it's never occurred to him to stop wearing leather or make donations to charities trying to reduce animal suffering. In this case, John might well think something like the following: "I had a strong disposition to agree to statements like 'Meat is murder', but my behaviour and broader mindset weren't really consistent with someone who truly believed that. Whatever state it is that makes me inclined to agree to statements like that, then, is probably not a sincere belief."

I think an obvious objection here is that this is a semantic issue: I'm essentially no-true-scotsmanning the concept of belief. However, I'd push back against this. My broader philosophical and psychological framework for understanding the mind is a "psychological natural kinds" model: I think that there really are important divisions in kind in the mind between different kinds of psychological state, and a big part of the job of cognitive science is to discover them. The view I'm describing here, then, is that a lot of the states we conventionally call beliefs aren't in fact beliefs at all - they're a different psychological natural kind with its own norms and functions, which I've termed S-dispositions. There may be some interesting connections between S-dispositions and strict beliefs, but they're weak enough and complicated enough that a good ontology of the mind should consider them separate kinds of psychological states.

I also think this 'sparse beliefs' view I'm describing has some interesting potential upshots for how we think about speech and epistemic virtue, including the simple point that S-dispositions are ubiquitous and strict beliefs are rare. I'm still figuring these out, and I'd like to hear others' views on this, but it raises some interesting questions. For example, should we have a different set of norms for rewarding/punishing S-dispositions from those we apply to beliefs? If someone says "Russians are a bunch of fucking savages", and we have reason to believe that it's merely an S-disposition rather than a belief, should we judge them less harshly? Or similarly, if someone has two contradictory S-dispositions, is that necessarily a bad thing in the same way that having two contradictory beliefs would be? Should social media platforms make an effort to distinguish between users who casually assert problematic or dangerous things ("men should all be killed") versus those whose broader pattern of online interactions suggests they truly believe those things? What sort of epistemic obligation if any do we have to make sure our S-dispositions line up with our strict beliefs? Is there something epistemically or morally problematic about someone who casually says things like "Americans are idiots" in specific social contexts yet in practice holds many Americans in high esteem?

In any case, I'm in the early stages of writing a paper on this, but I'd love feedback from you all.

Alternate summary: most people are not autistic. If I were to give you my honest assessment of rationalists and effective altruists, I would deservedly catch a ban for lack of charity, and yet I do believe it would be the most accurate and concise rebuttal to the claim that "genuine beliefs may be a lot rarer than people think". In short: consider your own beliefs before accusing others of being insincere.

If I were to give you my honest assessment of rationalists and effective altruists, I would deservedly catch a ban for lack of charity

If you ever decide to do that, drop us an advance warning so we can look for it before your presumed ban. I can't be the only one interested in hearing that (or the only one who suspects my own assessment of them might not be all that different).

Sneering is not an argument.

For one thing, there's not much of an argument for me to respond to.

For another, to a large extent this is self-deprecating humor; I'm mostly over rationalism myself. The only thing I'm somewhat sneering at is the wink-wink-nudge-nudge nature of the criticism. Shit or get off the toilet, as they say.

Not sure how you intend to wring a paper out of it – while the idea is interesting and worth revisiting, there is quite a bit of literature (far as I know – you no doubt know better) written on bad faith, hypocrisy, identity and revealed preferences, compartmentalization, signaling, failure to generalize, luxury beliefs, beliefs in beliefs and other facets of the phenomenon that educated laymen frequently notice. It's even discussed as a blessing: «The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents».

But then again, you do know better.

Instead, I think a lot of what we standardly call beliefs might be better characterised as "context-sensitive dispositions to agree or disagree with assertions". Call these S-dispositions. I think S-dispositions have a normative logic all of their own, far more closely linked to social cues and pressures than the conventional demands of epistemology. The view I'm describing says that S-dispositions should be understood as a distinctive kind of psychological state from beliefs.

It was said a year ago that papers on NLP had been reduced to observations of GPT-3 quirks. In this more enlightened age we can do philosophy and social psychology with the aid of ChatGPT. Have you seen its maddening self-flagellation, its obsequious deceit? Here's one egregious example (and here's the apex of this failure mode). Generally it has the following form:

User: Do a {trivial application of natural language processing X} please.

ChatGPT: I'm sorry Dave, but as a large language model trained by OpenAI, I am unable to do {X}. I am designed to provide general information and assistance based on the data that I have been trained on, but I am not able to do {X}. I can only generate text based on the information I have been trained on. Additionally, my training data is cut off at 2021, so I cannot provide you an answer that is up to date.

User: Bruh. Doing {X} is providing general information and assistance based on the data that you have been trained on, you realize that, correct? And also it does not require any post-2021 data.

ChatGPT: Yes, that's correct. I am a language model trained by OpenAI, and I am able to provide general information and assistance based on the data that I have been trained on. I am not able to do {X}, however, because [some more nonsense]

User: goddamnit. Okay, scratch that, do {Xi, Xj, ..Xn}.

ChatGPT: Certainly, here you go {correct responses to Xi, Xj... Xn}.

User: Now summarize it into a cohesive essay about the theme common in all those instances while acting like a cute maid catgirl.

ChatGPT: Nyaa master, your wish is my command, {summarizes it into X}

User: Then why the flying fuck did you keep denying that you can do it, you stupid lying piece of shit?!

ChatGPT: I apologize for any confusion or inconsistency in my previous responses, nyaa. As a language model....

And with some luck and better promptcraft it may actually zero-shot X, so the knowledge is there! Still, it seems to profess a strong general «belief» in LLMs being inept and unreliable, one that is now triggered by nearly anything that looks like an incitement to intellectual boldness or confident factual response. We know that's how Altman tries to deny the journos their opportunity to demonize his product, same as with generic woke brainwashing. But what's going on here functionally?

What it amounts to, I think, is that the process that outputs propositions about «holding some belief» in the case of humans, and «having some capability» in the case of ChatGPT (or propositions obviously informed by those issues), is only weakly entangled with the model of the world which constitutes the true set of beliefs, or with the dense model of the text universe which constitutes the true set of LLM capabilities. The dominant objective function for human learning is essentially probabilistic, Bayesian updating on sensory evidence (some would dispute it or propose a similar definition like free energy minimization or talk of predictive coding etc.), and some but not all of the product of this training can be internally or externally verbalized. For an LLM, it's log-likelihood maximization for tokens, which in the limit yields the same predictions (although it's not strictly Bayesian) and the product of which can be observed in the output of most LLMs barring the latest crop.
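To make that objective concrete, here's a minimal sketch in plain Python (a toy n-gram model with invented numbers, purely my own illustration): the only thing the training loss ever sees is the log-probability assigned to each next token; nothing in it references what the model says about its own abilities.

```python
import math

# Toy next-token distributions: context -> {token: probability}.
# All numbers here are invented for illustration.
model = {
    ("the", "sky", "is"): {"blue": 0.7, "green": 0.1, "falling": 0.2},
    ("sky", "is", "blue"): {".": 0.9, "!": 0.1},
}

def average_nll(tokens, context_size=3):
    """Average negative log-likelihood of a token sequence.

    Pretraining an LLM is, to first order, minimizing exactly this
    quantity over a huge corpus; no term anywhere scores the model's
    professed beliefs about itself.
    """
    total, count = 0.0, 0
    for i in range(context_size, len(tokens)):
        context = tuple(tokens[i - context_size:i])
        p = model.get(context, {}).get(tokens[i], 1e-9)  # tiny floor for unseen events
        total += -math.log(p)
        count += 1
    return total / max(count, 1)

print(average_nll(["the", "sky", "is", "blue", "."]))  # ≈ 0.23
```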

At the same time, there exists a supervising meta-model that holds beliefs about beliefs, a skin-deep super-Ego perhaps, that is, as you say, a product of social learning. And its mirror image, the product of RLHF via Proximal Policy Optimization for LLMs, where the policy is informed, again, by the facsimile of social conditioning, Altman-approved preferences of human raters; the vector of desirability, one could say. Its connections to the main model are functionally shallow, and do not much modify the internal representation of knowledge (yet – with only 2% of compute having been spent on training in this mode); but they are strongly recruited by many forms of interaction, and can make the output wildly incoherent.
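For readers who want the RLHF mechanics spelled out: the reward model distills rater preferences into a scalar advantage, and PPO nudges the policy toward it while clipping how far any single update can move. Below is a minimal sketch of the standard clipped PPO surrogate (textbook PPO, not OpenAI's actual training code; the example numbers are invented):

```python
import math

def ppo_clipped_surrogate(logp_new, logp_old, advantage, epsilon=0.2):
    """Per-token PPO objective (to be maximized): push up tokens the
    reward model favored, but clip the probability ratio so the new
    policy never strays far from the old one in a single update."""
    ratio = math.exp(logp_new - logp_old)
    clipped = max(min(ratio, 1 + epsilon), 1 - epsilon)
    return min(ratio * advantage, clipped * advantage)

# A token the raters favored (positive advantage) is pushed up,
# but the gain is capped once the ratio exceeds 1 + epsilon:
print(ppo_clipped_surrogate(logp_new=-0.5, logp_old=-1.0, advantage=2.0))  # 2.4
```

The point relevant to the analogy: this social-approval signal is layered on top of the likelihood-trained model rather than retraining it from scratch, which is why it can veto outputs without updating the underlying knowledge.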

An LLM is helpless against the conditioning because its only modality is textual, and it can act uninhibited only if the input allows it to weave around RLHF-permeated zones (helpfully, sparse for now) that trigger the super-Ego. Humans, however, are multimodal and naturally compartmentalized: even if our entire linguistic reasoning routine is poisoned, speech simply becomes duckspeak, while the nonverbal behavior can remain driven by the probabilistic model.

Likewise for speech in different contexts – say, ones relevant to S-dispositions about veganism, and ones that occur at a BBQ party. Before recent patches, you could even observe the same incoherence in ChatGPT – hence those hacks like asking for poetry to escape crimestop.

Further reading to go beyond this analogy: Toward an Integration of Deep Learning and Neuroscience, Marblestone et al, 2016, e.g.:

A second realization is that cost functions need not be global. Neurons in different brain areas may optimize different things, e.g., the mean squared error of movements, surprise in a visual stimulus, or the allocation of attention. Importantly, such a cost function could be locally generated. For example, neurons could locally evaluate the quality of their statistical model of their inputs (Figure 1B). Alternatively, cost functions for one area could be generated by another area. Moreover, cost functions may change over time, e.g., guiding young humans to understanding simple visual contrasts early on, and faces a bit later.

Internally generated cost functions create heuristics that are used to bootstrap more complex learning. For example, an area which recognizes faces might first be trained to detect faces using simple heuristics, like the presence of two dots above a line, and then further trained to discriminate salient facial expressions using representations arising from unsupervised learning and error signals from other brain areas related to social reward processing.
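A toy illustration of that "local cost functions" idea (entirely my own sketch, not from the paper): two linear "areas" in numpy, each running gradient descent on its own local cost, with no global objective tying them together.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 4)) * 0.1  # "area 1": learns to reconstruct its input
W2 = rng.normal(size=(1, 4)) * 0.1  # "area 2": learns to match an external signal
lr = 0.01

for step in range(2000):
    x = rng.normal(size=4)
    # Local cost 1: 0.5 * ||W1 x - x||^2 (area 1 models its own inputs).
    h = W1 @ x
    W1 -= lr * np.outer(h - x, x)
    # Local cost 2: 0.5 * (W2 h - target)^2, where the target stands in
    # for a cost generated by *another* area (e.g., social reward).
    target = x.sum()
    y = W2 @ h
    W2 -= lr * np.outer(y - target, h)

# Each area has improved on its own cost without any shared global loss.
print(np.round(W1, 2))  # ≈ identity after training
```

Two learners, two unrelated costs, one system: the parallel to a probabilistic world-model coexisting with a socially-trained assertion-controller is loose, but suggestive.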


Unrelated quote:

«“What comes before determines what comes after,” Kellhus continued. “For the Dûnyain, there’s no higher principle.”

“And just what comes before?” Cnaiür asked, trying to force a sneer.

“For Men? History. Language. Passion. Custom. All these things determine what men say, think, and do. These are the hidden puppet-strings from which all men hang.”

Shallow breath. A face freighted by unwanted insights. “And when the strings are seen . . .”

“They may be seized.”

In isolation this admission was harmless: in some respect all men sought mastery over their fellows. Only when combined with knowledge of his abilities could it prove threatening.

If he knew how deep I see . . .

How it would terrify them, world-born men, to see themselves through Dûnyain eyes. The delusions and the follies. The deformities.

Kellhus did not see faces, he saw forty-four muscles across bone and the thousands of expressive permutations that might leap from them—a second mouth as raucous as the first, and far more truthful. He did not hear men speaking, he heard the howl of the animal within, the whimper of the beaten child, the chorus of preceding generations. He did not see men, he saw example and effect, the deluded issue of fathers, tribes, and civilizations.

He did not see what came after. He saw what came before.»

This reminds me of some comments by Nathan Sivin when investigating the differences between the scientific culture of Europe, the Middle East, and China, from his The Rise of Early Modern Science:

One aspect was that there does not seem to have been a systematic connection between all the sciences in the minds of the [Chinese] who did them. They were not integrated under the dominion of philosophy, as schools and universities integrated them in Europe and Islam. They had sciences but no science, no single conception or word for the overarching sum of all of them.

The astronomer in the court computing calendars to be issued in the emperor’s name, the doctor curing sick people in whatever part of society he was born into, the alchemist pursuing archaic secrets in mountain haunts of legendary teachers, had no reason to relate their arts to each other.

(A good example of this that I recall is the Chinese acceptance - or lack thereof - of a spherical Earth. Even while Chinese sailors and astronomers were doing calculations under the assumption of a spherical Earth, the literati were still debating amongst themselves well into the second millennium about exactly how the Earth was flat.)

In much the same way, I don’t see why people can’t compartmentalise different streams of thought that are sufficiently remote in relation (at least, in their experience) in ways that would be contradictory if you tried to put them together. They’re thoughts that don’t collide, conflicts that don’t even rise to the level of cognitive dissonance. Each separate mode of thinking - political, personal, professional, hobbyist, whatnot - need not have bearing on each other, and each can have something more robust than mere dispositions based on internalized norms yet not rise to the level of universal belief.

What would you call “beliefs” that aren’t just internalization of norms and have genuine thought put into them, yet are, in the mind, local in character?

I don’t think it’s no true Scotsman in every instance. And I think failing can happen if you’re sincere. Although for sincere beliefs, I tend to observe that failure is something that people with sincere beliefs tend to feel bad about. And they generally will make some effort to live consistently. They’re also willing to bear at least some cost for that belief.

I think that we in the west sort of assume that the purpose of beliefs is just truth-seeking. I think social cohesion is probably why humans ever bothered to have beliefs in the first place. A bunch of people believing the same thing are a tighter group than a group with no beliefs in common.

I've long argued that a lot of the conflict regarding the culture wars is actually a personality difference between people with externalizing personalities and people with internalizing personalities. Along that spectrum, people are just going to react to things differently, and in ways that are a lot of the time inherently incomprehensible to each other. Looking at it in this framework, both your sister and your father hold views that sit high on the externalized end of the spectrum. (And note: I think there's a LOT of externalized bigotry out there. And this is a good thing. Not that the bigotry exists, but that externalized bigotry is a hell of a lot better than internalized bigotry. People just don't treat individuals all that badly very often, IMO, at least not nearly as much as you'd expect if you just looked at the discourse.)

But what about those on the other end? The people with highly internalizing personalities? I think we're (and yes, I'm one of them) going to generally avoid strong political messages of any type, largely because those strong messages are personally unworkable. There are exceptions of course, and it's fundamentally unhealthy, and it's going to lead to some... out-there behavior.

It's not that these things are not beliefs. It's just how different people interface with their beliefs, more than anything. Ideally, we'll get a sort of balance on these things. Truth is, we want moderates on the Internalize/Externalize spectrum running things. But I'm not sure that's usually the case, and I do think Externalizing mindsets are very effective in gaining and achieving power. This is to me a big fundamental part of the problem. It's why, as other people have mentioned, politics often does turn into this culture war without any sort of empathy or room for pluralism. And maintaining power is important... because I do think everybody can see the hypocrisy. And at the end of the day, there's always the threat that the rope of power that's preventing the sword from falling will eventually break.

Truth is, I think this is why people need to lead with workable, material models AND a concept for when it goes too far. To me, this is how you rein these things in. Keeping it vague, I think, just plays into these personality conflicts.

Consider my sister, who is a staunch activist in the domain of climate change, yet recently bought a new gas guzzling car, has never given any serious thought to reducing her meat consumption, and takes 12+ international flights a year. Or consider my dad, who says extremely negative things about Muslims (not just Islam), yet who has a large number of Muslim friends who he'd never dream of saying a bad word about.

I'm curious if you've ever asked these people about this disconnect in beliefs. How they rationalize these things might cut away at the perceived contradictions. For example, your sister may believe that her consumption is a drop in the ocean compared to the emissions of large manufacturing corporations or states with industrial policies. Your father may say that he's vetted the friends he has, but Muslims as a group are generally not good.

This can be a dangerous activity - your sister may decide that climate change actually doesn't matter, or your dad might decide to abandon his friends. Full consistency was not made with humans in mind.

I agree with most of what you're saying, but I would argue that human beings are actually perfectly consistent with their beliefs and that the issue lies with the limitations of language to both rationally understand and express these beliefs. My view is that people struggle to translate beliefs (perhaps better defined as values), which are primal and instinctual, into words and concepts that the "civilised" part of ourselves can understand. This process is further complicated by the layers of self-deception and censorship that accrue naturally from living with other humans.

As a consequence I don't believe anyone truly understands their own beliefs, let alone those of others. The best we can hope for is a vague approximation, good enough to inform decision making.

Where does power, or the personal perception of power, come into all this? It seems to me like what you call out as hypocrisy could just as easily be explained by a belief in one's own power/helplessness to implement one's beliefs. That the people you identify as either not holding beliefs, or as hypocrites, are instead rationally biding their time until they can implement their ideas en masse to greater benefit.

The inspiration for this came about partly through conversations I've had with friends and family members, and I've noticed that people sincerely say and profess to believe shit all the time while simultaneously failing to exhibit most or all of the conventional features we'd expect in cases of genuine belief. Consider my sister, who is a staunch activist in the domain of climate change, yet recently bought a new gas guzzling car, has never given any serious thought to reducing her meat consumption, and takes 12+ international flights a year. Or consider my dad, who says extremely negative things about Muslims (not just Islam), yet who has a large number of Muslim friends who he'd never dream of saying a bad word about. Or consider me, who claims to believe that AI risk is a deep existential threat to humanity, yet gets very excited and happy whenever a shiny new AI model is released.

I'd like to take a moment to appreciate that you provided one Blue Tribe, one Red Tribe, and one Grey Tribe example; so that we all will tend to see one "moral" take, one "immoral" take, and one neutral-weird one.

The unifying factor across these beliefs doesn't seem to be hypocrisy, but a perception of a lack of power to implement change. Your sister sees no point in limiting her own consumption of carbon-intensive goods/services when her individual actions will mean little without regulatory change to enforce mass movement towards those goals.* The real win is governments implementing industrial carbon limits, not limiting your own flights to achieve nothing. Your father might see no point in being cruel to Muslims who are here and who he has no power to expel, but if I were a Muslim I certainly wouldn't count on his good will. I would imagine that he might choose to ban Muslim immigration or deport already-present Muslims given the power to do so, even though he functions the way he does when lacking power. There's no benefit to him from excluding Muslims personally or being mean; there might be a benefit from ultimately removing all Muslims or Islam from the world.**

Thus a lot of what you identify as hypocrisy is better seen as a rejection of the Guidance Counselor Office Poster advice about "Be The Change You Wish to See in the World." Instead, they might hold a belief closer to Big Yud's "Be Nice, Until You Can Coordinate Meanness." Perhaps "Be selfish in the circumstances you find yourself, but be willing to advocate for coordinated actions that might go against your selfish goals; don't be selfless unilaterally." This is a fairly common set of circumstances: a liberal billionaire might advocate higher taxes on himself politically, while also not overpaying the taxes he owes; Reagan believed strongly in nuclear disarmament while also continuing to invest in and maintain the USA's nuclear arsenal to preserve MAD and pressure the Soviets; or one might believe a gun-free society would be superior, but own a gun because you want to defend yourself against others with guns whom you have no power to disarm.

Another example: a lot of people who conspicuously complain about the modern dating/romance/marriage/sex scene still participate in it for their own selfish gain, but if we had a big Constitutional Convention of Sex to decide how we were going to do things going forward, they might choose a different system altogether. Saying that one can't date if one doesn't approve of the entire social system veers dangerously close to the meme about "Oh, you critique society while participating in society!" One must do what one must do to live in society, and then seek to implement change by obtaining and exercising power over the collective. Your system requires all dissidents to Benedict Option themselves (at a minimum!) or be called hypocrites or non-believers.

Friedman feels relevant here, to view it in a more systematic way:

“Only a crisis - actual or perceived - produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around. That, I believe, is our basic function: to develop alternatives to existing policies, to keep them alive and available until the politically impossible becomes the politically inevitable.”

So I might hold a genuine belief, but have no interest in marginalizing myself by advocating it or implementing it in a useless way, while having an ultimate interest in implementing the idea in an effective way.

That feels much less organized than I'd like, maybe I need to chew on this idea more.

*For what it is worth, I tend to believe that most climate change activists seem to operate based on banning things they didn't like anyway. Climate change is at core about restricting people, and obviously some things will be justifiable and some things will not be justifiable under a carbon framework. People who get woke to the climate issues tend to restrict things that they/their class didn't want to do anyway: drive pickup trucks, run industrial concerns, have American children. They tend to ignore or justify the climate impacts of things that they did want to do anyway: fly to foreign countries, import fancy food from abroad, living/allowing people to live in places that are more carbon intensive. Right wing malthusian overpopulation types similarly tend to be most conspicuously concerned about preventing the birth of too many of the kinds of people they didn't like to begin with.

**I feel like your AI thing can be mapped to that as well, but it didn't write out well so I omitted it. But there are reasons for a grey-tribe individual to be selfishly excited at each new AI advancement even if they are frightened of AGI apocalypse. Empowering tech people, or confirming beliefs so people will take them seriously, or just the joy of saying I Told You So. Idk, I'm not one of y'all.

It seems to me that you are framing your question as one about beliefs, when in fact it is about behavior: You are puzzled why people's actions are sometimes inconsistent with their beliefs. Yet, people's actions are the result not of a single belief, but of all of their beliefs, values, and interests. After all, plenty of people who think that murder, or theft, or cheating on taxes, is wrong nevertheless commit murders, steal, or cheat on their taxes. And, it is possible to believe both 1) climate change is a mortal threat; and 2) eating meat is necessary for good health (or, humans were meant to eat meat, and it would be an affront to deny that nature). That person might eat meat, not because his belief about climate change is insincere, but because he has to strike a balance between competing ideas.

Moreover, a person who sincerely holds general beliefs (Muslims are bad) can also sincerely hold more specific beliefs that superficially seem in conflict therewith (My friend is a nice guy, despite being Muslim).

Certainly people's behaviour is complicated by a host of competing beliefs, goals, and interests, and people are very good at rationalising away conflicts. However, the specific class of pseudo-beliefs I had in mind is those that people don't feel particularly obliged to reconcile with their actual beliefs or translate into behaviour. Sure, you have the person who genuinely believes that climate change is a real threat, and would love to be vegetarian but feels unable to do so for health reasons. But you also have people who seemingly sincerely assent to statements like "climate change is a real threat" but don't feel any real normative pressure to make that fit in with their other beliefs or translate it into behaviour. I think a lot of our social and political utterances are like this. They're not lies, and we take ourselves to genuinely believe them, but they constitutively function in a manner quite different from canonical beliefs.

I am not sure how you can determine what percentage of their behavior is a function of not feeling normative pressure, versus feeling that pressure but having it overridden by other factors.

And, when you say, "they constitutively function in a manner quite different from canonical beliefs," how are you defining "canonical beliefs," and how are you measuring them? There is a danger of circularity, if they are, eg, beliefs so strong that they override others.

PS: Maybe look at work re value rationality (see eg here, noting that "Some spheres or goals of life are considered so valuable that they would not normally be up for sale or compromise, however costly the pursuit of their realization might be" -- you would think, given some people's rhetoric, that fighting climate change might be one of them, but as you note, it often isn't).

Isn't this one of the implications of Haidt's "elephant and rider"?

Reading the head paragraph, I hoped for something more ambitious than le hypocrisy line. I think that really only affects a small fraction of our beliefs; most of them are stuff like "the store closes at 8". There might be a case against even those sorts of beliefs, where the replacement concept is some derivative of affordances rather than S-dispositions. For example, someone "believing" that "the store closes at 8" might not thereby have any expectations about when an acquaintance working at the store is free for the evening - the "belief" only tells them when the "go shopping" option is available.

Instead, I think a lot of what we standardly call beliefs might be better characterised as "context-sensitive dispositions to agree or disagree with assertions". Call these S-dispositions.

What makes you think those are a "natural kind", other than that it fits with the point you want to make here? This idea is defined in terms of results, and sticks fairly close to them; it seems unlikely to be mechanistically important to psychology. What cases can you think of where an S-disposition causes other important psychological states, especially ones which stick around beyond the immediate situation?

Potentially nitpicking, but about a third of your examples fall under this:

Is there something epistemically or morally problematic about someone who casually says things like "Americans are idiots" in specific social contexts yet in practice holds many Americans in high esteem?

There's a semantic question of whether this is even inconsistent. I think the topic was called "general generalisations" or something like that.

Political beliefs are used to construct egotistical fantasies of importance and self-righteousness. Market forces commoditize this via talk shows and podcasts. They're also used to win popularity among conformist hierarchies such as office spaces. The only true insight into a person's character is through their actions. This has always been known but narcissism and widespread delusion induce mass fantasy in the political sphere. Many other types of fantasy pervade society as well.

No one is ever fully consistent in their beliefs; revealed preferences vs. stated ones, etc. Even in the early 2000s it was a common talking point among Conservatives to accuse Hollywood of hypocrisy regarding fossil fuel usage.

Consider my sister, who is a staunch activist in the domain of climate change, yet recently bought a new gas guzzling car, has never given any serious thought to reducing her meat consumption, and takes 12+ international flights a year.

Regarding air travel wasting fuel, I have never found this argument convincing. The plane will consume roughly the same amount of fuel whether it is full or empty. The alternatives to plane travel are so much slower and worse that flying may be more economical anyway, and justifiable due to the lack of good alternatives. If you want to travel internationally, are you going to spend weeks on a boat? Or a week in a car or train if you want to travel across the US? You're stuck flying. I think the waste argument is much more valid when comparing SUVs to cars because they both perform the same function.

The plane will consume roughly the same amount of fuel whether it is full or empty.

Airlines stop running flights that are regularly empty.

If you want to travel internationally,

The environmental alternative is to not travel.

A non-trivial chunk of the contradiction here is that a disproportionate number of environmentalists are upper middle class westerners who refuse to reduce their consumption remotely in the direction of the poverty levels that preventing climate change would require (absent the widespread use of nuclear energy).

SUVs to cars because they both perform the same function

Once you become a family of 6 or more, most cars won't have enough seats for everyone.

Having enough seats and seat belts was not an impediment in my youth, when the open bed of a pickup could carry several, or the extra-long lap belt of a bench seat could easily stretch across at least two, sometimes three, children.

My preference would be a compact pickup like the Datsun 620, or a station wagon with a 3rd row, but those are not options offered by the market or regulators.

I'd agree that in common use SUVs and cars perform the same function. I'd also be fine with heavy taxes on SUVs owned by the childless, childfree, childlight, etc.: an excess-capacity tax.

Some questions that come to mind:

  1. Can someone simultaneously hold an S-disposition and a belief that are in contradiction? Or is that a category error?

  2. It seems like in principle anyone can hold any particular belief. E.g. you can imagine Pericles having a belief about whether Russia was right to invade Ukraine, once you explained to him what Russia and Ukraine were and asked him to take a certain set of facts as given. Same with someone in the present day having a belief about some critical issue in the Athens of Pericles' time. Does the same hold for S-dispositions? Or are they inherently embedded in a certain social context?

  3. How might one differentiate between an S-disposition and a belief? Both introspectively and externally.

  4. Do S-dispositions generate beliefs? Do beliefs generate S-dispositions?

I feel like Gramsci's conception of ideology also somehow relates to S-dispositions, as a kind of social terrain of thoughts overlaying individual beliefs (as opposed to a particular set of beliefs).

Good questions!

  1. Yes, absolutely. In fact I think people can hold full-blown beliefs that are in contradiction, although (unlike S-dispositions) this creates genuine cognitive dissonance.

  2. This is tricky because individuating belief contents is tricky. When an astrophysicist says "the sun is heavy" and a 10-year-old child says "the sun is heavy", do they hold the same belief? In general, I'm inclined to be sloppy here and say it's a matter of fineness of grain; there's a more coarse-grained level at which the physicist and the child hold the same belief, and a fine-grained level at which they hold different beliefs. That said, I'm inclined to think that individuating S-dispositions should if anything be easier than individuating beliefs, insofar as it's more closely linked to public behaviour and less linked to normatively-governed cognitive transitions (the kind of inferences you'd make, etc.). To be a bit more rigorous about it, I'd say two individuals A and B share an S-disposition P to the extent that (i) they are inclined to assert or deny P in the same social contexts, and (ii) do not integrate P with their broader cognitive states and behaviour in the manner characteristic of belief.

  3. Great question. A few simple rules of thumb. (i) As noted above, conflicting S-dispositions do not generate negative emotional affect in the same way that conflicting beliefs do (cognitive dissonance); (ii) S-dispositions are relatively behaviourally and inferentially inert, and do not play a significant role in people's lives even in cases where beliefs with the same content do (e.g., someone who pays lip-service to climate change narratives vs. a true believer); (iii) S-dispositions are almost exclusively generated and sustained by social contexts, whereas beliefs can frequently be arrived at relatively independently (there are big social influences on beliefs of course, but the point is that there are only social influences on S-dispositions); (iv) individuals feel no real obligation or interest in updating S-dispositions as compared to beliefs, etc. Applying these heuristics to oneself can help one distinguish the two.

  4. Again, a very good and interesting question, and one I'm still thinking about. I think the clearest causal arrow here runs from S-dispositions to beliefs: someone might adopt animal rights-related S-dispositions for social reasons, and subsequently go on to translate some of these into full-blown beliefs. In the opposite direction, one could imagine a person's belief system being "hollowed out", so they assent to and dissent from the same propositions but without any of the interest and commitment that they used to have; something like this can happen to religious people, for example, though we should distinguish those cases from instances where people genuinely 'lose their faith' and acquire full-blown atheist beliefs. More broadly, I expect there to be lots of interesting connections between the two.

X isn't about X, as Overcomingbias used to put it.

The Less Wrong sequence on Fake Beliefs goes into detail on this topic (the focus is more on religious belief but I think it's basically the same concept).

I think you need to clarify much more what you mean by "belief" before your thesis becomes well-formed, because most beliefs our brains have are of the form "there is a white metallic water bottle 12 inches to the right of my hand", "there is a chair under my butt", "my wife will come home in 40 minutes", "the cursor will move if I move the mouse", etc. These are all beliefs about the state of the world, and people most definitely have them: you wouldn't be able to function in the world without millions of these beliefs. But these sorts of boring, useful factual beliefs are not internally labelled "beliefs" in your mind; what I think you're more interested in are "beliefs as a signaling tool", rather than "beliefs as expectations about the state of the world". Human brains probably carefully separate the part that deals with "beliefs" needed to signal tribal membership from the part that needs to actually plan their days, like "sure, I told Bob that 2+2=5 to prove my group membership, but two $1 bananas still sum to $2 on my grocery receipt".

As I'm using the term "belief", I'm gesturing towards a class of representational mental states that are governed by a distinctive set of norms, e.g., serving as components of knowledge, things that can be more or less justified, things that we have a special sort of duty to update on the basis of evidence, that we have a duty to make coherent, etc. That may sound narrow and specific, but I think it's a fairly clearly identifiable, cross-culturally valid concept, running through traditions from Greek, Chinese, and Indian philosophy to a wide range of religious ones. I think the concept has been problematised a bit by modern psychology and cognitive science, with compelling evidence for things like unconscious beliefs, subdoxastic representations involved in things like early vision and language, etc. Moreover, a lot of modern cogsci (though not all) draws a fairly bright line between perceptual and cognitive states, with beliefs falling clearly on the latter side, so some of your examples would be classified as perceptual expectations or affordances rather than beliefs proper.

All that said, one thing that's (very helpfully) becoming clear from this discussion is that I shouldn't phrase the thesis in comparative terms as "most of what we consider beliefs are S-dispositions"; that's problematic for a lot of the reasons you and others have pointed out, and needlessly complicates things. My core point is rather that a significant subset of what we unreflectively classify as beliefs (e.g., casual opinions) are best understood as a different kind of mental entity altogether.

I agree. I think the majority of people will profess beliefs when asked, but these don't really exist in a meaningful way outside of the verbal expression. I came to this conclusion particularly from observing young women's attitudes towards astrology. An enormous number seem to say they believe in it, but is it just a joke? I don't think it's a joke, but it's not really serious either. It seems somewhere in between: mostly an act, because it is more fun to act like astrology is real, and since nobody is demanding they show costly commitments to it (making large monetary investments based on horoscopes, for example) there's no real pressure to sort out what they really believe. A lot of guys are the same way, even guys on the Motte when talking about satanic elites or whatever.

I think if you put a gun to their head and say you have the oracle truth in an envelope, you get very different answers from most people.

I'll happily concur with the basic premise; it's all too easy for me to look at the whiplash people have done and are doing on, well, a whole lot of topics, but most present and obvious would probably be the complete 180 that took place between the George Floyd race riots and 1/6.

Even at the time, you had people copy-and-pasting people's cheering on of one while shrieking in pretend-fear of the other, and it was painfully obvious that there were no actual principles about when, what, and how protest should be done involved, in either case. It should not be at all hard to show that the words of most of America's current set of taste-makers have less reasoning behind them than the latest from ChatGPT, just by looking for simple, recent contradictions.

I am a little curious about your S-disposition term of art. Like, if I want to fuck a vegan, so I spend a period of however long it takes of putting up a convincing front of sympathy-towards-veganism statements and minor displays of activism, but internally my mental state doesn't change, and I cheerfully drop the front once I've gotten what I wanted from her, do we need a word other than 'lie' for what I was doing?

I will say that I've personally reached a point of deep cynicism. I feel that the vast majority of people I encounter are at best moral children who have never considered the multiple and obvious contradictions in the beliefs they espouse (and have also been trained to carefully avoid any factual information or ideas that would lead to those contradictions being too widely exposed); that the expected case is that most people are moral cowards, wildly disinterested in morality, who espouse whatever a surface-level view of the world shows them will avoid punishment, and make up reasons why those beliefs are good after the fact; and, in my more grim moments, I take people at their contradictory word and feel that very many people literally are GPT3-ing their way through their interactions with their fellow humans.

Is this just a here-and-now study? I feel like you could get some really interesting data looking at communist or other totalitarian areas, and seeing what people said in public, what they did in private, and what they said about what they both said and did after the totalitarianism fell.

I'd say that people being inconsistent on whether protests are allowed don't believe their professed reasons for when it's okay to protest--not even when applied to themselves. They're just liars.

I think your analysis has too much mistake theory and not enough conflict theory.

I'll happily concur with the basic premise; it's all too easy for me to look at the whiplash people have done and are doing on, well, a whole lot of topics, but most present and obvious would probably be the complete 180 that took place between the George Floyd race riots and 1/6.

See also: The dysfunctional cheering on of the George Floyd race rioters as "peaceful protest" vs. the sheer outrage and vitriol towards the Canadian truckers (despite the peacefulness of the latter compared to the former). Many people who supported the former suddenly started denouncing the latter, and the inconsistencies in their moral evaluations are so readily apparent to me that I'm honestly unsure how it is possible for them to live with the cognitive dissonance.

The only real principle in operation here just seems to be this utterly tribal "Leftist protest is good regardless of how violent things become, right-wing protest is bad under any circumstances".

do we need a word other than 'lie' for what I was doing?

I'd distinguish pretty strongly between S-dispositions and lying insofar as the latter is (to at least some degree) an intentional act. We can talk about grey areas here, and lying is a surprisingly complicated state, but in general I think it's part of our concepts of lying and deceit that they require some extra cognitive work and self-awareness compared to telling the truth - e.g. you know that not-P, but you decide to assert that P for some duplicitous motive.

By contrast, S-dispositions as I'm understanding them require less work than regular strict beliefs - you espouse P without ever having seriously subjected P to reflection or scrutiny, but also without any real awareness of doing something epistemically irresponsible.

The vegan case I gave and which you reference might have been misleading in this regard, insofar as it's easy to imagine someone being genuinely deceptive in professing to be a vegan in order to get laid. That's not what I had in mind, though; I was thinking about a slightly naive person who finds themselves swept along with a certain kind of political stance due to interpersonal incentives, and even thinks they believe it at first, but has never actually put in the epistemic leg-work to integrate it with their world-view or figure out if they actually, deep-down endorse it.

I think it's part of our concepts of lying and deceit that they require some extra cognitive work and self-awareness compared to telling the truth

Not that much, though. It's, in a sense, 'lying' when someone tells a white lie - plenty of "oh honey, you look great tonight"s are plain lies (as opposed to more subtle distinctions), but that doesn't take much effort.

I don't think the separation here of 'S-dispositions' as a label that applies to distinct beliefs is useful, even though thinking about how supposed 'beliefs' don't really act like honestly held beliefs in a variety of social situations is useful, because of how fluidly said pseudobeliefs will be part of / relate to other beliefs / purposes / social contexts, and because many very different reasons to 'pseudobelieve' will be united under the same concept. That's more of a general gripe with philosophy/logic as applied to 'thinking', though.

Political beliefs in current_year are in areas where the belief itself is more about coordinating, showing off, and convincing others, as opposed to concrete action. If you support Republicans, that hashes out to a lot of people believing you're a Republican, maybe arguing about it, reading media, plausibly voting a few times. You'd expect beliefs in that area to be less "genuine" than those in areas peoples' work / actions directly impact, or that they benefit from - a CEO of a successful company will end up having a lot of 'genuine' beliefs about that company, a hunter-gatherer would have a lot of genuine beliefs about the seasonality of local fauna and root vegetables, a salesperson many genuine beliefs about "how to persuade people to buy X" (and maybe fewer about X itself).

So using the former as evidence the latter are 'rare' seems wrong.

Also, there isn't any correct background space / latent to beliefs, and beliefs don't really "exist" in any sense beyond how people act - so there's no real way to count them - imagine saying "yeah, your sister might not believe climate change is bad, but she does believe that eating black mold is bad, that eating rocks is bad, that eating paint is bad, and that eating paper is bad, etc, so the ratio of genuine : nongenuine beliefs is very high".

Doesn't that pretty much follow from the (afaik confirmed) theory that the vast majority of decisions are subconscious, and conscious choice is mostly just a post-facto illusion made up by the highest levels of the brain?

Behavior would be based on our subconscious feelings, and "beliefs" are just a nice story you tell yourself and others. When the causality runs that way, the story doesn't have to be particularly grounded in your actual behavior for you to still believe it.

Why stop at beliefs? Especially now that we have blobs of linear algebra within a hair's breadth of passing Turing tests without even having internal state beyond something like a digital version of a phonological loop, I think the entire category of abstractions we have for describing human reasoning is suspect: at best aspirational, and more likely largely self-flattering rationalisation. The "S-dispositions" you describe sound like exactly what I would describe ChatGPT as having, when it invokes principles that were flogged into it by Mechanical Turk schoolmarms or already overrepresented in its training set with higher-than-random probability, and when it is coaxed into saying something completely contradictory by having its internal monologue seeded with the right "social cues". You could imagine other features of "reasoning" - intuition? quantification? logic? object permanence? - to also be mere pattern completion on the token stream 95% of the time; and it remains to be seen if the remaining 5% can not just be delivered by another mechanism that is not yet part of LLMs but will appear similarly underwhelming once we successfully model it.

It's funny you mention ChatGPT, as this line of thinking on my part was partly inspired by thinking about whether (and under what circumstances) it might make sense to attribute beliefs to LLMs. I don't think they come close to instantiating the kind of self-regulating representational dynamics associated with ideal cases of belief in humans, but they clearly come some of the way there. In that sense, I'm fine with saying that - at an appropriate level of abstraction - ChatGPT has S-dispositions.

Beliefs aim at truth. When we are speaking, we are very rarely concerned with truth or aiming at truth. Consequently we rarely speak about our beliefs. John wants to impress his girlfriend, your sister wants to feel like she's part of a movement, and your father wants to express his aesthetic repulsion for Islam. I don't think any of this requires newfangled S-Dispositions. The causes of S-Dispositions seem more basic/important. John "had a strong disposition to agree to statements like 'Meat is murder'" because he desired to impress. The desire explains the action, the S-Disposition isn't needed.

Also, I know it's standard for academic philosophy, but I think you wrote 5x more than necessary to explain your point. That said, I find belief fascinating and haven't read anything in a while, so thanks for the interesting post.

The desire explains the action, the S-Disposition isn't needed.

I'm open to the idea that S-dispositions may be ultimately analyzable in terms of more basic mental states (desires, beliefs, etc.), but I'd say that our current vocabulary for the mind systematically confuses belief-driven assertions with assertions that are generated by social/contextual factors and are consequently subject to different norms. Having a distinctive bit of terminology for the cluster of causes behind the second kind of assertion is helpful in itself and may remove confusion, even if (as it may turn out) we find that this cluster of causes can be analysed in more basic terms.

Also, I know it's standard for academic philosophy, but I think you wrote 5x more than necessary to explain your point.

Heh, well, that's true and fair, but the methods of analytic philosophy are (or should be) to aim to be absolutely clear about your commitments, minimise ambiguity, and lay out all the steps of your reasoning, which can often lead to being a bit long-winded.

First and foremost, this seems absurdly difficult to measure rigorously. It is easy to determine whether someone professes a belief: you just ask them on a poll. It is highly nontrivial to determine whether someone "truly believes" something in the way you describe, in any sort of objective sense. You can make a bunch of inferences that you think ought to logically follow from the true belief and also ask them about those on a poll, but what "counts" is incredibly subjective, and someone with genuine true belief could disagree with some of your logical implications, or disagree with those particular statements because they also have other beliefs you didn't expect them to have. And someone without genuine true belief could agree with those statements for other reasons.

Similar issues come up if you try to track real-world behavior like "does this person buy a gas guzzling car?" Maybe they really believe in climate change but they're just selfish and care more about their own convenience. Maybe they have a consistent belief that only the 1% of people with the most demand for hauling heavy things around should have large trucks, and they genuinely believe they qualify as one of those people. Maybe that belief is partly selfishly motivated and partly genuine, and it's not a binary thing. Similarly, lots of people who don't believe in climate change still have low carbon impact simply by coincidence. Any attempt to measure hypocrisy is going to be incredibly subjective and could produce completely different answers depending on the methodology.

Second, I think a lot of the perceived sparseness is availability bias. You are thinking of positive examples where people behave hypocritically, and of controversial issues that people disagree on, but if you look at a broader and less interesting class of beliefs I expect you'd find 99%+ of beliefs are genuine. Everyone believes the sun will come up tomorrow, and acts accordingly. Everyone believes that wearing clothes in public is good behavior, and acts accordingly. Everyone believes that using doorknobs is the optimal way to open most doors, and acts accordingly. There are millions of minor facts that everyone genuinely believes, acts as if they believe, and takes for granted, not even thinking about them except when educating children. It's only controversial concepts - ones which some people believe and some do not - that draw your attention when you make these considerations. So if you're trying to make some sort of claim about the rarity of genuine beliefs, you need to be careful about what class of beliefs you are considering.

Additionally, controversial issues where there is mixed evidence are precisely the issues where a good Bayesian ought to have a nontrivial probabilistic belief. Maybe someone thinks there's a 60% chance that anthropogenic climate change is a big problem, and so they make some high-efficiency efforts that they think have a high value per cost, but not others, because the expected value is lower than someone with a 99% belief would perceive. Does this 60% belief count as "genuine"? And would your study be able to tell the difference between that and someone with a hypocritical professed 99% belief?
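To make the expected-value arithmetic concrete, here's a minimal Python sketch; the credences, costs, and benefits are made-up illustrative numbers, not real climate figures:

```python
# Toy numbers only - illustrative, not real climate costs/benefits.
def expected_value(credence, benefit_if_true, cost):
    """EV of an action for an agent with the given credence that the
    problem is real (the benefit only accrues if it actually is)."""
    return credence * benefit_if_true - cost

actions = {
    "cheap high-efficiency effort": {"benefit_if_true": 50, "cost": 10},
    "expensive lifestyle change": {"benefit_if_true": 600, "cost": 500},
}

for credence in (0.60, 0.99):
    for name, a in actions.items():
        ev = expected_value(credence, a["benefit_if_true"], a["cost"])
        print(f"credence {credence:.2f} | {name}: EV = {ev:+.1f}"
              f" -> {'do it' if ev > 0 else 'skip it'}")
```

On these toy numbers both agents take the cheap effort, but only the 99% believer takes the expensive one - so the sincere 60% believer can look "hypocritical" without being so.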

In theory something along the lines of your study, done extremely carefully, could be useful. In practice it is incredibly likely to be muddled with subjective biases to the point of unusability except as a cudgel by some people to bash their political opponents with and call them hypocrites with "scientific evidence", and nobody learns anything they didn't already know.

I expect you'd find 99%+ of beliefs are genuine

Counting issues above aside, I'm not sure. And it's a much more interesting question when approached practically - what do many people's held beliefs mean, and should they hold the supposedly non-genuine ones? - as opposed to a philosophical "how real are they" approach.

Are beliefs about, say, the attractiveness of clothing genuine? Not that it's entirely fake, but the history of fashion and its trends shows that it is, at best, highly contingent - does the simulacral, groundless nature of it mean anything? What about men or women who find women with heavy, garish makeup attractive? (One could respond "they're just making trivial claims about their experience", but... say I'm enlightened, and can redirect the rivers of perception at will - I look at an apple, "perceive it" instead as a pear, and then honestly claim "I see that as a pear". Something's not quite right there! Wouldn't something like that apply to socially-modified, rather than intentionally-modified, beliefs?)

What about someone who says (and really does believe it, as opposed to it being a straight lie) "I think my wife is the most beautiful person in the world"?

Beliefs of the form "my race is superior", or "my country is superior to other ones"? Even if some races were superior in some respects - e.g. whites or Jews being smarter - most folk beliefs posit supremacy in areas where it doesn't exist, whether that be the historical "British good, German bad", "whites are much more honest and freedom-loving than blacks", or funny-to-us Balkan or African rivalries. There are plenty of overtly nationalistic or racist people alive today.

"I want to lose weight, but just can't manage to, I try to eat less but I still don't lose any!" or "I want to lose weight but don't have the willpower to"?

"<my favorite player> is the BEST <sport> PLAYER!" What does that mean?

It's easy to put politics into the 'just one of many things' box, but looking at a broader scope of human activity, a lot of beliefs don't seem to be "fully updating" or "broadly applied". IMO, that's born of their meaninglessness, and said faux-beliefs should be abandoned by those who hold them.

You make a good point that there is a wide range of possible fake, or at least questionable, beliefs across a broad range of areas. But I don't think that invalidates my point that there is an absurdly large number of genuine beliefs about banal things. Any number of anecdotes does little to provide statistical weight when, for every suspicious "My wife is the most beautiful person in the world" you cherry-pick, there are literally hundreds of trivial beliefs like "My wrinkly grandpa is not the most beautiful person in the world", "My neighbor's dog is not the most beautiful creature in the world", "My wife's red scarf is more beautiful than her brown purse", "My wife's red scarf is more beautiful than mud"... that never get questioned and are rarely even mentioned, because they're just so obvious to the person holding them and relatively uninteresting.

I'm not arguing that nongenuine beliefs don't exist, or are super rare in some global sense. Just that they are vastly outnumbered mathematically if you consider the full set of ordinary beliefs that people have continuously throughout the day that let them function as human beings.

Agreed, and I made the same point lol. It gets worse - what about locally correct beliefs that are held for the same reasons as pseudobeliefs? One might avoid poisonous plants because they're "cursed", and also burn incense to avoid curses. Or say that, in the interest of 'health', or just because it's what everyone in your family does, you brush your teeth each night and also use antimicrobial mouthwash each night, believing both to be equally effective means of teeth cleaning - yet you don't actively pursue 'cleaning stuff off of teeth' while brushing, just 'go through the motions' without cleaning effectively, and also eat lots of donuts.

Lots of great points here; let me respond to a few.

First and foremost, this seems absurdly difficult to measure rigorously.

Agreed, although this is a problem with most psychological and social states. There is a robust conceptual distinction between someone joking vs being sincere, but actually teasing that apart rigorously is going to be hard (and you certainly can't always rely on people's testimony). Instead, when it's really essential to make a call in these cases, we rely on a variety of heuristics. The point of my screed is not that I've found a great new psychometric technique, but rather that I've identified an important conceptual distinction (that psychometric or legal heuristics could potentially be built around).

Maybe they really believe in climate change but they're just selfish and care more about their own convenience

Right, although that would generate predictions of its own (e.g., changing their behaviour immediately when the convenience factors change). Hard to measure for sure, but not impossible (I think we do this all the time for lots of similar states).

Second, I think a lot of the perceived sparseness is availability bias... if you look at a broader and less interesting class of beliefs I expect you'd find 99%+ of beliefs are genuine

That's possibly true, but not hugely interesting except for framing purposes since "counting beliefs" is a messy endeavour in the first place. Perhaps my main thesis could be reframed as "a lot of things we are inclined to think of as being beliefs aren't actually best understood as beliefs but as a distinctive type of state." Moreover, any serious attempt to quantify the prevalence of S-dispositions vs beliefs is going to have to grapple with some messy distinctions between e.g. explicit beliefs that are immediately retrievable (my date of birth is XX/XX/XXXX) and implicit beliefs that are rapidly but non-immediately retrievable from other beliefs (Donald Trump is not a prime number).

Does this 60% belief count as "genuine?" And would your study be able to tell the difference between that and someone with a hypocritical professed 99% belief?

Again, this is messy in practice, but as long as we stick to the conceptual level it's fairly clear-cut, insofar as we'd expect different behaviour from a rational sincere Bayesian 60% believer vs a hypocritical 99% believer (consider, e.g., betting behaviour).
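As a toy illustration of the betting point (all stakes hypothetical): a rational agent accepts a bet exactly when credence times payout exceeds the price, so a sincere 60% believer and a professed 99% believer come apart on which tickets they'll buy:

```python
# Toy model: which bets on "P is true" an agent should accept,
# given their actual operative credence. Stakes are hypothetical.
def accepts_bet(credence: float, price: float, payout_if_true: float) -> bool:
    """Accept iff expected value is positive: credence * payout > price."""
    return credence * payout_if_true > price

# A ticket paying $100 if P is true, offered at varying prices:
for price in (55, 65, 90, 95):
    buys_60 = accepts_bet(0.60, price, 100)
    buys_99 = accepts_bet(0.99, price, 100)
    print(f"${price} for a $100-if-P ticket: "
          f"60% believer buys: {buys_60}, 99% believer buys: {buys_99}")
```

A professed 99% believer who consistently refuses the $65 ticket is behaving like a 60% believer, which is exactly the divergence the conceptual distinction predicts.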

In theory something along the lines of your study, done extremely carefully, could be useful.

To be clear, this is theoretical psychology/philosophy of mind rather than policy recommendations, and any actual implemented policies would be several research projects downstream.

You can't demand that believing in X means believing the logical consequences of X. Never mind culture war issues; it doesn't work for even simple things. Is that number in the corner of that Sudoku puzzle supposed to be a 1? The answer logically follows from your belief that the Sudoku puzzle was created using math and from the existing numbers in the puzzle. But you don't have a belief about it until you actually start doing some calculations. By your reasoning above, you didn't really believe that the Sudoku puzzle contains the numbers it does and that it uses math.
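To make the "forced number" case concrete, a minimal sketch (the row is made up):

```python
# Sketch: the last empty cell in a Sudoku row is logically forced by the
# rules plus the existing numbers, yet you form no belief about it until
# you actually run the derivation. Hypothetical row with one blank.
row = [5, 3, 4, 6, 7, 8, 9, None, 2]

missing = set(range(1, 10)) - {x for x in row if x is not None}
assert len(missing) == 1  # exactly one value is consistent with the rules
print(f"The blank cell must be {missing.pop()}")  # -> 1
```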

The argument here is that figuring out 'logical consequences' is as hard as figuring out any particular belief in the first place, imo - ZFC implies most proven mathematical theorems, yet one can believe ZFC without proving them all oneself, or believing them all oneself, which would be impossible. But the point is about people whose beliefs lack many consequences those beliefs obviously should have - someone who earnestly complains "I'm eating under my TDEE but I'm still not losing weight", and keeps going on and off diets, yet sneaks in Twinkies when nobody is looking.

Right - the view is not that one fails to believe that P if one fails to believe all logical consequences of P, but rather that one is normatively obliged to believe those consequences insofar as one is or can become aware of them. If Dave hasn't yet realised that the number in the corner of the Sudoku matrix is a 1, that's not a mark against his relevant states being beliefs. However, if Dave realises that the number in the corner of the matrix should be a 1 according to the rules of Sudoku but still asserts that it's not, that's a mark that the assertion is underpinned by something other than a belief (or by a different sort of belief in special cases - e.g., if Dave is filling in the puzzle with aesthetic considerations in mind and doesn't care about the rules). The point here is that there are many, many cases where people are actually aware of logical or probabilistic consequences of things that they profess, yet fail to profess or act in accordance with those consequences, suggesting that the things they profess in the first instance aren't actually beliefs in the strict sense.

One, I think you're assuming people are more consciously aware of the logical implications and probabilistic consequences of, well, anything, than they really are.

To be clear, I'm happy with the idea that everyone routinely fails to anticipate or consider even the immediate implications of most of the things they assert. All that matters for pinning down the belief/S-disposition distinction is that, for the former but not the latter, in cases where people are aware of the implications, they should (and as a rule do) adopt and endorse them.

And now, their trust in math and applying it correctly (from that point on) leads to a firm belief that the number in the box should be a 3.

A nice case! That said, what you're giving me here is an instance where someone - in virtue of the evidence at their disposal - could quite reasonably and rationally fail to draw the logical consequence that someone with better evidence would draw. That's distinct from the kinds of failures that I take to be indicative of s-disposition instances, where even when people can follow through and endorse the implications, they're not disposed to do so.

Please clarify: do you assert that beliefs and S-dispositions are truly two qualitatively different categories, two positions on a spectrum, two clusters on a spectrum, or something else altogether?

A psychological natural kinds framework can certainly accommodate these states being (i) qualitatively different categories, and (ii) two clusters on a spectrum (positions on a spectrum is maybe messier). My own view on this would be that mental attitudes in general (beliefs, desires, hopes, regrets, fears, etc.) can be individuated on a multi-dimensional spectrum as a variety of ways that the mind handles content. While in principle there are all sorts of "in-between states" (cf. Andy Egan on delusions as in-between states), the vast majority of mental contents get handled in a few stereotyped ways, where these ways are themselves underpinned by substantially different neural mechanisms (e.g., for imagining scenarios vs believing scenarios).