doglatine

17 followers   follows 2 users   joined 2022 September 05 16:08:37 UTC

User ID: 619


I don’t think it’s an insuperable problem. A difficult one to be sure, but academic incentive structures are a lot more mutable than a bunch of other social problems if you have the political will. There’s also the fact that the current blind review journal-based publishing system is on borrowed time thanks to advances in LLMs, so we’ll need to do a fair amount of innovating/rebuilding in the next decade anyway.

Top or bottom?

I’m glad anyone got it! Very much an imperfect analogy but it felt right somehow. /u/zeke5123 has the core of it — that Vivek will end up using Trump as a figurehead to advance his own ends and ambition. Maybe I’m overestimating Vivek and/or underestimating Trump, but for all his animal cunning, I still see some confused generous boomer in Donald, whereas Vivek is all 2nd gen migrant ambition and ruthlessness. There’s also the fact that Puzzle is vastly more virtuous than either of them, but as I say, it was mostly a vibes-based analogy.

There is no stealth in space.

Just FYI, and also this. Obviously the programmes are super classified so we don't know how stealthy the satellites are, but hiding stuff in space isn't impossible.

Good questions!

  1. Yes, absolutely. In fact I think people can hold full-blown beliefs that are in contradiction, although (unlike S-dispositions) this creates genuine cognitive dissonance.

  2. This is tricky because individuating belief contents is tricky. When an astrophysicist says "the sun is heavy" and a 10-year-old child says "the sun is heavy", do they hold the same belief? In general, I'm inclined to be sloppy here and say it's a matter of fineness of grain; there's a more coarse-grained level at which the physicist and the child hold the same belief, and a fine-grained level at which they hold different beliefs. That said, I'm inclined to think that individuating S-dispositions should if anything be easier than individuating beliefs, insofar as it's more closely linked to public behaviour and less linked to normatively-governed cognitive transitions (the kind of inferences you'd make, etc.). To be a bit more rigorous about it, I'd say two individuals A and B share an S-disposition P to the extent that (i) they are inclined to assert or deny P in the same social contexts, and (ii) do not integrate P with their broader cognitive states and behaviour in the manner characteristic of belief.

  3. Great question. A few simple rules of thumb. (i) As noted above, conflicting S-dispositions do not generate negative emotional affect in the way that conflicting beliefs do (cognitive dissonance); (ii) S-dispositions are relatively behaviourally and inferentially inert, and do not play a significant role in people's lives even in cases where beliefs with the same content do (e.g., someone who pays lip-service to climate change narratives vs a true believer); (iii) S-dispositions are almost exclusively generated and sustained by social contexts, whereas beliefs can frequently be arrived at relatively independently (there are big social influences on beliefs, of course, but the point is that there are only social influences on S-dispositions); (iv) individuals feel no real obligation or interest when it comes to updating S-dispositions, as compared to beliefs, etc. Applying these heuristics to oneself can help one distinguish the two.

  4. Again, a very good and interesting question, and one I'm still thinking about. I think the clearest causal arrow here runs from S-dispositions to beliefs: someone might adopt animal rights-related S-dispositions for social reasons, and subsequently go on to translate some of these into full-blown beliefs. In the opposite direction, one could imagine a person's belief system being "hollowed out", so they assent and dissent from the same propositions but without any of the interest and commitment that they used to have; something like this can happen to religious people, for example, though we should distinguish those cases from instances where people genuinely 'lose their faith' and acquire full-blown atheist beliefs. More broadly, I expect there to be lots of interesting connections between the two.

Lots of great points here; let me respond to a few.

First and foremost, this seems absurdly difficult to measure rigorously.

Agreed, although this is a problem with most psychological and social states. There is a robust conceptual distinction between someone joking vs being sincere, but actually teasing that apart rigorously is going to be hard (and you certainly can't always rely on people's testimony). Instead, when it's really essential to make a call in these cases, we rely on a variety of heuristics. The point of my screed is not that I've found a great new psychometric technique, but rather an important conceptual distinction (that psychometric or legal heuristics could potentially be built around).

Maybe they really believe in climate change but they're just selfish and care more about their own convenience

Right, although that would generate predictions of its own (e.g., changing their behaviour immediately when the convenience factor changes). Hard to measure for sure, but not impossible (I think we do this all the time for lots of similar states).

Second, I think a lot of the perceived sparseness is availability bias... if you look at a broader and less interesting class of beliefs I expect you'd find 99%+ of beliefs are genuine

That's possibly true, but not hugely interesting except for framing purposes since "counting beliefs" is a messy endeavour in the first place. Perhaps my main thesis could be reframed as "a lot of things we are inclined to think of as being beliefs aren't actually best understood as beliefs but as a distinctive type of state." Moreover, any serious attempt to quantify the prevalence of S-dispositions vs beliefs is going to have to grapple with some messy distinctions between e.g. explicit beliefs that are immediately retrievable (my date of birth is XX/XX/XXXX) and implicit beliefs that are rapidly but non-immediately retrievable from other beliefs (Donald Trump is not a prime number).

Does this 60% belief count as "genuine?" And would your study be able to tell the difference between that and someone with a hypocritical professed 99% belief?

Again, this is messy in practice, but as long as we stick to the conceptual level it's fairly clear-cut, insofar as we'd expect different behaviour from a rational sincere Bayesian 60% believer vs a hypocritical 99% believer (consider, e.g., betting behaviour).
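To make the betting point concrete, here's a minimal sketch; the `accepts_bet` helper and the specific odds are my own illustrative assumptions, not a proposed psychometric instrument:

```python
# A sincere Bayesian with credence p in P should accept a bet on P that
# pays `win` if P and loses `stake` if not-P exactly when the expected
# value is positive: p * win - (1 - p) * stake > 0.
def accepts_bet(credence, stake, win):
    """True iff the bet has positive expected value at this credence."""
    return credence * win - (1 - credence) * stake > 0

# A sincere 60% believer takes an even-odds bet:
print(accepts_bet(0.60, stake=10, win=10))  # True
# A sincere 99% believer would also take a much steeper bet:
print(accepts_bet(0.99, stake=50, win=1))   # True
# Someone professing 99% but privately at ~5% refuses that same bet:
print(accepts_bet(0.05, stake=50, win=1))   # False
```

The divergence on the steep bet is the crux: identical professions, different behaviour.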

In theory something along the lines of your study, done extremely carefully, could be useful.

To be clear, this is theoretical psychology/philosophy of mind rather than policy recommendations, and any actual implemented policies would be several research projects downstream.

One simple way for Twitter to monetise would be to charge for Blue Checkmarks. Maybe offer some power-tools in return (e.g., analytics). Maybe you could even charge different amounts for different tiers of Bluecheck.

Absolutely - the deterrent effect of a missile shield isn't to protect against a general nuclear war in which Russia, China, or the US decides to hit the big red button. Given the constraints of MAD, I'd like to think that no state would rationally launch a first strike at scale. The point of the shield is to prevent countries engaging in low-level nuclear bullying, or attempts to use nuclear weapons to gain a limited battlefield advantage. Existing MAD doctrine doesn't really cover these kinds of contingency: the US isn't going to nuke Moscow just because Russia uses a battlefield nuke against a Ukrainian airbase.

Eh, feels more like milquetoast centre-leftism to me. A giveaway to the middle classes. At least when I use the term "progressivism", I mean to refer specifically to the complex of identity politics movements.

FWIW I like your answer a lot and I don’t think preventing violence against Israel would be unattainable for a Gazan leader with a strong enough power base. I’m thinking here of Kadyrov in Chechnya. You’d want to start by finding a smart powerful and mercenary figure within Hamas. Give them enough money to build up their power base, bribe minor players, have major rivals killed. Give them weapons and allow them to build a Praetorian Guard of elite Hamas fighters who live like kings and get all the chicks. Develop very strict internal messaging norms around Israel and violence — general calls for a unified Palestine one day are fine, but no direct exhortations to violence. Make it so that anyone who fucks with you ends up dead, and anyone who works with you gets money and women.

This shouldn’t be politically impossible. Everyone is responsive to multiple social incentives and these in turn can be influenced with money. It will just take a lot of time, money, and finding the right people.

Someone whose opinions and actions are purely formed in response to their informational environment; who toes the line about anything from COVID origins to which movie to watch. They are thus merely reactive to the world around them, like an NPC from a videogame.

100% agree. That's what really impressed me when I heard the story. It made it go from the kind of system that could lead to feuding to something like justice-by-social-consensus.

As I'm using the term "belief", I'm gesturing towards a class of representational mental states that are governed by a distinctive set of norms, e.g., serving as components of knowledge, things that can be more or less justified, things that we have a special sort of duty to update on the basis of evidence, that we have a duty to make coherent, etc. That may sound narrow and specific, but I think it's a fairly clearly identifiable, cross-culturally valid concept running through a wide range of traditions, from Greek, Chinese, and Indian philosophy to many of the world's religions. I think the concept has been problematised a bit by modern psychology and cognitive science, with compelling evidence for things like unconscious beliefs, subdoxastic representations involved in things like early vision and language, etc. Moreover, a lot of modern cogsci (though not all) draws a fairly bright line between perceptual and cognitive states, with beliefs falling clearly on the latter side, so some of your examples would be classified as perceptual expectations or affordances rather than beliefs proper.

All that said, one thing that's (very helpfully) becoming clear from this discussion is that I shouldn't phrase the thesis in comparative terms as "most of what we consider beliefs are S-dispositions"; that's problematic for a lot of the reasons you and others have pointed out, and needlessly complicates things. My core point is rather that a significant subset of what we unreflectively classify as beliefs (e.g., casual opinions) is best understood as a different kind of mental entity altogether.

Right - the view is not that one fails to believe that P if one fails to believe all logical consequences of P, but rather that one is normatively obliged to believe those consequences insofar as one is or can become aware of them. If Dave hasn't yet realised that the number in the corner of the Sudoku matrix is a 1, that's no mark against his relevant states being beliefs. However, if Dave realises that the number in the corner of the matrix should be a 1 according to the rules of Sudoku but still asserts that it's not, that's a mark against the assertion being underpinned by a belief (or suggests it's underpinned by a different sort of belief in special cases - e.g., if Dave is filling in the puzzle with aesthetic considerations in mind and doesn't care about the rules). The point here is that there are many, many cases where people are actually aware of logical or probabilistic consequences of things that they profess, yet fail to profess or act in accordance with those consequences, suggesting that the things they profess in the first instance aren't actually beliefs in the strict sense.

Touché! I don’t have useful evidence at hand — it was a grumpy sideswipe, on anecdotal and observational grounds. If I were to make a more extended argument, I’d start by operationalising the specific communication style I have in mind, probably in terms of Trait Agreeableness (Compassion), which is robustly higher in women and higher in progressives, and then look for (or run) a sentiment analysis on left wing vs right wing social media spaces.

Of course, the answers I provide here are calculations, and anyone can do that; the challenge I’m posing is for readers to come up with mental estimates which they can check against the calculations to see if they got within an OoM.
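The "within an OoM" check itself is mechanical; a minimal sketch (the `within_oom` helper name and the Earth–Moon example are mine, not from the original challenge):

```python
import math

def within_oom(estimate, actual):
    """True if the estimate is within a factor of 10 of the actual value."""
    return abs(math.log10(estimate / actual)) <= 1.0

# Guessing 300,000 km for the Earth-Moon distance (actual ~384,400 km)
# lands comfortably within an order of magnitude:
print(within_oom(300_000, 384_400))  # True
# Guessing 3,000 km would not:
print(within_oom(3_000, 384_400))    # False
```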

Out of interest, when did the term «egregores» enter the Motte’s common lexicon? Who popularised it?

This is all fair, and I'm aware of the complex situation underlying the conflict, including the first war in the 90s; I was gliding over some of the nuances. Interestingly, back in the early 1920s, Artsakh was going to be awarded to the Armenian SSR based on its predominant ethnic makeup, but Stalin personally intervened to prevent it.

And you're right about Turkey. Armenia has been very unlucky with its neighbours.

Why do you have so much hatred for the Russian state...

This comes rather close to Bulverism, especially given your final question; it reminds me a lot of lines like "Why do you care so much about other people's genitals?" that are frequently used to disarm dissenting views in debates around trans issues, implying that someone has scurrilous or questionable motives for their investment in an issue. I will say, though, that I identify strongly as a European, and Russian soldiers squatted on half the old capitals of Europe for half a century, oppressing, impoverishing, and killing. After throwing off the Soviet yoke and joining the Western bloc, these nations became richer, stronger, and more politically inclusive. Russia, by contrast, has made little to no investment in itself since the fall of the Soviet Union; its economic growth has been almost entirely led by the petrochemical sector, and it has let its excellent scientific and technological gains rot while its physicists went off to work on Wall Street. I would say, moreover, that it is morally worse to pretend to hold elections and fake the results than to deny them altogether; assuming the net result is the same, the former simply adds deceit to coercion.

In any case, that's a sample of my reasons for caring about this conflict. As for Yemen, I know and care very little about the country aside from the fact that it has been fighting civil wars since before I was born, it is extremely poor, and has a crazy high TFR (also that khat use is endemic among men). Whether or not Saudi Arabia wages its war (which in turn involves a complex mix of sectarian and political motives), Yemen is likely to remain an impoverished and dysfunctional place, much like every other Muslim country in the Middle East that doesn't have oil.

But perhaps all of this is indulging your question a bit too much. Rather than turn this into a therapy session, it is clearest and simplest for me to say that as a citizen of the West who identifies with the aims and values of the liberal international order, I see it very clearly as being in our interests to make this war as painful as possible for Russia: we rebut the clearest threat to the LIO this century, we disincentivise China from attacking Taiwan, we weaken a long-term strategic adversary and non-status quo power, we weaken Russia's ability to control its authoritarian and extractive vassal states, we humiliate Russian military might and weaken their ability to compete with the West on arms contracts, we reinvigorate the Western alliance and increase NATO's total budget, etc., etc. By contrast, we should stay as far removed from the war in Yemen as we can without causing permanent damage to our ties to Saudi Arabia, on whom we'll be moderately dependent for another decade or so. After that, I'd be happy to let that particular alliance wither on the vine.

Bing Chat largely doesn't have this problem; the citations it provides are genuine, if somewhat shallow. Likewise, DeepMind's Sparrow is supposedly extremely good at sourcing everything it says. While the jury is still out on the matter to some extent, I am firmly of the opinion that hallucination can be fixed by appropriate use of RLHF/RLAIF and other fine-tuning mechanisms. The core of ChatGPT's problem is that it's a general purpose dialogue agent, optimised nearly as much for storytelling as for truth and accuracy. Once we move to more special-purpose language models appropriately optimised on accuracy in a given field, hallucination will be much less of a big deal.

In some ways Brilliant Pebbles is even more exciting. In principle, space-based BMD is actually viable even for mass launches of ICBMs. In fact, because (i) MIRVs are only deployed on re-entry, (ii) objects moving in a vacuum are relatively predictable, and (iii) any kind of collision in space is likely to be terminal due to the insane velocities involved, the economics and physics might even favour the defender: 10,000 or so baseball-sized micro-missiles could take out huge numbers of ICBMs reliably (again, in principle - so much of this stuff is untested).
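For a sense of the energies involved, a back-of-the-envelope calculation; the 150 g mass and 10 km/s closing velocity are illustrative assumptions on my part, not programme figures:

```python
# Kinetic energy of a baseball-sized interceptor in a head-on collision.
mass_kg = 0.15                   # assumed ~150 g "pebble"
closing_velocity_m_s = 10_000.0  # assumed ~10 km/s combined closing speed

kinetic_energy_j = 0.5 * mass_kg * closing_velocity_m_s ** 2
tnt_equivalent_kg = kinetic_energy_j / 4.184e6  # 1 kg TNT ~ 4.184 MJ

print(f"{kinetic_energy_j / 1e6:.1f} MJ, ~{tnt_equivalent_kg:.1f} kg of TNT")
# → 7.5 MJ, ~1.8 kg of TNT
```

Even under these rough assumptions, a sub-kilogram pebble delivers grenade-to-artillery-shell levels of energy on impact, which is why a hit is plausibly terminal.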

It definitely doesn't need to be 6 months, especially if you plan in advance and do your homework. My sister-in-law (Filipina) met her fiancé while he was on a 1-month surfing holiday in Siargao, and they connected and bonded and he came out to visit her a couple more times in the next 12 months, and now she's living in the Netherlands with him. She's also (I hasten to add) a very impressive woman in her own right, with graduate degrees from US and European universities, so their case isn't typical, but if anything that supports my case.

do we need a word other than 'lie' for what I was doing?

I'd distinguish pretty strongly between S-dispositions and lying insofar as the latter is (to at least some degree) an intentional act. We can talk about grey areas here, and lying is a surprisingly complicated phenomenon, but in general I think it's part of our concepts of lying and deceit that they require some extra cognitive work and self-awareness compared to telling the truth - e.g. you know that not-P, but you decide to assert that P for some duplicitous motive.

By contrast, S-dispositions as I'm understanding them require less work than regular strict beliefs - you espouse P without ever having seriously subjected P to reflection or scrutiny, but also without any real awareness of doing something epistemically irresponsible.

The vegan case I gave and which you reference might have been misleading in this regard, insofar as it's easy to imagine someone being genuinely deceptive in professing to be a vegan in order to get laid. That's not what I had in mind, though; I was thinking about a slightly naive person who finds themselves swept along with a certain kind of political stance due to interpersonal incentives, and even thinks they believe it at first, but has never actually put in the epistemic leg-work to integrate it with their world-view or figure out if they actually, deep-down endorse it.

The desire explains the action, the S-Disposition isn't needed.

I'm open to the idea that S-dispositions may be ultimately analysable in terms of more basic mental states (desires, beliefs, etc.), but I'd say that our current vocabulary for the mind systematically confuses belief-driven assertions with assertions that are generated by social/contextual factors and are consequently subject to different norms. Having a distinctive bit of terminology for the cluster of causes of the second kind of assertion is helpful in itself and may remove confusion, even if (as it may turn out) we find that this cluster of causes can be analysed in more basic terms.

Also, I know it's standard for academic philosophy, but I think you wrote 5x more than necessary to explain your point.

Heh, well, that's true and fair, but the methods of analytic philosophy are (or should be) to aim to be absolutely clear about your commitments, minimise ambiguity, and lay out all the steps of your reasoning, which can often lead to being a bit long-winded.

It can’t be inferred from the calculation I provided, of course, but apparently the average distance between galaxies is just 1 million light years, making the distance between the Milky Way and Andromeda greater than average! (although we also have the Magellanic Clouds for company, and they are MUCH closer to us)