Harlequin5942

2 followers · follows 0 users · joined 2022 September 09 05:53:53 UTC
Verified Email
User ID: 1062

No bio...

The domain of priors is propositions, that is, assignments of truth values to possible world-states, not strings of words

From a mathematical point of view, you can have a probability function defined over all sorts of domains. IIRC, Rudolf Carnap initially defined probability functions over sentences (in the sense of strings of symbols in an artificial language) while John Maynard Keynes and Harold Jeffreys did so over propositions (meanings of sentences) and later Carnap over models (in the formal logic sense). Then there's frequentism and other event-based definitions...

However, I agree with your comment, since we are thinking from the point of view of probability as an epistemologically meaningful magnitude, e.g. a measure of degrees of belief or evidential support. "Bloxors are greeblic" is not part of any language I speak. In general, I shall have at least some background evidence about any proposition in a language I do speak, and thus never have pure uncertainty.

That was one of the objections listed in the post; Scott's response was that you should only be neutral about elementary propositions, not about compound ones ("bloxors are greeblic AND bloxors are grue").

How do I know that "bloxor-1 is greeblic" is elementary, if I am totally uncertain about this proposition, and I don't even understand the terms? Additionally, it's arbitrary to say that one should be neutral about the elementary propositions.

I personally think that this entire kind of objection can be dismissed by pointing out that Bayesian math works correctly and without contradictions.

What do you mean "correctly"?

when looking at actual priors there's not much disagreement about how to choose them either, in practice

Depends. If you interpret the probabilities as subjective degrees of belief and interpret degrees of belief in terms of idealised betting dispositions, then it's not obvious that people can introspect their own odds. Experimental work from about the Allais paradox onwards doesn't suggest that Bayesianism is a good fit with how humans actually reason under uncertainty, and without some evidence of reliability of personal introspection of priors, "My prior is X" is potentially just hot air.

Nobody actually has arguments against assigning a symmetric prior to a coin bias

How many of the arguments in probability theory have you read to come to this judgement? Because I can think of large parts of the literature dedicated to exactly this point.

Fair point, I was assuming that Scott would think you should also assume independence unless you have evidence otherwise, but I should have stated that assumption.

Scott's claim is about statements, so there's still the problem I mentioned: 50% with respect to the hypothesis "Almost all bloxors are greeblic" implies very non-neutral beliefs about other statements. Similarly, if it's all bloxors that are being described, then that leaves just 50% of the probability mass to allocate among all the other possible statistical distributions, so e.g. "50% of bloxors are greeblic" and "0% of bloxors are greeblic" can't each also have 50% probability.

There's also the problem that 50-50 is not actually a neutral probability, if you're a coherent Bayesian and you don't have an ultra-simple sample space. For example, if I think that the probability of each of 100 bloxors being greeblic is 50%, then I am committed to thinking that the probability of at least 70 of those 100 bloxors being greeblic is about 0.004%. So my "neutral" prior commits me to extremely strong confidence that fewer than 70 of those 100 bloxors are greeblic!

If instead I set my prior for each bloxor being greeblic to 69.5%, then I am approximately neutral (about 50-50) with respect to at least 70 of 100 bloxors being greeblic. But now I'm obviously far from neutral with respect to any individual bloxor being greeblic.
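A quick sketch of the arithmetic behind these two paragraphs (the 100-bloxor sample size and the independence assumption come from the surrounding discussion; the helper names and exact figures are my own):

```python
from math import comb

def binom_pmf(n: int, k: int, p: float) -> float:
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_tail(n: int, k: int, p: float) -> float:
    """Probability of at least k successes in n independent trials."""
    return sum(binom_pmf(n, j, p) for j in range(k, n + 1))

# A "neutral" 50% prior per bloxor is far from neutral about counts:
p_at_least_70 = binom_tail(100, 70, 0.5)    # roughly 0.004%

# A 69.5% prior per bloxor makes "at least 70 of 100" roughly 50-50,
# at the cost of being far from neutral about each individual bloxor:
p_at_least_70_biased = binom_tail(100, 70, 0.695)
```

The trade-off is exactly the one described above: a prior that is neutral over the partition "each individual bloxor" is extreme over the partition "counts of greeblic bloxors", and vice versa.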

This is one of the limitations of Bayesianism as a formalism: it can model neutral belief with respect to any individual partition of the sample space, but not with respect to all partitions at once. So Scott is just wrong, and frankly hasn't understood the mathematics, when he states "If you have total uncertainty about a statement ("are bloxors greeblic?"), you should assign it a probability of 50%": this norm implies incoherence, but coherence is a fundamental Bayesian norm.

Put briefly, what Scott is saying requires that you reject Bayesian epistemology/decision theory. I haven't read the whole post yet, but I would be surprised if he realised that.

Even airborne HIV would still have great difficulty getting to hermits

Hermits are not a promising way to avoid human extinction.

uncontacted tribes

These are definitely vulnerable to extinction events, including ones more likely than large asteroids.

I'm not saying that a pandemic is a huge x-risk event, but rather that it's easy to underrate its connection with x-risk if one just looks at first-order impacts.

I think that even if possible this would probably need a superintelligence to design it, which means it's most sensibly placed under "AI risk" rather than "biorisk".

Agreed. "What would happen if the Thing had reached civilization?" has been one of my favourite daydream questions recently, but the Thing makes most sense as a specially engineered bioweapon developed by a very advanced intelligence.

It's plausible to me that lots of examples of anarcho-tyranny are driven by public opinion. African tribe A wants the country's government apparatus to be asymmetrically used in their favour against African tribe B. Bleeding heart liberals in the West want a legal system that "doesn't punch down."

Because the great powers are incapable of cooperating in this unselfish way. Nuclear arms control is my go-to example: the whole idea started just after the big powers acquired their nuclear arsenals, and it only applies to weaker latecomers.

A notable example of how this works is the war in Ukraine. Ukraine was coaxed into not being a nuclear power post-USSR. At the time, this definitely seemed like a good idea. Now they are in a situation where they and the West are always afraid of escalation, because of Russia's nuclear weapons.

(The US has a track record of getting on the wrong side of escalation: in the Vietnam War, the US held back from e.g. a naval blockade of Vietnam to avoid China and the USSR becoming more involved. In the Korean War, long before MAD, the US didn't use its nuclear arsenal, to avoid escalation. I'm not saying that either decision was wrong.)

Even if pandemics can't kill off humanity alone, they can radically inhibit our ability to handle other existential risks, e.g. an asteroid: if there are only a few thousand people left, they're not going to be able to develop a space program sufficient to handle such a problem. Often when a species goes extinct, there seem to be a number of factors that accelerate each other, e.g. hunger in a changed environment, then disease, then increased predation, then problems of fertile males and fertile females hooking up, then inbreeding...

However, a virus with a sufficiently long asymptomatic period during which it can still spread could kill off humanity, if it could reach 100% of the population in time. Think of something like airborne HIV. Is that likely? No. Is it scientifically possible? Yes.

I agree that synthetic biology is the more plausible threat.

I share your pessimism. I think of it this way: the amount of time it takes to do anything increases with the amount of time available; the amount of admin that a bureaucracy requires of you increases with the amount they think you can do.

So LLM-driven increases in people's capacity to handle admin work will result in an increase in the amount of required admin. "Of course you can complete this form as well; you can use an LLM to help you..."

Being weak martyrs worked pretty well for the early Christians and it seems to work even better now that Christian martyrdom is entrenched (usually not under that name) in Western culture. Even someone with a "strongman" image like Trump hasn't suffered from being (from his supporters' perspectives) persecutable by the legal elites.

Reminds me a bit of 2001: A Space Odyssey, where (IIRC) HAL makes a mistake because of conflicting requirements created by secrecy about what happened with the alien monolith on the Moon, but HAL's self-awareness makes him try to cover up this mistake, at the cost of the crew's lives...

An example of this is labels for legislation. The reason why they tend to have fuzzy "apple pie" names like "Inflation Reduction Act," "Patriot Act," "Social Justice Act" etc. is that a lot of voters will never think far beyond the labels.

Yes, one of the marks of bad social science (and other sciences, but it's particularly tempting in complex open systems) is not to ask the question, "And then what?"

Ah, that additional data does give a clearer picture.

Maybe "Peter-Singer-essay-style austerity", to ensure accurate phrasing just in case those allegations aren't all faked?

I was thinking specifically of his ideas about giving aid rather than his sex life, but yes.

I fear the real distinction here is that abuse of other people (the ones who can't afford to isolate themselves from crime) is easier for most people to accept than privation of themselves.

That's certainly easier, but people also seem willing to tolerate e.g. volunteering their pronouns when it's required for the job.

Although, in this analysis, "donating all your income to the poor" is deprivation that's also often tainted with various levels of abuse.

Depends on the poor people in question. Singer was thinking of starving children in the Third World, IIRC. However, as you say:

a second-order look at incentives suggests that rewarding abuse might merely engender more abuse.

Malthusian logic would suggest that the response of many Third World parents would be to have more children... At that point, serious benevolence-only types might start considering measures to encourage smaller families, such as "More education for women" or "More encouragement of contraception."

Hell, the biggest predictor of homelessness rates in an area is housing prices.

Could that be because both homeless people and non-homeless people want to live in certain areas, while the latter pay for the privilege and thereby drive up housing prices?

The part I struggle with is, how does a society argue against compassion?

This is the problem brought up in David Stove's What's Wrong with Benevolence? His answer to the title question is: nothing, provided it is combined with other virtues. What's wrong is the elevation of benevolence to the status of the fundamental virtue, which began around the 18th century and was accelerated by the utilitarians.

What is required is the recognition that other virtues have a fundamental value, e.g. justice and prudence. This is not easy, even if the arguments are good, because most people are highly agreeable (in the Big Five sense) so they fear conflict, and they tend to see benevolence as a route to conflict-avoidance: "If only we are kind enough to the unhoused darlings, they won't cause any trouble to us."

It's the same dynamic with a lot of woke activism. Disagreeable radicals can bully around most people, because most people's default model for handling such conflicts is to bend the knee and hope it saves their own necks.

So elevating benevolence as the sole virtue has the persuasive power of elevating most people's submissive natures into approved virtues, and hence it has both philosophical arguments and self-interest in its favour. That's also why people's benevolence tends to extend to e.g. accepting misbehaviour by the homeless, but not Peter Singer-style austerity of living like a monk and donating all your income to the poor. Accepting abuse is much easier to market than undertaking privation.

In this case, I think the worry is that you should expect an apparent divergence between the data for the two populations (men and women) because of a selection effect, so there's no reason to infer that the divergence is actually a property of the populations (rather than of your samples). It's like estimating the number of bats in two forests, but measuring one forest at night and the other during the day.

But what other data is there to trust about that timeline?

If there's a lack of reliable data to form an estimate, then the rational response is agnosticism, not using unreliable data.

The data is there to be combined, I think, because data on levels of transfers/taxes is available due to this debate: https://www.brookings.edu/articles/measuring-income-inequality-a-primer-on-the-debate/

Also worth noting that the US had a national minimum wage long before many countries in Europe (e.g. the UK or Germany) and some European countries (I think Sweden and Denmark?) don't have national minimum wages.

The US beats Western Europe in many aspects of regulation, which is partly why the US economy does better, but there are exceptions.

I think the argument is that the welfare state's consequences are more apparent in the long run, as you get e.g. intergenerational welfare dependency, people not saving enough for their own retirement, people choosing safe careers rather than taking risks (and getting taxed heavily on the rewards), people not having kids because they trust in the state to look after them in their retirement etc.

True, the US has some of these incentives, but arguably not to the same degree as Western Europe.

There's also the argument that a rising dependency ratio in an ageing population is when the welfare state really becomes a drag, and the developed world is facing exactly that due to demographics. Most welfare states were created for completely different population structures. That's why, despite rising taxes, cuts to services, and reforms, the fiscal outlook in most of Western Europe is still bleak: no matter how you walk, it's going to be uncomfortable in shoes that are too small.

There is a hilarious literature on ice cream and dieting.

Seems like a good hustle. It's hard to verify someone's diet, and there is a lot of wishful thinking when it comes to diets.

Anecdote: I remember a friend who had been watching lots of vegan YouTube/Instagram videos and was doing a vegan diet for "aesthetics." She was struggling to make it work. We had a mutual friend who was vegan, so I said, "Why not ask him for advice?" Her response? "Have you seen how he looks?" Our mutual friend looked noticeably unhealthy. While a lot of that had to do with the drugs and drink, the vegan diet was no panacea. Turns out that to look like a Californian influencer, copying their diet is not enough.

Annoyingly for her, I'd lost a lot of weight in that period on a diet whose main principle was just to reduce sugar consumption and eat high-satiety foods (fat, fibre, protein, complex carbohydrates).