roystgnr

0 followers   follows 0 users   joined 2022 September 06 02:00:55 UTC
User ID: 787   Verified Email

No bio...

He lost 48% to 50%, though, right? With Alabama something like 62% Republican, that means perhaps a quarter of Republicans there who would have voted for him didn't, and the rest held their nose and voted for him anyway.

I suspect the vast majority of his voters believed the allegations to be false, so their votes aren't evidence of evil, but willful ignorance isn't a great alternative. The guy's denials were waffling, self-contradictory, and self-incriminating. "I don't remember ever dating any girl without the permission of her mother" is not the sort of thing you say when you're into adult women.

Not allying with Stalin doesn't mean the Soviets collectively drop their guns and meekly submit to slaughter.

Some of them wouldn't even have had guns to drop - a hundred billion (current) dollars of war materiel here, a hundred billion dollars there, and pretty soon you're talking about real support.

I admit, based on how tragically well-situated the USSR was in the aftermath of WWII, it's easy to think there must have been some lower amount of assistance that would have been a much better outcome for the world as a whole, by putting the same resources to work on the Western Front instead, but:

  1. What Western Front, at that point? We didn't start assisting the USSR until a year after Dunkirk, at which point we had no beachhead and wouldn't be able to establish one for years. The choice wasn't "bleed the Nazis by supporting the Soviets" vs "bleed the Nazis ourselves", it was vs "do nothing".

  2. What is the minimal lower amount of assistance that would have left us with "weakened USSR unable to drop the Iron Curtain" but not with "conquered USSR unable to keep the Nazis fighting on two fronts"? Get this one wrong and we still end up with multiple genocides (only like a third of the Soviet victims were post-WWII; by the time they became one of the Allies it was too late to save the other 2/3rds), except one of those becomes a much larger, nearly unstoppable genocide.

I suppose we'd get The Bomb and Hitler was further from catching up to that than Stalin was, so perhaps that makes both questions moot? But that only works with a lot more hindsight than anyone could have been expected to have in 1941. For that matter, even "not allying with genocidal maniacs" only makes sense with hindsight. In the 30s Duranty was getting his Pulitzer for reporting on how totally fake the Holodomor was, and in the 40s FDR was telling everyone how trustworthy he thought Stalin was. Enough people bought it that we demobilized like 85% of our military in the 2 years before the Berlin Blockade.

Ah, Benford's Law. Great in other contexts, but here that one didn't pass the smell test for me; the "law" only applies if you're sampling from distributions spread over orders of magnitude, not voting districts drawn to be nearly equally sized multiplied by vote percentages centered around .5. I later learned there's a clever trick where you can look at later digits' distributions instead of the first digit's, but all the skeptics I saw in 2020 were just misapplying the basic version of the law.
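To see why the basic version fails here, a few lines of simulation are enough (a sketch with made-up distributions: one spread over six orders of magnitude, one shaped like equal-size districts times near-50% vote shares):

```python
import math
import random

def first_digit(x):
    """Leading nonzero decimal digit of a positive number."""
    return int(f"{x:.10e}"[0])  # scientific notation's first char

def digit_freqs(samples):
    """Observed frequency of each leading digit 1..9."""
    counts = [0] * 9
    for x in samples:
        counts[first_digit(x) - 1] += 1
    return [c / len(samples) for c in counts]

random.seed(0)

# Spread over six orders of magnitude: Benford's Law applies.
wide = [10 ** random.uniform(0, 6) for _ in range(100_000)]

# Near-equal-size districts times vote shares near 0.5: it doesn't.
narrow = [random.gauss(10_000, 500) * random.gauss(0.5, 0.05)
          for _ in range(100_000)]

# Benford's predicted frequency for leading digit d is log10(1 + 1/d).
benford = [math.log10(1 + 1 / d) for d in range(1, 10)]
```

The wide sample's leading-1 frequency lands right on Benford's predicted 30.1%; the narrow sample's leading digits pile up around 4-6 instead, with digit 1 almost absent, so deviation from Benford there is evidence of nothing.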

I've seen final vote tallies that were obvious fakes from the numbers alone, but for elections like Saddam's or Putin's, not Trump's or Biden's.

I still heartily approve of trying to check, though. An election isn't just about getting the right result, it's also supposed to be about getting the right result in a transparently trustworthy way.

Yes, homicides fell.

This isn't the data I linked to. The violent crime rate is about 75x the homicide rate, and both fell by half.

https://journals.sagepub.com/doi/abs/10.1177/108876790200600203?journalCode=hsxa

This talks about a lethality decrease from the 1960s to 90s. I'm talking about reductions in 98% non-lethal crime from the 90s through 2010s.

it's hard to claim the increase in violence as a good thing.

It's especially hard if violence decreased 50%.

Does Chabad influence Jewish beliefs about Gentile souls? That purported inherent Jewish contempt for Gentile souls was the bailey, right? I thought "You can find such awful beliefs in one subsect's founder's centuries-old book" was a small motte to retreat to, but "The sect gives Jewish college kids community centers and only 84% of Jews aren't 'semi-regular' service attendees" is a motte so tightly walled in I can't even find a window from which to see out. Wait until you hear about the Salvation Army.

Even the "network of camps" stuff needs fleshing out. I went to (Christian) religious summer camp at one point as a kid. We never got an "unbaptized babies end up in hell" lesson there, though, despite it being fairly fundamental to the denomination's roots. Do Chabad camp attendees get the adults' "Gentile souls are crummy" lessons, or is "eh, gloss over the creepy stuff in front of the kids" a common trait?

We did get the "Abraham was great for being willing to kill his son when the voices only he could hear told him to" lesson occasionally in (again, Christian) church. Likewise for Noah's Ark and non-Noahs' Watery Graves, though that was treated as more parable than literal. I also reached the "Moses getting chided by God for not quite being genocidal enough" parts when reading the Bible by myself. There is indeed lots of really awful stuff in actual Jewish scripture! The catch is that it got eagerly adopted by billions of Christians, too, because "form moral judgments independently" and "treat all human life as equally sacred, yes even some of those outsiders" haven't been very popular among any groups. That Chabad book actually predates the last time some Christian authorities hanged a man for heresy! ("according to Ripoll, it was not necessary to hear Mass in order to save one's soul from damnation"? String him up, for that?) The claim that Judaism has "moral quandaries" is impossible to argue against, but suggesting that it's somehow special in this respect can't be done without ignoring all other human ideology, and then picking out one subsect to speak for a whole is like a willful rejection of all the tragicomedy of religious belief, Jewish belief in particular.

The answer for Bayesians is p=0.5, and they don't encode uncertainty at all.

This is false. Bayesian calculations are quite capable of differentiating between epistemic and aleatory uncertainty. See the first Google result for "Bayes theorem biased coin" for an example.

(edit to add: not a mathematically perfect example; the real calculations here treat a bias as a continuous probability space, where a Bayesian update turns into an integral equation, and instead discretizing into 101 bins so you can use basic algebra is in the grey area between "numerical analysis" and "cheating".)
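For what it's worth, the discretized toy version is only a few lines; here's a sketch (uniform prior over 101 candidate biases, nothing specific to that Google result):

```python
# Bayesian update for an unknown coin bias, discretized into 101 bins.
BIASES = [i / 100 for i in range(101)]  # candidate values for P(heads)

def update(prior, heads):
    """Posterior over the bias after observing one flip."""
    likelihood = [(b if heads else 1 - b) for b in BIASES]
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

def predictive_heads(dist):
    """P(next flip is heads), integrating over the epistemic uncertainty."""
    return sum(p * b for p, b in zip(dist, BIASES))

prior = [1 / 101] * 101  # epistemic uncertainty: no idea what the bias is

posterior = prior
for flip in [True, True, False, True]:  # observe H, H, T, H
    posterior = update(posterior, flip)
```

Before any flips the predictive probability is 0.5, but the uniform prior behind it is retained in full; after four flips the whole distribution has shifted, which a bare 0.5 could never do.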

It's a shame I can only upvote this once; thank you.

It just says "having previously taken an oath" - shouldn't that apply to former office-holders as well, even if their term(s) ended before the insurrection?

(still doesn't seem like it should have applied to Cox, who was neither a present nor former office-holder before the Civil War)

Frequently what happens is that it gets comically enormous and useless as various stakeholders fill it with random bullshit.

Could you give any examples of "erroneous"? I've certainly seen "enormous"/"useless"/"random bullshit", and burying important truths in so much filler they get ignored might have consequences as bad as falsehoods, but I just don't recall seeing any likely falsehoods. Even the random bullshit is unevidenced rather than obviously untrue, along the lines of "let's put X in the list of possible side effects, as CYA, even though our only evidence for X is that in one study the treatment group reported it almost as often as the control group"...

The Motte never comes anywhere near "universal" agreement on anything.

You worded this as an unqualified absolute just to troll all of us who disagree with it at that extreme, didn't you?

I know I shouldn't let myself be swayed further by an argument's rhetoric than by its logic, but Michael Shermer really found a damning framing supporting your point.

I'm still solidly "Collective punishment is bad", but I have to admit: if we could mete out omnisciently accurate individual punishment, some collectives might have a much larger fraction of punished individuals than others...

I'd agree that your alternative course of action would have been a much better idea, at least because acknowledging the importance of subtext and escalating away from plausible deniability gradually is a good way to communicate "if I were your boyfriend I probably wouldn't do anything to suddenly embarrass you". I also sympathize with anyone who feels so uncomfortable about delivering rejection that they'll avoid a person they've recently had to reject. But...

pretending to remain friends with her, when she knows you want more, which is not sustainable

"pretending"?

This sounds so much like a pot-shot Scott Alexander thought was embarrassing enough to delete:

They always use this phrasing like "Man, I thought he liked me as a person and enjoyed spending time with me. But then he said he wanted to date me! What a dirty rotten liar!" It sounds for all the world like not only are there two ladders, but that women can't even conceive of the idea of having a single ladder where liking someone and wanting to date them are correlated.

I thought that was a productive post overall because "just ask for dates in socially-recognized venues or via friends-of-friends" was a useful takeaway for some people, but if his overgeneralization actually applies to some women, then "don't reject suitors specifically because they were attracted to your personality first" might have been even more useful.

Hunter told one of the Chinese business men his father wanted to understand why he wasn’t laid yet.

I assume that "laid" was a typo for "paid", but it's Hunter, so I'm not 100% certain there...

Either way, could you link a source for this?

Honestly, I've wanted to reply to like half your comments with that same request. There's so much playing Telephone on the internet and so many people playing it poorly that my first instinct is to filter out anyone who makes a surprising claim without either an identity plus word-for-word quote or a hyperlink to the claim's source. It's bad enough that places like CNN do that so often, but if TheMotte commenters can't be held to a higher standard than mainstream reporters then what are we even doing here?

(really the only Turtledove you need tbh)

Come on; the trick to this game is to "Use simple lies that seem believable."

Were you picturing the resources all being used on Earth? Spread among a Dyson cloud of colonies, that much energy is a nice standard of living for quadrillions of people. Concentrated on Earth the waste heat would vaporize us.

That all makes sense; thank you!

I'd say you should be using a recovery plugin for your browser ... but if you go that route, make sure you check that it works here. Typio Form Recovery works for me on Reddit but not on TheMotte.

They're definitely going to be paying off some of the R&D that way. Starshield has its own separate satellites and its own network, so you'd think Starlink revenue would still have to cover marginal costs for the commercial sats, but even if Starshield never needs to piggy-back on the commercial network, I wouldn't be surprised if SpaceX is getting extra cash to guarantee the presence of all that (from an asat perspective) "chaff"...

a big disaster (not extinction, or close to it, but some kind of big spectacle) that prompts serious regulation

Fingers crossed. With typical normalization of deviance this is how it happens, because eventually you push farther than is safe and that causes a spectacular disaster. But does that still hold when a disaster is an agent with obvious incentives to avoid spectacle? It could be that the first thing smart enough to cause a real disaster is also smart enough to hold back until the disaster would be irrecoverable.

Homicide isn't tracked by victimization surveys. Unless there are vampire homicides and a particularly brave interviewer, I suppose.

Is the decoupling of homicide from other violent crime during a mass panic something to be really surprised about, though? With 98% of violent crime non-lethal, it only takes a tiny change in conversion rate. If a burglar is suddenly looking at a bunch of Covid-locked-down houses that no longer ever seem to empty, it doesn't seem a priori implausible that a few percent of them are going to say "no, too risky for me" (so the violent crime rate component of robberies still drops) while a few percent are going to say "I need the money, and if it's not empty, I can fix that" (so the homicide rate skyrockets). For that matter, what happens to the other side of the equation during the post-Floyd period? A homeowner who might have said "I'll run and call the police" is now more likely to conclude "the police might just shoot me by accident" or "the police might not even show up tonight" and take things into their own hands. Still a robbery, still 1 violent crime, but maybe now it's 4% likely to turn into a homicide instead of 2%.
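To make the arithmetic explicit (purely illustrative numbers, not real crime statistics):

```python
# Illustrative only: total violent crime can fall while homicides rise,
# because homicides are a tiny "conversion rate" applied to a huge base.
before_violent = 1_000_000            # hypothetical annual violent crimes
before_lethality = 0.02               # 2% of them end in a homicide
before_homicides = before_violent * before_lethality

after_violent = 0.9 * before_violent  # violent crime falls 10%...
after_lethality = 0.04                # ...but the conversion rate doubles
after_homicides = after_violent * after_lethality
```

Violent crime drops from 1,000,000 to 900,000 while homicides jump from 20,000 to 36,000; a two-point shift in the conversion rate swamps a ten-percent drop in the base.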

All this said ... could you answer my original question? "(Counter) citation needed?" I'm getting the impression that you're so confident of "an increase in violence" over these decades that no new evidence will change your mind, and I'd really like to know whether the explanation is that there's some far-more-compelling old evidence that you've neglected to mention, or whether this is just confidence not based on evidence. I can come up with a dozen reasons the latter sort of confidence might exist (witness the long tails of these responses - surely the news wouldn't hammer on a category of story 24/7 if it was about as common as deaths by lightning!) but I'm hoping to stick with the former for myself.

Recognizing good output is half the problem; generating an enormous array of problems is important too. With complex board games every non-deterministic opponent (including every iteration of an AI during training) is a fount of problems; with math the new problems practically generate themselves as conjectures based on previous problems' intermediate definitions. I don't see how to come up with millions of independent programming problems automatically. That might just be a failure of my imagination (or more charitably, I just haven't thought about the idea for long enough), though.

Oh, yeah; I'd expect that sort of "self-play" to get peak model performance from "average human responses" to "best-human responses recognizable by an average human". And the economic effects of getting there, even if it caps out there, might be astounding. But frankly if we just end up having to retool 90% of the economy I'd be glad to see it, when the alternatives include extinction-level scenarios.

I think the most "real" thing I can imagine high-speed self-play for is math (theorem proving). Adding an inference step to a computer-verifiable theorem seems like it's as basic an output as choosing a move in a board game, and coming up with "interesting theorems" seems like it could be done automatically (some function combining "how short is it to state" with "how long is the shortest proof"?), and yet an AI training to play that "game" might eventually come up with something more useful than a high ELO.

I don't work in finance, but I can easily see how a zero-sum financial transaction from one perspective can be positive sum for the economy as a whole. If gullible layman day traders effectively hand their money to you, it sucks to be them but at least no value is destroyed, it's only transferred. If there were instead nobody smart in finance to take the other side of their bets, and their money ended up in the hands of ventures that can't possibly succeed, it would get destroyed just as surely but so would the value it represented. Even if the counterfactual were that they end up funding some safe-but-low-yield investment instead of funding startups that would have wildly succeeded, that's still a real loss due to opportunity cost, although in that case it's harder to say a priori that this is worse than the utility lost via decreasing marginal value of money when poorer amateurs lose gambles against richer experts.

I don't understand what would make you think I believe that.

It's the straightforward interpretation of

they don't encode uncertainty at all.

If you wanted to say "they don't encode uncertainty-about-uncertainty in the number 0.5", and not falsely imply that they don't encode uncertainty at all (0.5 is aleatory uncertainty integrated over epistemic uncertainty!) or that they don't encode all their uncertainty anywhere, you should have made that true claim instead.

You said of "They use many numbers to compute",

They don't.

This is flatly false. I just gave you two examples, still at the "toy problem" level even, the first discretizing an infinite-dimensional probability measure and using 101 numbers to compute, the second using polynomials from an infinite-dimensional subspace to compute!

You said,

Whatever uncertainty they had at the beginning is encoded in the number 0.5.

Which is false, because it ignores that the uncertainty is retained for further updates. That uncertainty is also typically published; that posterior probability measure is found in all sorts of papers, not just those graphs you ignored in the link I gave you. I'm sorry if not everybody calling themselves "Bayesian" always does that (though since you just ignored a link which did do that, you'll have to forgive me for not taking your word about it in other cases).

You said,

My conclusion is the same: p=0.5 is useless.

This is false. p=0.5 is what you need to combine with utilities to decide how to make an optimal decision without further data. If you have one binary outcome (like the coin flip case) then a single scalar probability does it, you're done. If you have a finite set of outcomes then you need |S|-1 scalars, and if you have an infinite set of outcomes (and/or conditions, if you're allowed to affect the outcome) you need a probability measure, but these are not things that Bayesians never do, they're things that Bayesians invented centuries ago.

the result is a single value.

This is trivially true in the end with any decision-making system over a finite set of possible decisions. You eventually get to "Do X1" or "Do X2". If you couldn't come up with that index as your result then you didn't make a decision!

If maximizing expected utility, you get that value from plugging marginalized probabilities times utilities and finding a maximum, so you need those probabilities to be scalar values, so scalar values is usually what you publish for decision-makers, in the common case where you're only estimating uncertainties and you're expecting others to come up with their own estimates of utilities. If you expect to get further data and not just make one decision with the data you have, you incorporate that data via a Bayesian update, so you need to retain probabilities as values over a full measure space, and so what you publish for people doing further research is some representation of a probability distribution.
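A minimal sketch of that decision step, with hypothetical utilities (the two-action, binary-outcome case; every number here is made up for illustration):

```python
def best_action(p_heads, utilities):
    """Pick the action maximizing expected utility over a binary outcome.

    utilities maps each action to (utility if heads, utility if tails).
    """
    def expected(action):
        u_heads, u_tails = utilities[action]
        return p_heads * u_heads + (1 - p_heads) * u_tails
    return max(utilities, key=expected)

# Hypothetical bet: "take" pays +10 on heads and -11 on tails; "pass" pays 0.
actions = {"take": (10, -11), "pass": (0, 0)}
```

At p=0.5 the bet's expected value is -0.5, so you pass; at p=0.6 it's +1.6, so you take it. The scalar p is exactly what makes the decision computable, which is the opposite of useless.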

I was not the one having trouble, they were.

Your title was literally "2 + 2 is not what you think", and as an example you used [2]+[2]=[4] in ℤ/4ℤ (with lazier notation), except you didn't know that there [0]=[4] so you just assumed it was definitively "0", then you wasted a bunch of people's time arguing against undergrad group theory.

Or do you disagree that in computing you can't put infinite information in a variable of a specific type?

What I disagreed with was

one bit can only fit one bit of information. Period.

This is the foundation of information theory.

And I disagreed with it because it was flatly false! The foundation of information theory is I = -log(P); this is only I=1 (bit) if P=1/2, i.e. equal probability in the case of 1 bit of data. I gave you a case where I=1/6, and a more complicated case where I=0.58 or I=1.58, and you flatly refuted the latter case with "it cannot be more". It can. -log₂(P) does exceed 1 for P<1/2. If I ask you "are you 26 years old", and it's a lucky guess so you say "yes", you've just given me one bit of data encoding about 6 bits of information. The expected information in 1 bit can't exceed 1 (you're probably going to say "no" and give me like .006 bits of information), but that's not the same claim; you can't even calculate the expected information without a weighted sum including the greater-than-1 term(s) in the potential information.
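The arithmetic is easy to check (the 1-in-64 prior for the age guess is my illustrative assumption):

```python
import math

def info_bits(p):
    """Self-information in bits of observing an event with probability p."""
    return -math.log2(p)

def entropy(ps):
    """Expected information of one draw: the weighted sum over outcomes."""
    return sum(p * info_bits(p) for p in ps)

# A lucky "yes" to "are you 26?" under a 1-in-64 prior: 6 bits of
# information delivered in a single bit of data.
surprise_yes = info_bits(1 / 64)

# But the *expected* information of the answer stays under 1 bit,
# because the likely "no" carries almost none.
expected = entropy([1 / 64, 63 / 64])
```

A fair coin's answer carries exactly 1 bit in expectation; any skew drops the expectation below 1, even though the rare outcome individually carries far more.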

Distinctions are important! If you want to talk like a mathematician because you want to say accurate things then you need to say them accurately; if you want to be hand-wavy then just wave your hands and stop trying to rob mathematics for more credible-sounding phrasing. The credibility does not rub off, especially if instead of understanding the corrections you come up with a bunch of specious rationalizations for why they have "zero justification".

Where are you getting 75% from?

From a pair of embarrassing mistakes. 73% would be the minimum (counting "64% was really 64.5% rounded down and 60% really 59.5% rounded up" cases) number of people who think there aren't adequate safeguards against a single innocent death but who didn't let that make a difference to their practical vs their theoretical opinions ... but of course I shouldn't have counted people who already think the death penalty is morally wrong in that number, plus I thought about rounding in the wrong direction.

The minimum overlap of 64% and 78% is 42%

It would be even more interesting if every person who thinks the death penalty isn't morally justified also thinks that its safeguards are perfect, but you're right, there's no inherent incompatibility there.
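The overlap bound there is plain inclusion-exclusion; a one-liner makes it checkable:

```python
def min_overlap(a, b):
    """Smallest possible overlap, in percentage points, between two groups
    covering a% and b% of the same population (inclusion-exclusion)."""
    return max(0, a + b - 100)
```

min_overlap(64, 78) gives the 42 above; two groups whose shares don't sum past 100% can avoid overlapping entirely.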