roystgnr

0 followers   follows 0 users
joined 2022 September 06 02:00:55 UTC
Verified Email

User ID: 787


Sure, but when I can't take a derivative I'll still prefer a finite difference over nothing. How would you think those estimates change when we reduce the delta?
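To make "how do the estimates change when we reduce the delta" concrete, here's a quick sketch (the function and step sizes are my own toy choices, not anything from the thread): a one-sided difference's error shrinks linearly in the step size, a centered difference's quadratically.

```python
import math

def forward_diff(f, x, h):
    # One-sided finite difference: error shrinks like O(h)
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # Centered finite difference: error shrinks like O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

exact = math.cos(1.0)  # d/dx sin(x) at x = 1
for h in (1e-1, 1e-2, 1e-3):
    fwd_err = abs(forward_diff(math.sin, 1.0, h) - exact)
    ctr_err = abs(central_diff(math.sin, 1.0, h) - exact)
    print(f"h={h:g}  forward err={fwd_err:.2e}  central err={ctr_err:.2e}")
```

Shrink h by 10x and the forward error drops about 10x while the central error drops about 100x (until floating-point roundoff takes over at very small h).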

When the causal graph has more than two nodes, something can have a negative correlation (when measured with no controls) despite having a positive causative effect (which would show a positive correlation in an RCT), or vice versa. People who get chemotherapy are way more likely to die of cancer than people who don't.

I can't imagine the education/fertility relationship being an example of that, though. Nerds go to college more and have fewer kids, but not as many fewer as they'd have had without going to college? Sounds like a stretch.
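A toy simulation of that chemotherapy-style confounding (all the numbers are invented for illustration): the treatment halves death risk at every severity level, yet the raw treatment/death correlation looks damning because sicker patients get treated more often.

```python
import random

random.seed(0)
treated = {"n": 0, "deaths": 0}
untreated = {"n": 0, "deaths": 0}

for _ in range(100_000):
    severe = random.random() < 0.5         # the confounder: disease severity
    p_treat = 0.9 if severe else 0.1       # sicker patients get treated more often
    got_treatment = random.random() < p_treat
    p_die = 0.6 if severe else 0.1
    if got_treatment:
        p_die *= 0.5                       # causal effect: treatment halves death risk
    group = treated if got_treatment else untreated
    group["n"] += 1
    group["deaths"] += random.random() < p_die

rate = lambda g: g["deaths"] / g["n"]
print(f"treated death rate:   {rate(treated):.3f}")    # ~0.275
print(f"untreated death rate: {rate(untreated):.3f}")  # ~0.150
```

With no controls, the treated group dies nearly twice as often; condition on severity (or randomize treatment, as an RCT would) and the treatment's benefit reappears.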

On #1: try the second button from the top on the right of the gas pump's screen. It's almost never labeled as such, but it's usually set up as Mute. I've heard of one pump brand that uses top right instead, but never encountered it myself.

It just says "having previously taken an oath" - shouldn't that apply to former office-holders as well, even if their term(s) ended before the insurrection?

(still doesn't seem like it should have applied to Cox, who was neither a present nor former office-holder before the Civil War)

Frequently what happens is that it gets comically enormous and useless as various stakeholders fill it with random bullshit.

Could you give any examples of "erroneous"? I've certainly seen "enormous"/"useless"/"random bullshit", and burying important truths in so much filler they get ignored might have consequences as bad as falsehoods, but I just don't recall seeing any likely falsehoods. Even the random bullshit is unevidenced rather than obviously untrue, along the lines of "let's put X in the list of possible side effects, as CYA, even though our only evidence for X is that in one study the treatment group reported it almost as often as the control group"...

The Motte never comes anywhere near "universal" agreement on anything.

You worded this as an unqualified absolute just to troll all of us who disagree with it at that extreme, didn't you?

I know I shouldn't let myself be swayed further by an argument's rhetoric than by its logic, but Michael Shermer really found a damning framing supporting your point.

I'm still solidly "Collective punishment is bad", but I have to admit: if we could mete out omnisciently accurate individual punishment, some collectives might have a much larger fraction of punished individuals than others...

I'd agree that your alternative course of action would have been a much better idea, at least because acknowledging the importance of subtext and escalating away from plausible deniability gradually is a good way to communicate "if I were your boyfriend I probably wouldn't do anything to suddenly embarrass you". I also sympathize with anyone who feels so uncomfortable about delivering rejection that they'll avoid a person they've recently had to reject. But...

pretending to remain friends with her, when she knows you want more, which is not sustainable

"pretending"?

This sounds so much like a pot-shot Scott Alexander thought was embarrassing enough to delete:

They always use this phrasing like "Man, I thought he liked me as a person and enjoyed spending time with me. But then he said he wanted to date me! What a dirty rotten liar!" It sounds for all the world like not only are there two ladders, but that women can't even conceive of the idea of having a single ladder where liking someone and wanting to date them are correlated.

I thought that was a productive post overall because "just ask for dates in socially-recognized venues or via friends-of-friends" was a useful takeaway for some people, but if his overgeneralization actually applies to some women, then "don't reject suitors specifically because they were attracted to your personality first" might have been even more useful.

Hunter told one of the Chinese business men his father wanted to understand why he wasn’t laid yet.

I assume that "laid" was a typo for "paid", but it's Hunter, so I'm not 100% certain there...

Either way, could you link a source for this?

Honestly, I've wanted to reply to like half your comments with that same request. There's so much playing Telephone on the internet and so many people playing it poorly that my first instinct is to filter out anyone who makes a surprising claim without either an identity plus word-for-word quote or a hyperlink to the claim's source. It's bad enough when places like CNN so often do that, but if TheMotte commenters can't be held to a higher standard than mainstream reporters then what are we even doing here?

(really the only Turtledove you need tbh)

Come on; the trick to this game is to "Use simple lies that seem believable."

Were you picturing the resources all being used on Earth? Spread among a Dyson cloud of colonies, that much energy is a nice standard of living for quadrillions of people. Concentrated on Earth the waste heat would vaporize us.

That all makes sense; thank you!

I'd say you should be using a recovery plugin for your browser ... but if you go that route, make sure you check that it works here. Typio Form Recovery works for me on Reddit but not on TheMotte.

They're definitely going to be paying off some of the R&D that way. Starshield has its own separate satellites and its own network, so you'd think Starlink revenue would still have to cover marginal costs for the commercial sats, but even if Starshield never needs to piggy-back on the commercial network, I wouldn't be surprised if SpaceX is getting extra cash to guarantee the presence of all that (from an anti-satellite perspective) "chaff"...

a big disaster (not extinction, or close to it, but some kind of big spectacle) that prompts serious regulation

Fingers crossed. With typical normalization of deviance this is how it happens, because eventually you push farther than is safe and that causes a spectacular disaster. But does that still hold when a disaster is an agent with obvious incentives to avoid spectacle? It could be that the first thing smart enough to cause a real disaster is also smart enough to hold back until the disaster would be irrecoverable.

Yes, homicides fell.

This isn't the data I linked to. The violent crime rate is about 75x the homicide rate, and both fell by about half.

https://journals.sagepub.com/doi/abs/10.1177/108876790200600203?journalCode=hsxa

This talks about a lethality decrease from the 1960s to 90s. I'm talking about reductions in 98% non-lethal crime from the 90s through 2010s.

it's hard to claim the increase in violence as a good thing.

It's especially hard if violence decreased 50%.

Recognizing good output is half the problem; generating an enormous array of problems is important too. With complex board games every non-deterministic opponent (including every iteration of an AI during training) is a fount of problems; with math the new problems practically generate themselves as conjectures based on previous problems' intermediate definitions. I don't see how to come up with millions of independent programming problems automatically. That might just be a failure of my imagination (or more charitably, I just haven't thought about the idea for long enough), though.

Oh, yeah; I'd expect that sort of "self-play" to get peak model performance from "average human responses" to "best-human responses recognizable by an average human". And the economic effects of getting there, even if it caps out there, might be astounding. But frankly if we just end up having to retool 90% of the economy I'd be glad to see it, when the alternatives include extinction-level scenarios.

I think the most "real" thing I can imagine high-speed self-play for is math (theorem proving). Adding an inference step to a computer-verifiable theorem seems like it's as basic an output as choosing a move in a board game, and coming up with "interesting theorems" seems like it could be done automatically (some function combining "how short is it to state" with "how long is the shortest proof"?), and yet an AI training to play that "game" might eventually come up with something more useful than a high ELO.
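That "some function combining" idea could be as simple as the following (purely my own toy scoring, not any established metric): reward theorems that are short to state but long to prove.

```python
def interestingness(statement_tokens, shortest_proof_steps):
    # Short to state but long to prove scores high; trivialities score low.
    return shortest_proof_steps / statement_tokens

# A Fermat's-Last-Theorem-shaped statement (tiny statement, enormous proof)
# would dominate this score; an "x = x"-shaped statement would be near zero.
interestingness(10, 1000)   # 100.0
interestingness(10, 1)      # 0.1
```

A real system would need to normalize for proof-search effort and filter out artificially-padded statements, but the ranking intuition is the same.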

I don't work in finance, but I can easily see how a zero-sum financial transaction from one perspective can be positive sum for the economy as a whole. If gullible layman day traders effectively hand their money to you, it sucks to be them but at least no value is destroyed, it's only transferred. If there were instead nobody smart in finance to take the other side of their bets, and their money ended up in the hands of ventures that can't possibly succeed, it would get destroyed just as surely but so would the value it represented. Even if the counterfactual were that they end up funding some safe-but-low-yield investment instead of funding startups that would have wildly succeeded, that's still a real loss due to opportunity cost, although in that case it's harder to say a priori that this is worse than the utility lost via decreasing marginal value of money when poorer amateurs lose gambles against richer experts.

I don't understand what would make you think I believe that.

It's the straightforward interpretation of

they don't encode uncertainty at all.

If you wanted to say "they don't encode uncertainty-about-uncertainty in the number 0.5", and not falsely imply that they don't encode uncertainty at all (0.5 is aleatory uncertainty integrated over epistemic uncertainty!) or that they don't encode all their uncertainty anywhere, you should have made that true claim instead.

You said of "They use many numbers to compute",

They don't.

This is flatly false. I just gave you two examples, still at the "toy problem" level even, the first discretizing an infinite-dimensional probability measure and using 101 numbers to compute, the second using polynomials from an infinite-dimensional subspace to compute!

You said,

Whatever uncertainty they had at the beginning is encoded in the number 0.5.

Which is false, because it ignores that the uncertainty is retained for further updates. That uncertainty is also typically published; that posterior probability measure is found in all sorts of papers, not just those graphs you ignored in the link I gave you. I'm sorry if not everybody calling themselves "Bayesian" always does that (though since you just ignored a link which did do that, you'll have to forgive me for not taking your word about it in other cases).

You said,

My conclusion is the same: p=0.5 is useless.

This is false. p=0.5 is what you need to combine with utilities to decide how to make an optimal decision without further data. If you have one binary outcome (like the coin flip case) then a single scalar probability does it, you're done. If you have a finite set of outcomes then you need |S|-1 scalars, and if you have an infinite set of outcomes (and/or conditions, if you're allowed to affect the outcome) you need a probability measure, but these are not things that Bayesians never do, they're things that Bayesians invented centuries ago.

the result is a single value.

This is trivially true in the end with any decision-making system over a finite set of possible decisions. You eventually get to "Do X1" or "Do X2". If you couldn't come up with that index as your result then you didn't make a decision!

If maximizing expected utility, you get that value by multiplying marginalized probabilities by utilities and taking the maximum, so you need those probabilities to be scalar values, and scalar values are usually what you publish for decision-makers, in the common case where you're only estimating uncertainties and you're expecting others to come up with their own estimates of utilities. If you expect to get further data and not just make one decision with the data you have, you incorporate that data via a Bayesian update, so you need to retain probabilities as values over a full measure space, and so what you publish for people doing further research is some representation of a probability distribution.
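A minimal sketch of what "combine p with utilities" means for the binary case (the action names and payoffs are invented for illustration):

```python
def best_action(p_heads, utilities):
    # utilities[action] = (utility if heads, utility if tails)
    def expected_utility(action):
        u_heads, u_tails = utilities[action]
        return p_heads * u_heads + (1 - p_heads) * u_tails
    return max(utilities, key=expected_utility)

utilities = {
    "take_bet": (2.0, -1.0),   # win 2 on heads, lose 1 on tails
    "abstain":  (0.0,  0.0),
}
best_action(0.5, utilities)   # "take_bet": EU = 0.5*2 - 0.5*1 = 0.5 > 0
best_action(0.2, utilities)   # "abstain":  the bet's EU is 0.2*2 - 0.8*1 = -0.4
```

The scalar p=0.5 is exactly what this function needs; the full posterior distribution only becomes necessary once you want to update on new data.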

I was not the one having trouble, they were.

Your title was literally "2 + 2 is not what you think", and as an example you used [2]+[2]=[4] in ℤ/4ℤ (with lazier notation), except you didn't know that there [0]=[4] so you just assumed it was definitively "0", then you wasted a bunch of people's time arguing against undergrad group theory.
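For the record, the point about ℤ/4ℤ in code form:

```python
# In Z/4Z the elements are residue classes [0], [1], [2], [3],
# and [4] is the *same* class as [0] -- not a distinct element.
def add_mod4(a, b):
    return (a + b) % 4

add_mod4(2, 2)        # 0, i.e. the class [4] = [0]
assert 4 % 4 == 0     # [4] and [0] name the same element
```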

Or do you disagree that in computing you can't put infinite information in a variable of a specific type?

What I disagreed with was

one bit can only fit one bit of information. Period.

This is the foundation of information theory.

And I disagreed with it because it was flatly false! The foundation of information theory is I = -log(P); this is only I=1 (bit) if P=1/2, i.e. equal probability in the case of 1 bit of data. I gave you a case where I=1/6, and a more complicated case where I=0.58 or I=1.58, and you flatly refuted the latter case with "it cannot be more". It can. -log₂(P) does exceed 1 for P<1/2. If I ask you "are you 26 years old", and it's a lucky guess so you say "yes", you've just given me one bit of data encoding about 6 bits of information. The expected information in 1 bit can't exceed 1 (you're probably going to say "no" and give me like .006 bits of information), but that's not the same claim; you can't even calculate the expected information without a weighted sum including the greater-than-1 term(s) in the potential information.

Distinctions are important! If you want to talk like a mathematician because you want to say accurate things then you need to say them accurately; if you want to be hand-wavy then just wave your hands and stop trying to rob mathematics for more credible-sounding phrasing. The credibility does not rub off, especially if instead of understanding the corrections you come up with a bunch of specious rationalizations for why they have "zero justification".
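The arithmetic above, spelled out (the 1/64 prior for a lucky age guess is just an illustrative assumption):

```python
import math

def surprisal_bits(p):
    # Shannon self-information of an event with probability p: I = -log2(p)
    return -math.log2(p)

def entropy_bits(p):
    # Expected information of a yes/no answer with P(yes) = p
    return p * surprisal_bits(p) + (1 - p) * surprisal_bits(1 - p)

surprisal_bits(0.5)       # 1.0 bit: the only case where 1 bit of data carries exactly 1 bit
surprisal_bits(1 / 64)    # 6.0 bits of information from a single lucky "yes"
entropy_bits(1 / 64)      # ~0.116 bits expected -- the *expectation* never exceeds 1
```

The surprisal of one binary answer can exceed 1 bit whenever p < 1/2; it's only the entropy, the probability-weighted average of those surprisals, that is capped at 1 bit.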

Where are you getting 75% from?

From a pair of embarrassing mistakes. 73% would be the minimum number of people (counting cases like "64% was really 64.5% rounded down and 60% was really 59.5% rounded up") who think there aren't adequate safeguards against a single innocent death but who didn't let that make a difference between their practical and their theoretical opinions ... but of course I shouldn't have counted people who already think the death penalty is morally wrong in that number, plus I thought about rounding in the wrong direction.

The minimum overlap of 64% and 78% is 42%

It would be even more interesting if every person who thinks the death penalty isn't morally justified also thinks that its safeguards are perfect, but you're right, there's no inherent incompatibility there.
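The inclusion-exclusion arithmetic behind that 42%:

```python
def min_overlap(frac_a, frac_b):
    # Smallest possible overlap of two subgroups of the same population:
    # max(0, A + B - 100%), since at most (100% - A) of B can avoid A.
    return max(0.0, frac_a + frac_b - 1.0)

min_overlap(0.64, 0.78)   # ~0.42: at least 42% must hold both views
min_overlap(0.40, 0.50)   # 0.0:  these two groups could be entirely disjoint
```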

If you have a suggestion that isn't "ban certain topics/arguments," let's hear it.

I've always been a fan of rate-limiting, in theory. If the "weekly" thread was a yearly thread then my reaction would probably be an excited "wow, here comes the fight again!" rather than an apprehensive "is anyone going to wade into the fight again or has everyone been exhausted to apathy now?"

It works great over WiFi 6 once linked, but the startup for SteamVR, the Oculus program, and the Quest 2 itself is a pain. Every time I take the Quest 2 off (which is frequent, since the order of startup seems to matter for some reason, so I end up having to see my laptop screen again unexpectedly...) it gets confused and wants to redraw a boundary when put back on ... which is especially weird since it seems to remember boundaries for days when I only use its internal apps.

Oh, and no Linux support, so before I even start setting up I have to make sure everything on my laptop is saved and then reboot.

Using the internal apps isn't always painless either. Some apps make some upgrades mandatory before they'll start again, like I'm going to use a 0-day to hack my exercise high score otherwise or something, and it sucks when I have 15 minutes to play but an update takes 8 of them. But delays there are the exception, not the rule.

How would it not be? My wife's Quest 2 fits over my glasses with room to spare. I think she might have gotten an aftermarket face pad for more comfort, though; maybe that also gave me more clearance?

I'm not sure what's worth playing on it, though. There are a few great exercise games, and a handful of great 360 videos, but the good VR game games seem to be on PC and the process of linking Steam VR to a Quest 2 is a PITA; when I only find 20 or 30 minutes to play at a stretch, I don't want to spend 5 of those minutes getting everything set up.

Doesn’t that just mean “traits” if we combine “traits that are innate plus traits that aren’t”?

I think the implied meaning in context wasn't "heritable refers to every member of A and B" but rather "heritable can refer to members of both A and B".

The typical breakdown is "genetic" vs "shared environment" vs "non-shared environment", isn't it? The shared-environment part would be considered heritable in the colloquial sense but not the biological sense, the genetic part would be heritable in both, the non-shared environment part in neither.
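In the twin-study framing, that breakdown is just a variance decomposition (the variance components below are toy numbers, not estimates from any study):

```python
def variance_shares(v_genetic, v_shared_env, v_nonshared_env):
    # Decompose trait variance into the three standard components.
    total = v_genetic + v_shared_env + v_nonshared_env
    return {
        "heritable_biological_sense": v_genetic / total,                  # genetic only
        "heritable_colloquial_sense": (v_genetic + v_shared_env) / total, # genes + upbringing
        "neither": v_nonshared_env / total,                               # non-shared environment
    }

variance_shares(5, 2, 3)
# {'heritable_biological_sense': 0.5, 'heritable_colloquial_sense': 0.7, 'neither': 0.3}
```

The two senses of "heritable" differ by exactly the shared-environment share, which is the distinction the comment above is drawing.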