official_techsupport

who/whom

2 followers   follows 2 users
joined 2022 September 04 19:44:20 UTC
Verified Email

No bio...

User ID: 122
How do I know that "bloxor-1 is greeblic" is elementary, if I am totally uncertain about this proposition, and I don't even understand the terms?

Skill issue.

What do you mean "correctly"?

That I, doing Bayesian math about some bets against you, will leave you poor and destitute in the long run, unless you're using Bayes too. What do you want to use instead of Bayes, for the record?

the Allais paradox

My point is not that the poors are always instinctively right. My point is that they have well-honed instincts for when someone is trying to take advantage of them, and the usual Bayesian reasoning like the above rightfully triggers those instincts, even if they don't have the concepts or the introspection to tell us what it was that triggered them.

My point is that a Bayesian megamind is entirely justified in asking the yudkowsky what fraction of his prediction came from the data, and basing his bet amount on that, and grumbling about the yudkowsky being useless if he refuses to answer.

Nobody actually has arguments against assigning a symmetric prior to a coin bias

How many of the arguments in probability theory have you read to come to this judgement? Because I can think of large parts of the literature dedicated to exactly this point.

Huh?

That was one of the objections listed in the post, Scott's response was that you should only be neutral about elementary propositions, not about compound ones ("bloxors are greeblic AND bloxors are grue").

I personally think that this entire class of objections can be dismissed by pointing out that Bayesian math works correctly and without contradictions, and that when you look at actual priors there's not much disagreement about how to choose them in practice either. Nobody actually has arguments against assigning a symmetric prior to a coin bias, or can even muster much enthusiasm to argue that you should use a Gaussian instead of a uniform prior.
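To make "not much disagreement in practice" concrete, here's a minimal sketch (my own illustration, not from the post; the observation counts are made up): after even a modest number of flips, a uniform prior and a bell-shaped prior on the coin's bias give almost the same posterior mean.

```python
import math

def posterior_mean(prior, heads, tails, n=1001):
    """Grid-approximate the posterior mean of the bias p given a prior density."""
    grid = [i / (n - 1) for i in range(n)]
    # unnormalized posterior: prior(p) * p^heads * (1-p)^tails
    w = [prior(p) * p**heads * (1 - p)**tails for p in grid]
    z = sum(w)
    return sum(p * wi for p, wi in zip(grid, w)) / z

uniform = lambda p: 1.0                                        # flat prior
bell = lambda p: math.exp(-((p - 0.5) ** 2) / (2 * 0.2 ** 2))  # gaussian-ish, centered on fair

# after observing 40 heads and 20 tails:
m_uniform = posterior_mean(uniform, 40, 20)
m_bell = posterior_mean(bell, 40, 20)
print(m_uniform, m_bell)  # the two means differ by only about 0.01
```

The data swamps the prior quickly, which is why arguing uniform-vs-Gaussian generates so little heat.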

People get hot and bothered when they feel that someone is trying to hide how much of their estimate comes from information they have actually updated on and how much comes from the prior.

They are making correct arguments that fail only in a very small subset of problems: those where information acquisition is itself affected by the decisions.

I disagree that this is a very small subset of problems; the majority of real-life problems let you decide to wait and collect more information, or decide how many resources you're willing to bet. See the examples in https://en.wikipedia.org/wiki/Multi-armed_bandit

For example, I think I first noticed this problem many years ago in one of Scott's linkdumps, where he disapprovingly linked to Obama saying that the CIA told him such and such thing had a 70% probability, but really they had no good information so it was a coinflip. And Scott was indignant: 70% is 70%, what more do you need to know before you authorize some military operation, even the President doesn't understand probability smdh. In my opinion Obama was right, if technically imprecise, while Scott was wrong, which demonstrates the danger of having a little knowledge and also the need for more awareness.

is not easy to communicate succinctly.

You say this as if it's not the Bayesians' fault that they have not developed (or gotten into the habit of using) a succinct way of conveying how much of the estimate comes from the prior and how much from the previous updates. I would understand if it were an acknowledged hard problem in need of the community's attention, but for example Yudkowsky's Sequences don't mention it at all.

A limitation of usual Bayesian reasoning.

Scott is doing his annual subscription drive and I was reminded of a (still) private post of his I disagree with: https://www.astralcodexten.com/p/but-seriously-are-bloxors-greeblic

In my post on uncertainty around AI, I wrote:

If you have total uncertainty about a statement (“are bloxors greeblic?”), you should assign it a probability of 50%. If you have any other estimate, you can’t claim you’re just working off how radically uncertain it is. You need to present a specific case.

Commenters were skeptical! I agree this important topic needs more discussion:

And then he proceeded to list some of the objections and his objections to the objections. The objection I'm personally most partial to was not listed, so I assume it's a sort of novel idea, at least in that (and this) community.

Suppose that in your travels you encounter a shady guy who offers you an opportunity to bet on the outcome of a coin flip. Nearby stands a yudkowsky, who tells you that according to his observations the coin is biased and the next flip is about 66% likely to land on heads. You know that yudkowskis are honest and good Bayesians, so you trust his assessment.

The shady guy flips the coin and it lands on tails. Now consider two possible worlds: in one the yudkowsky says that his new estimate is 50% heads, in another he says that he has updated to 65% heads. That's two very different worlds! It turns out that the yudkowsky has an important parameter: how many coinflips he has observed so far, and therefore how much of his estimation comes from the observations and how much from the prior, and for some reason he doesn't tell you its value!

Scott's assertion is correct in a narrow technical sense: in a world where the shady stranger forces you to make a bet at gunpoint, you are forced to use the yudkowsky's estimation and the yudkowsky is forced to use a symmetric prior that gives him a 50% probability of heads when he has not seen any flips at all yet.

However in the real world there's almost always an option to wait and collect more data, and whether you want to exercise it critically depends on the difference between "it's a 50/50 chance based on observing 100 coinflips" and "it's a 50/50 chance based solely on the prior I pulled out of my ass".
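The two worlds above can be sketched with standard Beta-Bernoulli updating (the specific observation counts are mine, chosen to reproduce the ~66% figure):

```python
def heads_prob(heads, tails):
    # Posterior mean of P(heads) under a uniform Beta(1, 1) prior.
    return (heads + 1) / (heads + tails + 2)

weak, strong = (1, 0), (197, 99)   # both yudkowskys report ~66% heads before the flip
print(heads_prob(*weak), heads_prob(*strong))   # ~0.667 and ~0.664

# the shady guy's coin lands tails; add one tails to each record:
print(heads_prob(1, 1))      # 0.5    -- the data-poor yudkowsky crashes to 50/50
print(heads_prob(197, 100))  # ~0.662 -- the data-rich one barely moves
```

Identical reported probabilities, wildly different behavior under new evidence: the hidden parameter is the number of observations behind the estimate.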

So what's going on, I think, is that people intuitively understand that there's this important difference, and suspect that when Scott says they should normally start with a 50/50 prior, he's trying to swindle them into accepting Bayesians' estimates without asking how sure they are about them. And rightfully so, because that's a valid and important question to ask, and honestly Bayesians ought to get into the habit of volunteering this information unprompted, instead of making incorrect technical arguments insinuating that the estimated probability alone should be enough for everyone.

Looks like everyone here is no longer willing to give you any constructive feedback. Consider presenting your case on https://rdrama.net, some people might mock you, but at least you'll have engagement.

Use https://rdrama.net/signup?ref=2481 for signing up btw, I'll get a badge for referring you!

Or maybe it was just about stirring up enough heat that the Israel-Saudi normalisation doesn't happen. I dunno.

I don't see how this was supposed to work. A small terrorist act that causes Israel to respond disproportionately, all right. But 400 paratroopers killing Israeli civilians? Again, this is the thing you do when you have 5000 tanks ready to roll towards Tel Aviv and you want to show your potential Saudi allies that you mean business. They don't have a single tank. The Saudis will be like, fuck those idiots.

For starters, I think the way to go is to start a regional, anonymous group chat. As people become friendly, and reveal more of their true selves, then perhaps it can move to in-person meetups.

That sounds extremely glowy.

The brutality and cynical tactics that Hamas uses do lead to them having lower support than they would if they were less sociopathic though.

That's the weird thing though: their cynical tactics used to be launching rockets from hospital rooftops and parading the inevitable Palestinian corpses, or having Palestinian kids shot for throwing stones, etc etc. The grift has always been provoking Israel to violence and posing as an underdog.

But this, parading enemy civilian corpses around, is a diametrically opposite thing. It's something you do when you have several thousand tanks ready to roll over the enemy capital, you expect to win, and you want to demoralize the enemy to make the win easier.

So I don't know, either Hamas expects Iran to nuke Israel, or the old guard that understood the nature of the grift all retired or something and the new leadership got terminally high on their own supply.

I might have recommended (or would have recommended) "The Rise and Fall of the Dark Lord Sassaflash". It has the Mule protagonist character who talks like Snakes. Also it turns out that MLP canonically has a pony with SS lightnings as her ponymark or whatever it's called. Also it's pretty good.

I'm reading the webnovel A Practical Guide to Evil on @official_techsupport's recommendation.

For the record while I can recommend several webnovels, I've never read this one and also have a very low tolerance for badly written stuff.

If your political tribe requires you to deny simple laws of physics, find a better one.

Okay, look, imagine that you wake up in an alternate reality where there's a flourishing scientific field studying the beneficial effects of smoking tobacco (this was real for a while: a guy who invented like a third of modern statistics picked a fight, after retiring, with all the people saying that smoking causes lung cancer, pointing out that they used bad statistics, that correlation doesn't equal causation, what if people with lung cancer pick up smoking to soothe their lungs; also nicotine might help with schizophrenia, nicotine can be a safer and better stimulant than caffeine, etc etc).

Then you discover that the 99.7% consensus of the pro-smoking scientists corresponds to 98% of their research being funded by tobacco companies. Stop for a second: why does that raise your hackles regardless of the subject matter, whether they study smoking or AGW?

When a scientist who studies the beneficial effects of smoking on a grant from a tobacco company publishes a paper saying that tobacco causes cancer, that we should all stop promoting it, and that the entire field should be shut down, a few things happen:

  1. His paper is not mentioned in his benefactor's speech to the company telling them how they should funnel more money into the study of the beneficial effects of smoking.
  2. He never receives any grants from the company ever again.
  3. He quits the field.
  4. His peers in the field universally condemn his research as flawed because they don't want to lose funding or quit the field.
  5. His peers in the field believe themselves to be right and their job to be producing research convincing to the public, not research discovering truth.
  6. If the field is politicized, his peers also believe that all opponents also want to eat fetuses or make rape legal.

This same effect of course applies to the field of climate research, as the scientists working in it apply for grants from the US Department of Energy and other state-level entities that are naturally interested in evidence for global warming being a clear and present threat.

So unfortunately, with the way the funding is set up, the entire field produces no knowledge (justified true belief). It might be true that AGW is dangerous to humanity, but we know that the entire climate research community would claim it is true regardless of whether it is true, so them claiming that it's true gives us 0 information. Simple as.

For a bit more of an expanded argument read the AGW section of https://www.unqualified-reservations.org/2009/01/gentle-introduction-to-unqualified_22/

He rambles on for paragraph after paragraph, smugly self-assured, and at the end of it I come away with literally no idea what he's trying to say.

As in, you read the whole linked article and have no idea, or gave up after the first ten or so paragraphs? Because while it's undeniably excessively verbose, contains frequent tangents, and is actually less about Climategate and more about how Climategate is yet another example of how power corrupts, it presents clear points with solid justifications.

If you're interested in something much more concise and aimed at someone who is not already on the same wavelength, you might want to read the AGW section of this: https://www.unqualified-reservations.org/2009/01/gentle-introduction-to-unqualified_22/ . It is not, strictly speaking, about Climategate, because it predates it by a few months I think, but it predicts it presciently.

By the way, "Mother of Learning" author wrote 3 "alternate-universe" chapters for it and started a whole new series, the first chapters of which hooked me hard: https://www.royalroad.com/fiction/71045/zenith-of-sorcery

@TheDag

Everyone in the USA still believes that the USA is the first etc etc. The only thing that matters is the domestic response to foreign posturing. "I'm against America First" is a viable posture in America because nobody in America really believes that America could be anything but first.

I started developing weird pains in my right wrist, then read somewhere to pay attention to how you sleep, in particular that you don't have your palm angled at 90 degrees forward from the wrist. Then I discovered that for some reason I did this all the time, made a conscious effort to stop doing that, and had no issues since.

Static isn't really a concern where I live, it's far too humid.

It's not static electricity, it's a bunch of energy stored in capacitors. It's real as you can see by unplugging the computer, then trying to switch it on -- the fans briefly start up, at least for me. But after that it should be mostly safe, if in doubt poke with a grounded screwdriver or something.

Most disagreements of note—most disagreements people care about—don't behave like the concert date or physics problem examples: people are very attached to "their own" answers.

There could be other reasons for that than hidden motives. Consider for example that one of the largest debates here recently was about a completely hypothetical situation involving red/blue pills. Or imagine a technical discussion about some software engineering problem; those can get quite heated too.

So, first of all, sufficiently complex problems tend to be like icebergs, with only a small part being easily communicable, and a lot of underwater assumptions, connections, and intuitions that are personal to you.

For example, if the concerts at that place are always on Thursdays which I know because I'm a regular there, and you have never been there before, I'm sure as hell double checking your claim. Or if your answer to the physics problem is not just different from mine but doesn't make any sense given all other stuff I learned about the problem while working on it, I'm likely to start by asking pointed questions about those discrepancies instead of humbly assuming that one of us just made an arithmetic error somewhere and that could as well be me. And of course in case of software engineering, "your approach is going to suck, I feel it in my bones as a result of decades of experience that I can't just spend years relaying to you here"...

Second, that last example doesn't fit into your model even if it does have an underlying conflict of interest. I can 100% honestly believe that my approach is superior for complicated reasons I can't articulate convincingly enough, and I don't want to waste my time implementing your inferior solution, while you honestly believe and feel the exact opposite. So that seems to be a conflict of interest, but we both can easily be 100% open about it because it's actually driven by a factual disagreement.

That's not to disagree with your main thesis, that there are a lot of "bad faith" arguments, so many that it becomes a counterproductive label. But you're both too optimistic and too pessimistic about that, because there are also a lot of hard-to-reconcile factual disagreements.

We manage to cooperate surprisingly well given that one third of the players are secretly demonic entities!

I think I can. I'd also ask like half of the people I play Blood on the Clocktower with regularly and they will probably agree and we will make it.

I think that if/when there's a call for it, what matters won't be the average ability of some particular ethnicity/faith/whatever, but self-selected (though also gatekept) people who are multiple deviations above the average in whatever traits you think are good for manning a generation ship. A random 1000 Mormons have nothing on the top 1000 <insert the ethnic group you most despise> most excited about colonizing Alpha Centauri.

I totally disagree with the conclusion. First of all, we are literally living in the time where one man's vision is about to revolutionize space travel by making a rocket that can lift 100 tons of payload to LEO. Yeah it's interplanetary for now, but why not interstellar next, maybe by the next man with an itch for it?

And second, why do you need to persuade the whole society to migrate? Most of the old world people didn't migrate to America and it was their loss. The few people who did migrate multiplied and prospered. "Indirect evidence of extrasolar planets will never be enough" -- for whom? So we will have bootlicking statists like the author waiting for the government to give them credible evidence and orders to go, while adventurous types will be populating the galaxy.

Oh, oh, you had drama with Hlynka, and one that sounds like it involved some very stupid things said?! Spill the tea pls!

I don't disagree, that's why I said, specifically:

I'm saying what I would do if I were the Czar of the US prison system. I'd set some inviolable rules but then let Soros and friends do their best within the rules instead of trying to micromanage everything.

I'm not sure that the Open Society Foundation and the DAs it champions would prefer a world where unrehabilitated rapists are let free with a slap on the wrist and continue raping. Maybe they do but understandably never say it aloud; maybe they do but never even admit it to themselves. Maybe they don't believe that about rapists at all, but do believe that shoplifters are just collecting involuntary reparations. Anyway, they end up promoting lawlessness, in effect valuing the well-being of criminals above that of law-abiding citizens, while I strongly value them in the opposite direction, so I and other like-minded people should realize that this is an irreconcilable value difference that allows no compromise, and we should fight to win.

What I was saying, however, is that a well-designed system doesn't need to run on impeccably loyal people selected, totalitarian-style, to have the same worldview (and in fact any system that has that as a requirement will fall to sociopaths). In the case of Soros and friends, we only need to ensure that they have no say on when to release repeat offenders; then their interests are aligned with ours: without the option of prematurely releasing unreformed criminals, they surely prefer reforming criminals (so that they don't get imprisoned again for twice as long) to not reforming them, and can be relied upon to do as good a job at it as they can.

Any form of rehabilitation that'd work would be too coercive for the DAs to endorse, though.

I'm sure you've heard of 'three strikes laws'? The anti-prison activists didn't 'try to rehabilitate their charges', they just fought three-strikes laws.

https://www.themotte.org/post/640/culture-war-roundup-for-the-week/132062?context=8#context

That's the difference between socialist and libertarian approaches, I guess. A libertarian seeks to reduce the scope of a decision's consequences down to the person who makes it. A socialist seeks to increase the scope until everyone is affected, including the people in power, so they are forced to make decisions that are good for everyone else too. Or, like, everyone is forced to talk about it and make decisions that are best for everyone, because everyone's in the scope.

The people who own making decisions about public schools: teachers, politicians, parents, administrators, do not feel the pain of being in school. The children do. In that sense, no one has skin in the game.

Yeah, that's one of my points (excepting parents; parents feel the pain of their children to some extent). You can tell that this sort of forcing people to have skin in the game can't work here, because the skin is on the wrong people, the children and their parents, while everyone else would actively stop any attempts to improve things, and they wield the power.