faceh

4 followers   follows 1 user   joined 2022 September 05 04:13:17 UTC

User ID: 435


Indeed, I read the exact arguments on LessWrong and elsewhere that humans would dive headlong into AGI because the military incentives to build one, and to build it before the other guys, were irresistible.

Countries throwing billions of dollars at reckless research because they don't want to be conquered is EXACTLY what doomerists warn of.

If I, as a male, want to be a bit cheeky, I can actually agree that a random bear is less dangerous to an American woman than a random male.

Statistically speaking, if the male is chosen COMPLETELY at random from all living males, then odds are you're getting a middle-aged guy from Asia (esp. China), India, or Africa. I really have no direct frame of reference for what I expect such males to do in this situation, but the stereotypes are concerning.

A cursory Google search shows there are about 200,000 brown bears in the world and around 800,000 black bears, plus presumably negligible numbers of pandas, koalas, and polar bears, along with more exotic types.

So odds are (roughly 4 to 1) that the randomly chosen bear is the relatively less dangerous black bear rather than the "will attack you instantly" brown bear.
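To make that arithmetic explicit, here's a quick back-of-the-envelope check in Python, using the same cursory-Google population figures above and assuming every other bear species is negligible:

```python
# Rough odds for a randomly selected bear, given ~200,000 brown bears
# and ~800,000 black bears worldwide (other species treated as negligible).
brown = 200_000
black = 800_000
total = brown + black

print(f"P(black bear) ~ {black / total:.0%}")  # ~80%
print(f"P(brown bear) ~ {brown / total:.0%}")  # ~20%
```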

So playing the odds, I might say yeah, a given woman is better off with a randomly selected bear in most cases, vs. a randomly selected male human.

But if we restrict the question to American males, and we specify that the bear WILL be one of the more dangerous varieties, I think the answer is clear.

Yes, there's certainly an argument that, on sheer physical prowess, a well-fed and generally fit European female has less to fear from many of the individual males of certain populations than from most grown bears. If she can run faster and farther, that's all she really needs.

I can understand why that argument ("those men can't rape you, they're too small/weak") wouldn't be comforting in this context, though.

A lot of markets become mainly bets on how the creator will decide to resolve it rather than on what the question is purportedly about.

Yes. I've seen problems arise even with fairly 'objective' markets, because even if you can measure a given phenomenon with precision, people might still mistrust the sensor doing the measuring. The market asks "What will be the high temperature in Miami on [date]?" and we have to consider: whose thermometer? Is it calibrated correctly? Are there any conditions that might throw it into an unexpected/error state?

So now the question is somewhat less about climate conditions and more about the quirks of the measurement system.

In theory you could solve this by attaching a reputation market to the system, so that a given resolution source can have its 'trustworthiness' rating impacted if enough people suspect it's fudging numbers or intentionally writing ambiguous questions/resolution criteria.
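For illustration only, a minimal sketch of what such a reputation score might look like; the class name, starting score, and update rules here are made-up assumptions, not any real platform's mechanism:

```python
from dataclasses import dataclass

@dataclass
class ResolutionSource:
    """A hypothetical oracle/data source whose trust score moves with disputes."""
    name: str
    trust: float = 0.5  # start neutral; bounded to [0, 1]

    def record_resolution(self, disputed: bool, dispute_upheld: bool) -> None:
        # Penalize the source when a dispute against its ruling is upheld;
        # otherwise nudge its trustworthiness up slightly.
        if disputed and dispute_upheld:
            self.trust = max(0.0, self.trust - 0.10)
        else:
            self.trust = min(1.0, self.trust + 0.02)

station = ResolutionSource("Miami airport weather feed")
station.record_resolution(disputed=True, dispute_upheld=True)
print(station.trust)  # ~0.4 -- markets could discount or re-weight this source accordingly
```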

But that's just yet another system that is susceptible to gaming.

Augur had a seemingly solid system for avoiding this, but probably couldn't handle the volume, being dependent on Ethereum.

I am literally a practicing attorney and I have had my mind blown at some of the rules-lawyering/munchkin behavior that has come out of the space.

Ironically, this perhaps goes to show why sports betting is so popular: sports rules are uniformly understood and well-defined, and the bets are set on easily determinable outcomes like "who won" and "what was the score," outcomes which are rarely walked back after the fact.


I speculate that we'll see some kind of AI-based solution arise and different markets will become popular with different segments of the population based on the quirks of how, say, Kalshi's AI resolves questions vs. Polymarket's vs. Manifold's.

In this case prediction markets might not actually 'solve' the issue of people having different reality bubbles, but at least there'll be some competition.

The final issue is that if it is common and good then it will alter the very things it is trying to predict. Does predicting it make it true when we trust predictions at a 99.9% confidence ratio? Is there then a rebound effect where they become worthless and you need a meta meta meta meta meta prediction market to determine the accuracy of the prediction market you're trusting to verify the accuracy of the prediction market that you're using to make the initial prediction?

Nah, I think the issue that precedes and largely supersedes this is the oracle question. Do people trust that whatever entity is reporting the final results is doing so accurately and isn't fudging numbers to give an edge to its allies or to cover up some other outcome that TPTB are trying to disguise?

Do we trust that ambiguous results will be resolved in good faith and correctly more often than not?

Who do we actually rely on to be the final arbiter of 'truth' such that these markets can continue to settle reliably, when there's every incentive to capture such institutions and divert them from the purpose of accurate reporting?

In other words, I personally doubt we'll ever reach 99.9% confidence in prediction markets, if only because we can't reach that confidence in the platforms hosting the markets or in the entities producing the results that are deemed 'truth,' and I don't believe these are easily tractable issues.

No, but it was a good example of people not being aware of how human perception works, and thus jumping right to "these people have to be lying to me" rather than "there's something weird about that dress."

I still remember the blue/gold dress discourse.

Plenty of women go abroad alone to dangerous countries like India. Sure there are some examples of women getting raped/killed there, but plenty more aren't.

If the question specifies that they're in the woods, this presents a situation where the male in question can reasonably expect not to be observed by a third party.

THAT much, I will grant, is reason for concern for the woman.

I would not say 20% of men across the world would choose to assault/rape/attack a lone female. And even actual criminals don't commit crimes all the time.

I'm not quite willing to say 20% of men would not, purely on the evolutionary argument that assault and rape were a common element of our ancestral environment.

Really, my concern is that I don't know to what extent all men, everywhere on the planet, are actually socially trained against any sort of violence against women... and have enough to lose that they care about that social training. I could see it being higher than 20% who would in theory be dangerous to an unaccompanied female. But the error bars on that estimate are large.

But I can say for damn sure that only a tiny handful of bears has been trained not to be violent towards humans in general, though some are more naturally inclined towards violence than others.

One possible solution is that you have people pay to have questions answered, and as part of that payment, they pay people to act as oracles who have good reputations.

Yeah, this was part of how Augur's system worked. Reward people who consistently end up on the 'right' side of a final resolution question, AND require anyone answering the question to stake some portion of their reputation on the outcome they're judging. Eventually 'bad actors' (who are either malicious or too stupid to reliably interpret contracts) lose out, and the correct/consistent oracles accumulate more wealth so they can have more influence over future resolutions.

It helped settle things into an equilibrium where it was usually not worthwhile to try to exploit an apparent ambiguity, since you knew that wealthier oracles would ignore said ambiguity and you'd lose money directly by trying to challenge them.
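In toy form, the dynamic looks something like the sketch below. This is a simplification from memory, not Augur's actual protocol; the payout rule (winners split the losers' forfeited stake pro rata) is an illustrative assumption:

```python
def resolve(reports: dict[str, tuple[str, float]]) -> tuple[str, dict[str, float]]:
    """reports maps reporter -> (reported outcome, reputation staked).
    The outcome with the most stake behind it wins; losing reporters forfeit
    their stake, which is split pro rata among the winners."""
    totals: dict[str, float] = {}
    for outcome, stake in reports.values():
        totals[outcome] = totals.get(outcome, 0.0) + stake

    winner = max(totals, key=totals.get)
    losing_pool = sum(v for k, v in totals.items() if k != winner)

    payouts = {}
    for reporter, (outcome, stake) in reports.items():
        if outcome == winner:
            payouts[reporter] = stake + losing_pool * (stake / totals[winner])
        else:
            payouts[reporter] = 0.0  # bad or careless reporters bleed reputation
    return winner, payouts

winner, payouts = resolve({
    "consistent_whale": ("YES", 100.0),
    "consistent_small": ("YES", 20.0),
    "ambiguity_gamer":  ("NO", 30.0),
})
print(winner, payouts)
# YES {'consistent_whale': 125.0, 'consistent_small': 25.0, 'ambiguity_gamer': 0.0}
```

Challenging the wealthier oracles only pays off if you can actually flip the majority stake, which is exactly why exploiting an apparent ambiguity usually wasn't worth it.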

I've been blown away by how bad otherwise intelligent people are at writing and interpreting resolution criteria.

Yep. There are plenty of bright-line rules for resolving ambiguity in legal contracts, and it can be permissible to pull in outside evidence to interpret them, but you have to think about the ENTIRE document in a systematic way; you can't just glance it over and interpret it based on vibes.

And glancing at things and going with your gut is how so, so many humans operate.

The problem is there's always a tradeoff when you try to get as precise as possible with your wording: it both makes it harder for laypeople to easily understand what the terms say (and makes them less likely to read it all) and, paradoxically, can open up a greater attack surface, because there are more places where ambiguities can arise.

This is where I imagine LLMs would have a role: if they are given a set of 'rules' by which all contracts are to be interpreted, if they can explain the contracts they read to laypeople, and if everyone agrees that the AI's interpretation is final, then you at least make it more challenging to play games with the wording.
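Very roughly, something like the sketch below, where `ask_llm` is a stand-in for whatever model API a given platform would actually use, and the interpretation rules are placeholders of my own invention:

```python
# Hypothetical: one fixed rulebook, one model, and its ruling is treated as final.
INTERPRETATION_RULES = """
1. The plain meaning of the resolution criteria controls.
2. Ambiguities resolve against the market creator.
3. Cite the exact clause relied upon.
"""

def interpret_contract(contract_text: str, question: str, ask_llm) -> str:
    """Ask the model to explain, in plain language, how the contract resolves
    a layperson's question, under the fixed interpretation rules above."""
    prompt = (
        "You are the final arbiter for a prediction market contract.\n"
        f"Interpretation rules:\n{INTERPRETATION_RULES}\n"
        f"Contract:\n{contract_text}\n"
        f"Question: {question}\n"
        "Answer in plain language and quote the controlling clause."
    )
    return ask_llm(prompt)
```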

in practice you don't need 99.9, you need better than alternatives in at least some cases.

Agreed. And thus I strongly support prediction markets as a concept for making personal decisions, hedging risks, and predicting important events.

Just noticing that centralized prediction markets are yet another sort of institution that can be captured and/or sabotaged if they become important to guiding/controlling society.

Would really hope we have robust competition between them to ensure no player ever becomes fully dominant in the space.

The genes that foster safety in groups and willingness to cooperate will outpace the genes that might make a man rape/assault someone.

Right, but in this situation, as stated in the question, there are no groups to cooperate with or to intervene; the male's behavior is based solely on whatever he himself chooses to do in the absence of any observers, and thus there are no immediate social consequences.

I am going to argue that in the ancestral environment, if a random male happened across a random female, the two complete strangers to each other, in the middle of the woods with nobody else around, rape WAS probably a common outcome. And this would eventually lead to general norms that women shouldn't travel anywhere alone.

I have seen decent evidence that many males of certain cultures are willing to engage in violence against females even in full view of other people. I can't say what that percentage is with precision, but I'd have to assume a higher percentage would willingly engage in violence if there were no observers.

I think I will stipulate that the number has to be <50%, but 3% is probably the absolute lower bound.