NunoSempere

0 followers · follows 2 users · joined 2022 September 10 10:19:29 UTC
User ID: 1101
No bio...

Augur had a seemingly solid system

This is not what I recall. Invalid markets resolved to 50/50, so you had users, chiefly someone who went by the moniker of Poyo, create markets that appeared to be legit but, e.g., had the wrong date, so that people would bet & he'd win money when they resolved 50/50.
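To make the exploit concrete, here is a toy calculation; the numbers are made up and this is not a claim about Augur's actual payout mechanics, just the arithmetic of why a plausible-looking but invalid market was profitable for its creator:

```python
# Made-up numbers: why forcing an "invalid -> 50/50" resolution pays off for the
# market creator, who knows the market is invalid while traders price it as real.

def exploit_profit(apparent_prob_yes: float, shares: float) -> float:
    """Profit from buying the NO side of a market that looks ~certain to
    resolve YES, but is secretly invalid and will pay both sides 0.5."""
    price_no = 1.0 - apparent_prob_yes   # what the creator pays per NO share
    payout_no = 0.5                      # invalid markets resolve 50/50
    return shares * (payout_no - price_no)

# Traders price YES at 0.9, so NO shares cost ~0.1 but pay 0.5 on "invalid".
print(round(exploit_profit(apparent_prob_yes=0.9, shares=1000), 2))  # 400.0
```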

The last one is: I agree that sometimes predictions influence what happens. A few cases people have studied are alarmist Ebola predictions making Ebola spread less because people invested more early on, and optimistic predictions about Hillary Clinton leading to lower turnout.

You can solve these problems in various ways. For the Ebola one, instead of giving one probability, you could give a probability for every "level of effort" to prevent it early on. For the Hillary Clinton one, you could find the fixed point, the probability which already takes into account that publishing it lowers turnout a little bit (https://en.wikipedia.org/wiki/Fixed_point_(mathematics)).
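As a sketch of what that fixed point could look like numerically: the turnout model below (base probability, complacency penalty) is entirely made up for illustration, but the iteration is the standard way to find a probability that is consistent with its own effect.

```python
# Minimal sketch of the fixed-point idea: the published win probability lowers
# turnout, which in turn changes the win probability. Model is purely illustrative.

def win_probability(published_prob: float) -> float:
    """Hypothetical model: a more confident forecast breeds complacency,
    lowering turnout and hence the actual win probability."""
    base_prob = 0.85           # probability if the forecast had no effect at all
    complacency_penalty = 0.2  # how much a confident forecast hurts turnout
    return base_prob - complacency_penalty * published_prob

def find_fixed_point(f, x0=0.5, iterations=100):
    """Iterate x_{n+1} = f(x_n); for mild feedback like this it converges
    to the self-consistent probability p with p = f(p)."""
    x = x0
    for _ in range(iterations):
        x = f(x)
    return x

p = find_fixed_point(win_probability)
print(round(p, 3))  # 0.708: the probability that already accounts for its own effect
```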

A. There is a heap of inertia.
B. Enthusiastic people with a grand plan are working in fields which already have inertia.
C. Therefore, enthusiastic people who have a grand plan will be bogged down in that previously existing inertia.

I mean, sure. But then the answer would seem to be not to work inside fields which already have huge amounts of negative inertia: to try to explore new fields, or to in fact try to create a greenfield site. To give a small example, the Motte does happen to be its own effort, and thus seems less bogged down. Or, many open source projects were started pretty much from scratch.

Any thoughts on why people don't avoid fields with huge amounts of inertia? Otherwise the inertia hypothesis doesn't sound that explanatory to me.

Breezewiki is good. And in general, OP might want to look into https://github.com/libredirect/browser_extension

I see what you mean. I figured out how to preserve the footnotes, and have copied the text over.

Do people trust that whatever entity is reporting the final results is doing so accurately

  1. Scoring rules exist (see the sketch after this list)
  2. Deceivers outcompete nondeceivers
  3. But yeah, you can't use a prediction marketplace to decide on something that's more valuable than the value of the whole prediction marketplace. That's one of the issues with Robin Hanson's futarchy.
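On point 1, here is what one standard proper scoring rule, the Brier score, looks like; this is just an illustration of the concept, not a claim about what any particular platform uses:

```python
# The Brier score (lower is better): squared error between the stated
# probability and what actually happened. It is "proper": reporting your
# honest belief minimizes your expected penalty.

def brier_score(forecast: float, outcome: int) -> float:
    """forecast: stated probability of the event; outcome: 1 if it happened, else 0."""
    return (forecast - outcome) ** 2

def expected_score(report: float, true_belief: float = 0.8) -> float:
    """Expected Brier score if the event really happens with probability true_belief."""
    return true_belief * brier_score(report, 1) + (1 - true_belief) * brier_score(report, 0)

# With a true belief of 0.8, reporting 0.8 beats both hedging to 0.5 and
# exaggerating to 0.99 in expectation.
for report in (0.5, 0.8, 0.99):
    print(report, round(expected_score(report), 4))
# 0.5  0.25
# 0.8  0.16
# 0.99 0.1961
```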

doubt we'll ever reach 99.9% confidence in prediction markets

I mean, in practice you don't need 99.9%, you need to be better than the alternatives in at least some cases.

German grammar

Actually, now that I think about it, German has the feature that in composite phrases (i.e., most phrases saying anything complicated), the verb is at the end. This makes sentences messier. It's possible that having strong categories could be a crutch to make such long sentences understandable.

Not sure to what extent that is a just-so story, though.

Bismarck Analysis has a pretty great analysis of Soros here: https://brief.bismarckanalysis.com/p/the-legacy-of-george-soros-open-society, which might be of interest.

You could also choose nuclear energy, better vaccines & pandemic prevention, better urban planning, etc. Or even in education, things like Khan Academy, Wikipedia, the Arch Wiki, edX, Stack Overflow, ... provide value and make humanity more formidable. Thinking about those examples, do you still get the sense of pessimism, almost defeatism, that came through in your previous comments?

Mmh, I see what you are saying. But on the other hand, there is such a thing as a Pareto frontier. Some points on that Pareto frontier, where you can't fulfill more needs without sacrificing previous gains, might be:

  • monomaniacal formidability. You are a titan of industry and you ignore your family because you just care that much about, idk, going to Mars.
  • a life of bucolic contemplation and satisfaction.
  • a flourishing family-values life, caring for your children and the members of your clan
  • a life of hedonism, enjoyment and vice
  • etc.
  • some mix of the above, e.g., having a good career AND a family AND having fun AND ...

Like, if I look at my actions, I don't get the impression that I'm on any kind of Pareto frontier, where, idk, listening more to my in-the-moment curiosity trades off against the success of my romantic relationships, which trades off against professional success. It seems like I could just be... better on all fronts? Contradictorily, there is a sense in which I am "doing the best I can at every given moment", but it feels incomplete, and doesn't always ring true. Sorry for the rambling here.

For your example, making your same comment in the morning seems like it could plausibly be a better choice.

I think your first priority should be in finding reliable ways to prioritise and focus on long-term goals.

Yeah, maybe. My discount rates have increased a bunch after the fall of FTX, since their foundation was using some of the tools I was working on for the last few years. So now I'm a bit more hesitant about doing longer term stuff that relies on other people, and also, sadly, longer term stuff in general.

Updating in the face of anthropic effects is possible

Now here: https://nunosempere.com/blog/2023/05/11/updating-under-anthropic-effects/. Pasting the content to save you a link:

Status: Simple point worth writing up clearly.

Motivating example

You are a dinosaur astronomer about to encounter a sequence of big and small meteorites. If you see a big meteorite, you and your whole kin die. So far you have seen n small meteorites. What is your best guess as to the probability that you will next see a big meteorite?

In this example, there is an anthropic effect going on. Your attempt to estimate the frequency of big meteorites is made difficult by the fact that when you see a big meteorite, you immediately die. Or, in other words, no matter what the frequency of big meteorites is, conditional on you still being alive, you'd expect to only have seen small meteorites so far. For instance, if you had reason to believe that around 90% of meteorites are big, you'd still expect to only have seen small meteorites so far.

This makes it difficult to update in the face of historical observations.
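A small simulation may make the point sharper (the setup and numbers are invented; p_big is the per-meteorite chance of a big one): whatever the true frequency, every observer who is still alive has seen exactly the same history.

```python
import random

# Toy simulation of the anthropic problem: vary the true frequency of big
# meteorites and count how many worlds still have a living observer.

def count_survivors(p_big: float, n_meteorites: int, n_worlds: int = 100_000) -> int:
    """Count worlds in which all n_meteorites were small, so the observer survived."""
    return sum(
        all(random.random() >= p_big for _ in range(n_meteorites))
        for _ in range(n_worlds)
    )

for p_big in (0.1, 0.5, 0.9):
    survivors = count_survivors(p_big, n_meteorites=5)
    # Higher p_big just means fewer surviving observers; the survivors' data
    # (5 small meteorites, 0 big ones) is identical across all three cases,
    # which is why the historical record alone doesn't pin down p_big.
    print(f"p_big={p_big}: {survivors} survivors out of 100,000 worlds")
```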

Updating after observing latent variables

Now you go to outer space, and you observe the mechanism that is causing these meteorites. You see that they are produced by Dinosaur Extinction Simulation Society Inc., that the manual mentions that it will next produce a big asteroid and hurl it at you, and that there is a big crowd gathered to see a meteorite hit your Earth. Then your probability of getting hit rises, regardless of the historical frequency of small meteorites and the lack of any big ones.

Or conversely, you observe that most meteorites come from some cloud of debris in space that is made of small asteroids, and through observation of other solar systems you conclude that large meteorites almost never happen. And for good measure you build a giant space laser to intercept anything that comes your way. Then your probability of getting hit with a large meteorite lowers, regardless of the anthropic effects.

The core point is that in the presence of anthropic effects, you can still reason and receive evidence about the latent variables and mechanistic factors which affect those anthropic effects.
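As a sketch of that core point, with made-up numbers: a plain Bayesian update over two hypotheses about meteorite danger, driven by a directly observed latent variable (the composition of the debris cloud) rather than by the survival-filtered historical record.

```python
# Illustrative numbers only: update on a latent variable ("the source cloud is
# mostly small rocks") that is observed directly, not screened off by survival.

# Prior over two hypotheses about the chance of a big meteorite per century.
prior = {"dangerous": 0.5, "safe": 0.5}

# How likely the telescope observation is under each hypothesis (made up).
likelihood_mostly_small = {"dangerous": 0.2, "safe": 0.9}

unnormalized = {h: prior[h] * likelihood_mostly_small[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: round(unnormalized[h] / total, 3) for h in unnormalized}

print(posterior)  # {'dangerous': 0.182, 'safe': 0.818}: an update that does not
                  # rely on the anthropically filtered record of past impacts
```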

What latent variables might look like in practice

Here are some examples of "latent variables" in the real world:

  • Institutional competence

  • The degree of epistemic competence and virtue which people who warn of existential risk display

  • The degree of plausibility of the various steps towards existential risk

  • The robustness of the preventative measures in place

  • etc.

In conclusion

In conclusion, you can still update in the face of anthropic effects by observing latent variables and mechanistic effects. As a result, it's not the case that you can't have forecasting questions or bets that are informative about existential risk, because you can make those questions and bets about the latent variables and early steps in the mechanistic chain. I think that this point is both in-hindsight-obvious, and also pretty key to thinking clearly about anthropic effects.

I'm curious which ones you (or other motte people) think would be most interesting for you in particular, rather than "useful in general".

On pruning science, or, the razor of Bayes: one of many thoughts of the «what if Lesswrong weren't a LARP» sort: the need to have a software framework, now probably LLM-powered, to excise known untruths and misstatements of fact, and, in a well-weighed manner, all contributions of their authors, from the graph of priors for next-iteration null hypotheses and other assumptions.

Also interesting

Also interested.

Yeah. To reply to the first part, my answer to that is to realize that knowledge is valuable insofar as it changes decisions, and to try to generate knowledge that changes decisions that are important. YMMV.

How would one go about using this?

hard-to-evaluate work at any large organization... learn to play the game

You can also be on the lookout for different games to play.

You seem to think it would be better if powerful EAs spent more time responding to comments on EA forum

I think this is too much of a simplification. I am making the argument that EA is structured such that leaders don't really aggregate the knowledge of their followers.

Can you give an example of any multi-billion dollar movement or organization that displays "blistering, white-hot competence"?

Some which could come to mind: the Catholic Church in Spain from 1910 to the early 2000s, Apple, Amazon, SpaceX, the Manhattan Project, the Israeli nuclear weapons project, Peter Thiel's general machinations, Linus Torvalds's stewardship of the Linux project, competent Hollywood directors, Marcus Aurelius, Bismarck's unification of Germany and his web of alliances, the Chicago school, MIT's J-PAL (endowment size uncertain though), the Jesuits, the World Central Kitchen.

provided concrete evidence that interventions are less effective than claimed

I discussed a previous one on the Motte here; here is a more recent one: CEA spends ~$1-2M/year to host the equivalent of a medium-sized subreddit, or a forum with probably less discussion than The Motte itself.

offered concrete alternatives to this target audience.

Here are some blue-sky alternatives; Auftragstaktik is one particular thing I'd want to see more of.

I've been doing ok redirecting YouTube automatically to Invidious in my custom browser. On Firefox I'm using LibRedirect: https://libredirect.codeberg.page/. For music I'm using yt-dlp. Not much of a plan, though.

It does sound interesting to me.

Thanks, I appreciate this list!

Rationalist reversals: the notion of «Infohazard» is the most salient example of infohazard known; anthropic shadow as an anti-Bayesian cognitive bias; and reasoning yourself into a cult.

Curious about this.

virtually nobody has ever done this before

A similar proposal I've heard of is recursive prediction markets. E.g., you hold a prediction market on the probability that another prediction market would assign when asked what a researcher spending a lot of time on a topic would conclude. I did some early work on this here: https://www.lesswrong.com/posts/cLtdcxu9E4noRSons/part-1-amplifying-generalist-research-via-forecasting-models and here: https://www.lesswrong.com/posts/FeE9nR7RPZrLtsYzD/part-2-amplifying-generalist-research-via-forecasting, and in general there is some work on this under the name "amplification".
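As a toy illustration of the amplification setup (all numbers invented, and this is only the skeleton of the idea, not the scheme from those posts): several cheap forecasts of what the expensive in-depth evaluation would conclude get aggregated and used as a stand-in for it.

```python
import statistics

# Cheap forecasters each answer: "what probability would the in-depth
# evaluation (or the deeper prediction market) end up assigning?"
quick_forecasts = [0.62, 0.55, 0.70, 0.58, 0.66]  # invented numbers

# The aggregate stands in for the expensive evaluation; only a random sample
# of questions ever gets the full evaluation, to keep forecasters honest
# while keeping costs low.
amplified_estimate = statistics.median(quick_forecasts)
print(amplified_estimate)  # 0.62
```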

in defense of Marx.

I was not expecting this.

Big fan of your writings.

Just leaving a quick note that I don't understand why you are hosting these on LW rather than on your own site & linking to them. It seems that you don't have that much control over what the LW people do, and, e.g., having your own RSS would be a good preventative measure.

Kudos.