Calling all Lurkers: Share your Dreams of Effortposting

It’s been pointed out recently that the topics discussed in the Culture War thread have gotten a bit repetitive. While I do think the Motte has a good spread of intellectual discussion, I’m always pushing for a wider range (dare I say diversity?) of viewpoints and topics in the CW thread.

I was a lurker for years, and I know that the barrier between having a thought and writing a top level comment in the CW thread can loom large indeed. Luckily I’m fresh out of inspiration, and would love to hear thoughts from folks about effortposts they want to write but haven’t gotten around to.

This of course applies to regulars who post frequently as well - share any and all topics you wish were discussed in the CW thread!

Here are some drafts I have, though not particularly CW.

  • Acetylcysteine as a first-line treatment for colds/mucus in Spain but not in Britain

  • Ze Dreadful German In Ze Writings of Curtis Yarvin. I would respect the guy if he were a gentleman and a scholar who actually knew German and used it to better capture the Zeitgeist and express his Weltanschauung, but instead we get a Blitzkrieg of stilted phrases which annoy me.

  • Comparison of The Driver (1978), Drive (2011), and Baby Driver (2017). Same plot, different decades.

  • Base rates of success dating docs.

  • Tetlock forecasting approach vs Subjective Bayesianism

  • My ideal prediction market playbook

  • Optimize hard or GTFO

  • A retelling of El Mio Cid, a Spanish epic poem in which a recurring theme is that the hero would be a good and loyal knight if only he had a good king as his lord.

  • A lot of shit on OpenPhilanthropy, FTX and EA.

  • Utilitarianism for Democrats

  • Utilitarianism for Republicans

  • Why are we not better, harder, faster, stronger

  • Updating in the face of anthropic effects is possible

  • Betting and consent

  • How to host an autarkic/uncensorable site.

  • Something on the limits of Bayesianism

  • I want to nerd out a bit on infrabayesianism / what one should do if one expects that one's hypothesis may not be able to represent future events.

  • Bounties, things I would pay for

  • My consulting rates

  • Criticism as a demand side problem

  • My preferred deviations from common English

  • Some observations on the speed of qualia

  • People's choices determine a pseudo ordering over people's desirability

This is more than I would have thought, typing this out.

I've been posting a stream of similar ideas on my blog (https://nunosempere.com/blog/), prioritizing the ones I think could be most valuable. But if this community is particularly interested in any of these, I'll probably be happy to re-prioritize.

Tons of great ideas. These seem the most useful, though the cultural stuff is probably more interesting:

  • Optimize hard or GTFO

  • A lot of shit on OpenPhilanthropy, FTX and EA.

  • How to host an autarkic/uncensorable site.

  • Why are we not better, harder, faster, stronger

  • Updating in the face of anthropic effects is possible

Updating in the face of anthropic effects is possible

Now here: https://nunosempere.com/blog/2023/05/11/updating-under-anthropic-effects/. Pasting the content below to save you the click:

Status: Simple point worth writing up clearly.

Motivating example

You are a dinosaur astronomer about to encounter a sequence of big and small meteorites. If you see a big meteorite, you and all your kin die. So far you have seen n small meteorites. What is your best guess as to the probability that you will next see a big meteorite?

In this example, there is an anthropic effect going on. Your attempt to estimate the frequency of big meteorites is made difficult by the fact that when you see a big meteorite, you immediately die. In other words, no matter what the frequency of big meteorites is, conditional on still being alive you'd expect to have seen only small meteorites so far. For instance, even if you had reason to believe that around 90% of meteorites are big, your record so far would still contain only small ones.

This makes it difficult to update in the face of historical observations.
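
As a minimal numerical sketch of this (toy numbers, assuming a flat prior over the frequency of big meteorites and independent meteorite sizes): a naive update that treats the n small meteorites as ordinary observations pulls the estimate towards zero, while the survival-conditioned likelihood is flat, so the posterior just equals the prior.

```python
import numpy as np

p_big = np.linspace(0.01, 0.99, 99)          # candidate frequencies of big meteorites
prior = np.full_like(p_big, 1 / len(p_big))  # flat prior over those frequencies
n_small = 20                                 # small meteorites survived so far

# Naive update: treats the n small meteorites as ordinary Bernoulli observations,
# ignoring that only survivors get to do the counting.
naive_post = prior * (1 - p_big) ** n_small
naive_post /= naive_post.sum()

# Survival-conditioned update: conditional on being alive, the history contains
# only small meteorites under *every* candidate frequency, so the likelihood is
# flat and the history is uninformative.
anthropic_post = prior * np.ones_like(p_big)
anthropic_post /= anthropic_post.sum()

print("naive      E[p_big] =", (p_big * naive_post).sum())      # pulled towards 0
print("anthropic  E[p_big] =", (p_big * anthropic_post).sum())  # equals the prior mean of 0.5
```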

Updating after observing latent variables

Now you go to outer space, and you observe the mechanism that is causing these meteorites. You see that they are produced by Dinosaur Extinction Simulation Society Inc., that the manual mentions that it will next produce a big asteroid and hurl it at you, and that there is a big crowd gathered to see a meteorite hit your Earth. Then your probability of getting hit rises, regardless of the historical frequency of small meteorites and the lack of any big ones.

Or conversely, you observe that most meteorites come from some cloud of debris in space that is made up of small asteroids, and through observation of other solar systems you conclude that large meteorites almost never happen. And for good measure you build a giant space laser to intercept anything that comes your way. Then your probability of getting hit by a large meteorite falls, regardless of the anthropic effects.

The core point is that even in the presence of anthropic effects, you can still reason and receive evidence about the latent variables and mechanistic factors which drive them.
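
Continuing the toy model from above: here the update comes from a latent variable you can observe directly, a hypothetical telescope survey of big vs. small bodies in the debris cloud. That observation channel is not censored by your survival, so an ordinary Bayesian update goes through and moves the estimate even though the survival-conditioned history could not.

```python
import numpy as np

p_big = np.linspace(0.01, 0.99, 99)          # candidate frequencies of big meteorites
prior = np.full_like(p_big, 1 / len(p_big))  # flat prior, as before

# Hypothetical telescope survey of the debris cloud: 3 big bodies out of 200.
big_seen, total_seen = 3, 200

# Binomial likelihood of those counts under each candidate frequency. Unlike the
# record of survived meteorites, these counts are not filtered by whether you
# are still alive to make them.
likelihood = p_big ** big_seen * (1 - p_big) ** (total_seen - big_seen)

posterior = prior * likelihood
posterior /= posterior.sum()

print("posterior E[p_big] =", (p_big * posterior).sum())  # well below the prior mean of 0.5
```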

What latent variables might look like in practice

Here are some examples of "latent variables" in the real world:

  • Institutional competence

  • The degree of epistemic competence and virtue which people who warn of existential risk display

  • The degree of plausibility of the various steps towards existential risk

  • The robustness of the preventative measures in place

  • etc.

In conclusion

You can still update in the face of anthropic effects by observing latent variables and mechanistic factors. As a result, it is still possible to have forecasting questions or bets that are informative about existential risk: you can make those questions and bets about the latent variables and the early steps in the mechanistic chain. I think this point is both obvious in hindsight and pretty key to thinking clearly about anthropic effects.