Culture War Roundup for the week of June 5, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

The surprising part isn't what the phenomenon looks like. It's that there are observers at all in the one-shot universe. It does provide information.

It doesn't provide information, because the only possible one-shot universe we can observe is one with observers. Maybe there was a 0.000...01% chance that the one-shot universe would be fine-tuned for observers, or maybe there was a 99.999...% chance that the one-shot universe would be fine-tuned for observers. Either of these possibilities is consistent with the fact that we, as observers, see a one-shot universe with observers in it.

So imagine we have two hypotheses, to simplify.

Hypothesis 1 is that there's a 95% chance of a rational being coming into existence.

Hypothesis 2 is that there's a 0.001% chance of a rational being coming into existence.

You, the observer, notice, hey, I'm a rational being who came into existence!

I'm saying that it's rational to think that this should update your priors towards hypothesis 1 over hypothesis 2. But only if there's a one-shot or few-shot universe.

It's 95,000 times more likely for a rational being to come into existence under hypothesis 1 than under hypothesis 2. So this piece of evidence should shift our odds by a factor of 95,000 under the odds form of Bayes' theorem, I believe.
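To make the arithmetic concrete, here's a minimal Python sketch of that odds-form update (assuming, purely for illustration, even 1:1 prior odds between the two hypotheses):

```python
# Odds-form Bayes update for the two hypotheses above.
# Hypothesis 1: P(a rational being comes to exist) = 95%.
# Hypothesis 2: P(a rational being comes to exist) = 0.001%.
p_h1 = 0.95
p_h2 = 0.00001

# Likelihood ratio of the evidence "a rational being exists":
likelihood_ratio = p_h1 / p_h2  # 95,000

# Assume (for illustration only) even 1:1 prior odds between H1 and H2.
prior_odds = 1.0
posterior_odds = prior_odds * likelihood_ratio
posterior_p_h1 = posterior_odds / (1.0 + posterior_odds)

print(f"likelihood ratio: {likelihood_ratio:,.0f}")              # 95,000
print(f"P(H1 | a rational being exists): {posterior_p_h1:.6f}")  # ~0.999989
```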

For a comparison (and I'm not certain that this is a perfect analogy, but I think it works), let's say you are told that there's a 50/50 chance a surgery succeeds. In the 50% of cases where it succeeds, you always wake up. In the 50% where it fails, they freeze your body, and you estimate that there's a 1/50 chance that they manage to revive you in the distant future.

You wake up. Assuming there's not going to be any distinguishing sensation between the two ways you could wake up, which should you think is more likely? I'd say you should conclude there are 50:1 odds that it worked.
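A quick sketch of where the 50:1 comes from, using the numbers above:

```python
# Sanity check on the 50:1 figure from the freezing example.
p_worked_and_wake = 0.5 * 1.0       # surgery succeeds: you always wake up
p_failed_and_wake = 0.5 * (1 / 50)  # surgery fails: 1-in-50 chance of revival

odds_worked = p_worked_and_wake / p_failed_and_wake
print(odds_worked)  # 50.0, i.e. 50:1 odds the surgery worked, given waking up
```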

You, the observer, notice, hey, I'm a rational being who came into existence! I'm saying that it's rational to think that this should update your priors towards hypothesis 1 over hypothesis 2. But only if there's a one-shot or few-shot universe.

No, it shouldn't change your priors, irrespective of whether there's a one-shot or multi-shot universe.

Given that you are a rational being, the odds that you will observe a universe where a rational being came into existence are exactly 100%. This is true regardless of whether hypothesis 1 or hypothesis 2 is true, and therefore it tells you no additional information about which hypothesis is correct.

You wake up. Assuming there's not going to be any distinguishing sensation between the two ways you could wake up, which should you think is more likely? I'd say you should conclude there are 50:1 odds that it worked.

That's true, because this situation is materially different from the one we are talking about above. Here, we know two sets of probabilities ex ante (both of which can occur), and are now trying to decide, ex post, which is more likely to have occurred. Given two different possibilities, the one with the higher probability is, by definition, the one that was more likely to occur (this is true whether the surgery is one-shot or many-shot, by the way).

In the situation we are discussing, we don't know anything about the probabilities ex ante, and we are trying to derive those probabilities based on our ex post observations.

A better analogy would be: you go into a surgery and nobody knows your odds of survival. You wake up after the surgery. What, if anything, does this tell you about your ex ante odds of surviving the surgery? The answer is, it tells you nothing about those odds. It just tells you that you survived. Your odds of survival could have been 0.001% or 99.9%, but since you can only observe outcomes in which you survive, the fact that you observe your own survival gives you no additional information about the ex ante likelihood of that outcome.

Another example to illustrate the point. Suppose an alien hands you a black box with a screen and a button on it. You push the button, and the number "21" appears on the screen. Pushing the button again does nothing and you cannot disassemble the box to learn how it works. What are the odds that the box was going to display the number "21" when you pushed the button? The answer is, you have no idea (except you know the odds are not 0%). It might have been a 100% chance; it might have been a 0.000000001% chance. You have no way of knowing based on your single ex post observation.

By starting with "given that you exist," you're assuming away the part you can learn from. Before conditioning on the fact that you exist, you should have some objective estimate of the likelihood that a random universe would be able to contain life.

But now that there's some agent, that's data! That's information! You can use that to shift your beliefs. Now you're about to respond that there will always be an agent whenever you have the opportunity to think about these things. Maybe so. But there won't always be an agent in the universes in the world-model we were just talking about. And those worlds where there is one look different from the worlds where there isn't one. They were created by mechanisms that result in a higher probability that such agents come to exist. And so you should assume that you, in actuality, are more likely to be in the higher-probability portions than you'd think on a basis that ignores your coming to exist.

Per your surgery case, I don't think your conclusion is true? Most people who survive a surgery do so in cases where survival is likely, so I would think you should conclude that you're among that pool. That seems like another case where the bare fact of your existence provides evidence about the world.

Well, that's not quite just your existence, since what I just said requires knowing how often other people survive surgery, but it would still seem, in the general case, that you should update to some extent (it may be a small one, depending on your prior knowledge) towards your surgery having had higher odds of survival, simply by waking up.

See Joe Carlsmith's account of SIA here for one analysis that would incline one towards being in universes where more people exist. It's fairly rigorous, but not without difficulties. I don't remember it addressing the question exactly as we've posed it, but it's definitely very relevant.

I'm not convinced that it's quite the same framing as I had used above for the first two paragraphs, so maybe I'll have to think about whether any of that needs revising. But I'm still pretty confident that you coming to exist provides you information that you can use to shift your beliefs.

Before conditioning on the fact that you exist, you should have some objective estimate of the likelihood that a random universe would be able to contain life.

If we could start there, then we would gain some information from the later observation that the universe contains life. But we cannot start there. We start in a universe where the existence of life is a given, with 100% probability.

But now that there's some agent, that's data! That's information!

It's not. You can only observe a universe with agents because you are an agent. That statement is always true, no matter the prior probabilities, so you cannot draw any conclusion about prior probabilities from the fact that it happened.

what I just said requires knowing how often other people survive surgery

Right, if we could bring in outside information about how likely a universe is to contain life and what factors influence that, then we might be able to draw some conclusions from the fact of our existence, but we have no such outside information, so we cannot draw such conclusions.

But we cannot start there.

Why can't we start there? Isn't that equivalent to stating that you can't think about universes where life doesn't exist? That's transparently false. Working out what our beliefs should be if we ignore a piece of information is something we're allowed to do.

That statement is always true.

No, that statement isn't always true. It's only always true for observers. That means that you should shift your probabilities from what they would be if you ignored which worlds you're more likely to exist in as an observer, to what they should be after taking that into account.

Right, if we could bring in outside information about how likely a universe is to contain life and what factors influence that, then we might be able to draw some conclusions from the fact of our existence, but we have no such outside information, so we cannot draw such conclusions.

Assuming our thinking is at all Bayesian, shouldn't we have some sort of probability distribution? Not sure exactly what one should look like, but that should exist. In any case, did you miss what I said about how it should always be the case, whatever that probability distribution is, that you should update (on your existence alone, ignoring other post-waking-up information) towards the chance of your survival having been higher than you thought it was before your surgery?

Why can't we start there? Isn't that equivalent to stating that you can't think about universes where life doesn't exist? That's transparently false. Working out what our beliefs should be if we ignore a piece of information is something we're allowed to do.

I'm saying we literally can't start there. We can't go back and observe before the beginning of the universe. We don't have any information about what was happening then, or what the dynamics of the situation looked like. We only have information about the universe we currently inhabit.

We can certainly think about it, but we don't know anything about it, so it doesn't do us much good.

No, that statement isn't always true. It's only always true for observers. That means that you should shift your probabilities from what they would be if you ignored which worlds you're more likely to exist in as an observer, to what they should be after taking that into account.

You are an observer, so it's always subjectively true for you 100% of the time.

Assuming our thinking is at all Bayesian, shouldn't we have some sort of probability distribution? Not sure exactly what one should look like, but that should exist.

Yes, so start with whatever your Bayesian priors are. You think there's an x% chance that the universe would contain observers. For every possible value of x other than x=0, the odds that you will observe a universe with observers are 100%. So there is no basis to either raise or lower your prior based on this observation, because the odds of it being true are exactly the same in every possible scenario other than x=0.

In any case, did you miss what I said about how it should always be the case, whatever that probability distribution is, that you should update (on your existence alone, ignoring other post-waking-up information) towards the chance of your survival having been higher than you thought it was before your surgery?

Assuming you have no outside information about the surgery, there is no basis to update your priors. This is true for the same reason I explained above. For any prior other than x=0, the chance that you will observe you survived the surgery is exactly equal to 100%. If you did not survive, you would not make any observation, so the only possible observation you can make is "I survived."

I'm saying we literally can't start there. We can't go back and observe before the beginning of the universe. We don't have any information about what was happening then, or what the dynamics of the situation looked like. We only have information about the universe we currently inhabit.

We can certainly think about it, but we don't know anything about it, so it doesn't do us much good.

But you've acknowledged that we should have some sort of prior probability distribution (it may be pretty uncertain), so we can use that.

If you're saying that we always have to take into account that we exist, and that we can't think about cases where that's not true, how does it not follow that you can't imagine worlds where you don't exist?

You are an observer, so it's always subjectively true for you 100% of the time.

I understand that. What I'm trying to make clear is something like this: the space of all worlds and the space of all worlds with observers don't look the same, and so observers will see things that look more like the latter. And worlds with observers will look more like multiverses, or worlds where the apparent fine-tuning is somehow necessary, or theism, because in our ideas of the possibilities, observers are relatively more likely to occur in worlds of those varieties, as compared to one-shot worlds that require high degrees of fine-tuning.

Yes, so start with whatever your Bayesian priors are. You think there's an x% chance that the universe would contain observers. For every possible value of x other than x=0, the odds that you will observe a universe with observers are 100%. So there is no basis to either raise or lower your prior based on this observation, because the odds of it being true are exactly the same in every possible scenario other than x=0.

No, it isn't 100%. You're ignoring all the scenarios where you don't exist. Yes, you won't be around in those worlds to decrease your estimate of the probability of life existing, but that doesn't mean you shouldn't take them into account.

And we've established that we're talking about a one-shot universe, so there's definitely a good chance that there is no observer, in which case the odds that you will observe a universe are less than 100%, because in some of those worlds you don't observe anything at all. That you exist is information that every observer will have, but that doesn't mean that it isn't information.

Assuming you have no outside information about the surgery, there is no basis to update your priors. This is true for the same reason I explained above. For any prior other than x=0, the chance that you will observe you survived the surgery is exactly equal to 100%. If you did not survive, you would not make any observation, so the only possible observation you can make is "I survived."

You definitely can make observations.

Let's say you have two hypotheses (in actuality, these should be distributions, but I haven't looked into the math for how to handle those). You think there's a 67% chance the surgery works, and a 33% chance it fails. It doesn't matter whether this is based on data; we're assuming you have priors. Next, you think there's an 80% chance you wake up again if the surgery works, and a 20% chance that you wake up if it fails. The doctors have told you that waking up should feel the same, if you survive, regardless of whether the surgery worked or not.

You wake up. How likely should you think it is that the surgery worked?

Well, waking up is four times as likely if the surgery worked (80% vs. 20%, a 4:1 likelihood ratio), so I'm pretty sure that under the odds form of Bayes' theorem we'd multiply that by the roughly 2:1 prior odds (67:33) to get posterior odds of about 8:1, i.e. an 89% chance that the surgery worked, up from the original 67%.

To check that this makes sense: 67% × 80% = 53.6% of all agents wake up after a surgery that worked, while 33% × 20% = 6.6% wake up after one that failed, and 53.6/(53.6 + 6.6) ≈ 89%. So we indeed find that 89% of rational agents who thought this way and woke up had surgery that worked.
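And here's a small Monte Carlo sketch (using the made-up numbers above) that checks the ~89% figure by simulating a lot of agents and looking only at the ones who wake up:

```python
import random

# Monte Carlo sketch: simulate many agents with the priors above and
# look only at the ones who wake up.
random.seed(0)
trials = 1_000_000
worked_and_woke = 0
failed_and_woke = 0

for _ in range(trials):
    worked = random.random() < 0.67    # 67% prior that the surgery works
    p_wake = 0.80 if worked else 0.20  # 80% vs. 20% chance of waking up
    if random.random() < p_wake:
        if worked:
            worked_and_woke += 1
        else:
            failed_and_woke += 1

# Fraction of awake agents whose surgery worked: should be close to 0.89.
print(worked_and_woke / (worked_and_woke + failed_and_woke))
```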

You don't need more formal knowledge; subjective probabilities are good enough. If you don't try to update, your agents will be wrong more frequently than mine.

For a comparison, you clearly think the anthropic principle means that most planets with observers will, along certain axes, look like Earth: that is, unusually good, for a planet, at sustaining life. But under your reasoning, shouldn't that not be the case? That is, given that you exist, there's a 100% chance that you exist. Therefore, you can't distinguish between the "looks like Earth, well suited for life" hypothesis and the "chance quantum fluctuations brought me and an oxygen mask into being on an inhospitable planet" hypothesis; on both of these, if there's any observer, the observer has a 100% subjective chance of existing, so you can't adjust your priors from how likely you think each sort of planet is to form. And since inhospitable planets are so much more common overall, I would think, shouldn't you be shocked that we're on a hospitable one? You clearly don't actually think that, but I don't see how that's different from the case we're arguing over. You think that we should expect to find ourselves on a hospitable planet, but have no problem asserting that we should not expect to find ourselves in a hospitable world (where "world" is meant to extend beyond just the universe).

You're probably about to respond to the last paragraph by saying that "inhospitable planets are much less common in worlds with observers, so it makes sense to look at those." This is undoubtedly correct. But why should we not similarly expect that inhospitable worlds are much less common for observers, and prefer the hospitable ones: the ones where the fine-tuning is actually only apparent, and the physical constants are either necessary or much more likely to be hospitable to life; the ones where there's a multiverse, and so it's practically guaranteed that some of the universes are favorable to life; the ones where the world was intentionally made to be hospitable to life, whether by a God or the runners of a simulation, or whatever. All of these are to the one-shot, heavy-fine-tuning-required-that-we'll-just-have-to-luck-into world what Earth is to Pluto.

I probably shouldn't waste too much more time on this, so I'll be willing to respond once more, and then will (probably) stop.
