Culture War Roundup for the week of December 11, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I can't prove it, but assuming that other minds exist sure does seem to produce better advance predictions of my experiences. Which is the core of empiricism.

This sounds like a nice demarcation until you realize that it also applies to most long-lived religions, and that the things they make predictions about are a lot more practically useful than what reason-based approaches are concerned with, at least on the individual level.

What exactly is the problem with using the world model imparted by some religion, in contexts where the world model of that religion has a track record of making accurate predictions and reason does not?

I don’t think there are a huge number of such contexts, but there are definitely some (e.g. "if you strive to be honest and fair in your actions by the standard religious definitions, that genuinely will turn out better for you in the long run" makes good predictions in a tight-knit community even if the "reasonable" position is that you could probably get away with cheating in situations where you don’t see any way that you would get caught). You can of course try to galaxy-brain some reason that what the religion says is actually the same conclusion you would come to using pure logic, but I think "look around and see which approaches work well and which ones don't, and try out the ones that work well for others, and keep doing them if they work even if you don't fully understand why" is a perfectly legitimate approach.

In my experience it's very nice to have a strong-theoretical-model-backed lens you can use to interpret your empirical observations. But you can operate without such a lens, or with a lens based on a model that is known to be flawed (all models are wrong, some are useful).

What exactly is the problem with using the world model imparted by some religion, in contexts where the world model of that religion has a track record of making accurate predictions and reason does not?

Well, I personally think that's very sound, but there is indeed a problem still, which is that you have to be something first.

There's a specific color to your ultimate epistemology. There is one final arbiter to your internal thinking, one final authority, one personal catechism. And that is your true faith, even as you may recognize other frameworks to be instrumentally useful. When there's a conflict and your belief systems disagree, who wins? Much of the philosophical and theological debate isn't really about the modalities of applying belief systems in the nice conditions where they can be reconciled, but about what happens when they can't.

I do not mean to imply that it is not useful to have multiple lenses to view a situation through; it is in fact very helpful. But as you ask what the problem is, there it is: you may see that they both have a point, but you can't serve two masters.

When there's a conflict and your belief systems disagree, who wins?

I think it's one of those "the hardest decisions are the ones that ultimately matter the least" sorts of things -- if there were some strong reason to choose one side over the other, the decision would be an easy one (unless it's hard because you're missing obtainable information, in which case you should maybe go obtain that information). In my case I'd say that generally, all else being equal, I'm going to go with whatever would sound intuitively right to someone unsophisticated (though all else is not equal very often). I'm not that attached to that approach though -- I have mostly settled on it as a matter of pragmatism, and it seems to be working pretty well so far.
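
To make that concrete, here's a toy sketch (the payoff numbers are invented, just to illustrate the shape of the argument): the value you stand to lose by picking the worse of two options is exactly the gap between their expected values, and a decision only feels hard when that gap is small.

```python
# Toy sketch (invented payoffs) of "the hardest decisions are the ones
# that ultimately matter the least": the most you lose by choosing the
# worse of two options is the gap between their expected values.

def regret_if_wrong(ev_a: float, ev_b: float) -> float:
    """Value forgone by picking the worse of two options."""
    return abs(ev_a - ev_b)

# Easy decision: one option is clearly better, so a lot is at stake,
# but you also won't actually pick the wrong one.
print(regret_if_wrong(ev_a=100.0, ev_b=10.0))  # 90.0

# Hard decision: you can barely tell the options apart, and the cost
# of guessing wrong is correspondingly tiny.
print(regret_if_wrong(ev_a=50.0, ev_b=50.1))  # ~0.1
```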

You can't very well faithfully serve two masters, but you can totally faithfully serve zero masters.

I agree. However, if you replace "mind" with "consciousness" then I would say that this is no longer true. Assuming that other consciousnesses exist does not produce better advance predictions of experiences, since in principle it does not seem impossible for a human p-zombie to exist: a being that acts in every way like a human, including having human-level intelligence, but lacks consciousness.

What do you mean by consciousness? I think a model which includes the idea that other people have subjective experiences and motivations that lead to their actions absolutely yields better predictions than a model of other people as automatons responding mechanically to their environment.

Assuming that other consciousnesses exist does not produce better advance predictions of experiences

Sure it does! I talk about consciousness, and what I say about it is caused by how I myself experience consciousness. If consciousness exists in others, I expect them to talk about experiences of consciousness similar to the ones I have, and if it doesn't exist in others, well then it's pretty weird that they'd talk about having conscious experiences that sound really similar to my conscious experiences for some reason that is not "they are experiencing the same thing I am". If others were p-zombies, then sure, all of their prior utterances may have sounded like they were generated by them being conscious, but absent a deeper understanding of how exactly their p-zombification worked, I could not use that to generate useful predictions of their future utterances about consciousness (because, as we've established, the p-zombies are not just reporting on their internal state, but instead doing something else which is not that).

Modeling others as experiencing the same consciousness as I do does in fact lead to better advance predictions of my observations. It doesn't do so in a very philosophically satisfying way if you want to talk about axioms and proofs, but pragmatically speaking "other people are also conscious like me" sure does seem like a useful mental model for generating predictions.
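
To sketch what I mean by "useful for generating predictions" (all the actions and probabilities below are invented; this is just the shape of the comparison): score two models of another person by log-loss on their observed actions, one that attributes a goal to them and one that treats them as a mechanism acting at random. The goal-attributing model puts its probability where the behavior actually goes, so it reliably scores better in advance.

```python
import math
import random

# Toy sketch (invented actions and probabilities): what "better advance
# predictions" cashes out to. We observe an agent who wants coffee and
# score two predictive models on their actions via average log-loss.

random.seed(0)
ACTIONS = ["walk to kitchen", "fill kettle", "pour coffee", "stare at wall"]
TRUE_WEIGHTS = [0.3, 0.3, 0.3, 0.1]  # goal-directed, with a little noise

def agent_acts() -> str:
    return random.choices(ACTIONS, weights=TRUE_WEIGHTS)[0]

# Model A: "they have a mind and want coffee" -> mass on goal-consistent acts.
model_mind = dict(zip(ACTIONS, [0.3, 0.3, 0.3, 0.1]))

# Model B: "they're a mechanism responding at random" -> uniform.
model_mechanism = {a: 0.25 for a in ACTIONS}

def avg_log_loss(model: dict, n: int = 10_000) -> float:
    return -sum(math.log(model[agent_acts()]) for _ in range(n)) / n

print("mind model :", round(avg_log_loss(model_mind), 3))       # ~1.31 (lower)
print("mechanism  :", round(avg_log_loss(model_mechanism), 3))  # ~1.39 (higher)
```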

"other people are actually just p zombies behaving as if they are conscious like me" generates predictions that are just as good and as a bonus you get to psychopathically advance your interests without regard to anything except blowback that affects you directly. Leave the shopping cart in the parking lot. Go at the speed limit in the leftmost lane. Drive with your high beams on. Who cares if the NPCs are upset? That's a way better life than actually being pro social all the time.

"other people are actually just p zombies behaving as if they are conscious like me" generates predictions that are just as good

I genuinely don't think it does. Unless you mean "believing" that in the classic "invisible dragon in my garage" sense, which I don't count as actual belief. Rule of thumb: if you're preemptively coming up with excuses for why your future observations will not support your theory over competing theories, or why your theory actually predicts exactly the same thing that the classic theory predicts and the only differences are in something unfalsifiable, that should be a giant red flag for your theory.

For example: I think that my experience of consciousness is caused by specific physical things my nervous system does sometimes. If I slap some electrodes on my scalp to take an electroencephalogram, and then do some task that involves introspecting, making conscious decisions, and describing those experiences, I expect that I will see particular patterns of electrical activity in my brain any time I make a conscious decision. I expect that the EEG readouts from other people would have similar patterns.

For the p-zombie explanation to make sense, we would either have to say that my experience of consciousness and the things I said about it were not caused by things happening in my nervous system, or we would have to say that those patterns in my nervous system and the way I described my experience were related to my consciousness, but in other people there was something else going on which just happened to have indistinguishable results. And also we would predict in advance that any time we try to use the "p-zombie" hypothesis what we actually end up doing is going "what do we predict in the world where other people's consciousness works the same way as mine" and then saying "the p-zombie hypothesis says the same thing" -- the p-zombie hypothesis does not actually predict anything on its own.
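
That "does not actually predict anything on its own" point can be put in plain Bayesian terms (a sketch with made-up numbers): if the p-zombie hypothesis is constructed to predict exactly whatever the consciousness hypothesis predicts, then every observation carries a likelihood ratio of 1 between them, and the posterior odds can never move no matter how much evidence comes in.

```python
# Sketch (made-up numbers): Bayesian updating between two hypotheses
# that are constructed to make identical predictions.
#   H1: other people are conscious like me
#   H2: other people are p-zombies that behave exactly as if conscious
# By construction P(obs | H1) == P(obs | H2) for every observation, so
# no observation can ever shift the odds between them.

prior_odds = 1.0  # odds of H1 over H2 before any data (made up)

observations = [
    "reports a conscious experience",
    "describes qualia that sound like mine",
    "shows the same EEG pattern I do while introspecting",
]

posterior_odds = prior_odds
for obs in observations:
    p_obs_h1 = 0.9  # illustrative
    p_obs_h2 = 0.9  # identical, by construction of H2
    posterior_odds *= p_obs_h1 / p_obs_h2  # likelihood ratio is always 1.0

print(posterior_odds)  # 1.0 -- the evidence never moves the needle
```

Which is the "invisible dragon in my garage" pattern in numerical form: a hypothesis immunized against disconfirmation is equally incapable of being confirmed.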

That's a way better life than actually being pro-social all the time.

As an empirical matter, I think that if you try rating your internal subjective experience after ripping off a stranger who gets angry at you but who you'll never see again vs your internal subjective experience after helping a stranger who expresses gratitude but you'll never see again, you may be surprised at which one results in higher subjective well-being. That doesn't really have any bearing on the factual questions of other peoples' internal experiences, just a prediction I have about what your own internal experience will be like.

As an empirical matter, I think that if you try rating your internal subjective experience after ripping off a stranger who gets angry at you but who you'll never see again vs your internal subjective experience after helping a stranger who expresses gratitude but you'll never see again, you may be surprised at which one results in higher subjective well-being.

Bro, that's just an illusion due to you smuggling in your non-empirical (read: religious) belief that other people's feelings matter. Do you feel bad about ripping off video game characters?

Bro, that's just an illusion due to you smuggling in your non-empirical (read: religious) belief that other people's feelings matter.

My belief that other people have conscious experience and my belief that that conscious experience matters are not the same belief. The belief that other people's experiences matter to me is something that comes from my moral framework -- and yes, many people use religious teachings as their moral framework, so in that sense you could view it as similar to religion. But again, it's helpful to distinguish between that-which-is and that-which-should-be. I do expect that my sense of that which should be is downstream of some empirically verifiable properties of multi-agent systems, and also a shit-ton of random chance, but I don't have super strong intuitions for what those properties are, nor do I think that I'm morally obligated to change my own behavior away from what my moral intuitions say I should do just because I learn something new about game theory.
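
For what it's worth, the kind of multi-agent property I have in mind is the standard toy result below (payoffs invented, the usual prisoner's dilemma shape): defection wins any single round, but in repeated play reciprocal cooperators end up better off than pure defectors, which is at least suggestive of where pro-social intuitions might pay their way.

```python
# Sketch (standard toy game theory, invented payoffs): in a repeated
# prisoner's dilemma, reciprocal cooperation beats pure defection over
# time even though defection wins any single round.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(their_last: str) -> str:
    return their_last  # mirror their previous move

def always_defect(their_last: str) -> str:
    return "D"

def play(strat_a, strat_b, rounds: int = 200):
    score_a = score_b = 0
    last_a = last_b = "C"  # both treated as having "cooperated" before round 1
    for _ in range(rounds):
        move_a, move_b = strat_a(last_b), strat_b(last_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (600, 600): mutual cooperation
print(play(always_defect, tit_for_tat))    # (204, 199): defector barely ahead
print(play(always_defect, always_defect))  # (200, 200): everyone stuck at the bottom
```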

Do you feel bad about ripping off video game characters?

I don't think video game characters have conscious experiences. That seems like a pretty non-extreme viewpoint to me: "video game characters are conscious", as a world model, generates quite bad predictions about future observations. In a pure consequentialist sense, I do expect it's fairly likely that the game designers will punish the player's decision to rip off a character, but also it's not like winning the game is a moral obligation, so I might rip off a video game character because I expect that to lead to more entertaining dialogue.

Honestly though, what position are you even trying to argue for here? I am very skeptical that you endorse the solipsist position yourself (though if you do I expect your reasoning there, and particularly any observations you could make that would convince you that it wasn't true, would be an interesting conversation).