
Culture War Roundup for the week of December 11, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


So there's the trivial answer, which is that the program "run every program of length 1 for 1 step, then every program of length 2 for 1 step, then every program of length 1 again, and so on [1,2,1,3,1,2,1,4,1,2,...] style" will, given an infinite number of steps, run every program of finite length for an infinite number of steps. And my understanding is that the Kolmogorov complexity of that program is pretty low, as these things go.
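
If it helps to see that interleaving order concretely, here is a minimal sketch of it as a schedule (my own toy code, not a rigorous Turing-machine construction; the names `ruler` and `dovetail_schedule` are mine). The order [1,2,1,3,1,2,1,4,...] is the "ruler sequence", and any particular length shows up infinitely often, so every finite-length program gets stepped infinitely many times in the limit.

```python
def ruler(k: int) -> int:
    """k-th term of 1, 2, 1, 3, 1, 2, 1, 4, ...: one plus the number of trailing zero bits of k."""
    length = 1
    while k % 2 == 0:
        k //= 2
        length += 1
    return length

def dovetail_schedule(n_stages: int):
    """Yield, for each stage, which program length gets its next single step."""
    for k in range(1, n_stages + 1):
        yield ruler(k)

print(list(dovetail_schedule(15)))  # [1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1]
```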

But even if we assume that our universe is computable, you're not going to have a lot of luck locating our universe in that system.

Out of curiosity, why do you want to know? Kolmogorov complexity is a fun idea, but my general sense is that it's not actually useful for almost anything practical: when it comes to reasoning about behaviors that generalize to all Turing machines, your approaches tend to fail once the TMs you're dealing with have a large number of states (like 7, for example; even 6 is pushing it).

We're debating epistemology, and @self_made_human is arguing that some unfalsifiable theories about the origin of the universe are superior to others because they are "lower complexity" in the information-theory sense, which he proposed measuring through Kolmogorov complexity. My position is that there is no way, even in principle, to rigorously measure the Kolmogorov complexity of the Christian God, or of the Karmic Wheel, or of a universe that loops infinitely via unknown physics; you cannot measure things you cannot adequately describe, and mechanisms that are unobservable and unfalsifiable cannot be adequately described, by definition.

There are a few things I imagine you could be saying here.

  1. Determining what you expect your future experiences to be by taking your pre-existing distribution over world models (the "prior") and your observations, and using something like Bayes' rule to integrate them, is basically the correct approach. However, Kolmogorov complexity is not a practical measure to use for your prior; you should use some other prior instead. (A toy sketch of this kind of update follows the list.)
  2. Bayesian logic is very pretty math, but it is not practical even if you have a good prior. You would get better results by using some other statistical method to refine your world model.
  3. Statistics-flavored approaches are overrated and you should use [pure reason / intuition / astrology / copying the world model of successful people / something else] to build your world model.
  4. World models aren't useful. You should instead learn rules for what to do in various situations that don't necessarily have anything to do with what you expect the results of your actions to be.
  5. All of these alternatives are missing all the things you find salient and focusing on weird pedantic nerd shit. The actual thing you find salient is X and you wish I and people like me would engage with it. (also, what is X? I find that this dynamic tends to lead to the most fascinating conversations once both people notice it's happening but they'll talk past each other until they do notice).
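
For concreteness on point 1, here's a minimal toy sketch of that kind of update. The model names and all the numbers are invented purely for illustration; nothing here is specific to Kolmogorov complexity or to anything else in this thread.

```python
# Hypothetical world models and prior weights (numbers invented purely for illustration).
prior = {"model_A": 0.5, "model_B": 0.3, "model_C": 0.2}

# Likelihood each model assigns to some observation we actually made (also invented).
likelihood = {"model_A": 0.10, "model_B": 0.40, "model_C": 0.05}

# Posterior is proportional to prior * likelihood; normalize so the weights sum to 1.
unnormalized = {m: prior[m] * likelihood[m] for m in prior}
total = sum(unnormalized.values())
posterior = {m: w / total for m, w in unnormalized.items()}

print(posterior)  # model_B gains weight because it predicted the observation best
```

The only point of the sketch is that the prior weights and the likelihoods are separate ingredients: as I read it, point 1 objects to where the prior weights come from, not to the update rule itself.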

I am guessing it's either 2 or 5, but my response to you will vary a lot based on which it is and the details of your viewpoint.

Take two theories about our actual universe:

A) The universe loops infinitely based on physical principles we have no access to.

B) The universe is a simulation, running in a universe we have no access to.

My argument is that none of us can break out paper and pencil and meaningfully convert the ideas behind these two statements into a formula, and then use mathematics to objectively prove that one theory is more likely to be true than the other, whether by Kolmogorov complexity, or minimum message length, or Shannon entropy, or Bayesian Occam's Razor, or any other method one might name. It seems obvious to me that no amount of analysis can extract signal from no signal.

In short, I'm arguing that when there is no evidence, there is no meaningful distinction between priors.

I assume you have some reason you think it matters that we can't use mathematics to come up with a specific objective prior probability that each model is accurate?

Edit: also, I note that I am doing a lot of internal translation of stuff like "the theory is true" into "the model makes accurate predictions of future observations" to fit into my ontology. Is this a valid translation, or is there some situation where someone might believe a true theory that would nevertheless lead them to make less accurate predictions about their future observations?

I assume you have some reason you think it matters that we can't use mathematics to come up with a specific objective prior probability that each model is accurate?

I don't think reasoned beliefs are forced by evidence; I think they're chosen. He's arguing that specific beliefs aren't a choice, any more than believing 1+1 = 2 is a choice. To support that thesis, he's claiming that the math determines that one of those is less complex than the other, and therefore the math determines that the less complex one is more likely, and therefore he did not choose to adopt it, but rather was compelled to adopt it by deterministic rules. If in fact he's mistaken about the rules, then they can't be the source of his certainty, which means it has to come from somewhere else. I think it can be demonstrated that it's derived from an axiom, not a conclusion forced by evidence.

also, I note that I am doing a lot of internal translation of stuff like "the theory is true" into "the model makes accurate predictions of future observations" to fit into my ontology.

Close enough, I think? The larger point I'm hoping to get back to is that the deterministic model of reason that seems to be generally assumed is a fiction, and that one can directly observe the holes in this fiction by closely examining one's own reasoning. You drew a distinction between "beliefs as expected consequences" and "beliefs as models determining action". I would argue that our expectations of consequences are quite malleable, and that the beliefs we choose decisively shape both the experiences we have and how we experience them.

[EDIT] - Sorry if these responses seem a bit perfunctory. I always feel a bit weird about pulling people into the middle of one of these back-and-forths, and it feels discourteous to immediately unload on them, so I try to keep answers short to give them an easy out.

I don't think reasoned beliefs are forced by evidence; I think they're chosen. He's arguing that specific beliefs aren't a choice, any more than believing 1+1 = 2 is a choice.

The choice of term "reasoned belief" instead of simply "belief" sounds like you mean something specific and important by that term. I'm not aware of that term having any particular meaning in any philosophical tradition I know about, but I also don't know much about philosophy.

He's arguing that specific beliefs aren't a choice, any more than believing 1+1 = 2 is a choice.

That sounds like the "anticipated experiences" meaning of "belief". I also cannot change those by sheer force of will. Can you? Is this another one of those less-than-universal human experiences similar to how some people just don't have mental imagery?

The larger point I'm hoping to get back to is that the deterministic model of reason that seems to be generally assumed is a fiction

I don't think I would classify probabilistic approaches like that as "deterministic models of reason".

But yeah I'm starting to lean towards "there's literally some bit of mental machinery for intentionally believing something that some people have".

The choice of term "reasoned belief" instead of simply "belief" sounds like you mean something specific and important by that term.

My interlocutor above pointed out that some people have beliefs induced by severe mental illness, and that these beliefs are not chosen. It's a fair point, and those certainly aren't the type of belief I'm talking about. Likewise, 1+1=2 or a belief in gravity are self-reinforcing to a degree that it's probably not practical to shift them, and may not be possible at all. Most beliefs are not caused by mental illness, though, and are not as simple as 1+1=2. We have to reason about them to arrive at an answer, so "reasoned beliefs" seems like a more precise term for them.

That sounds like the "anticipated experiences" meaning of "belief". I also cannot change those by sheer force of will. Can you?

In terms of 1+1=2 or gravity, no. I think this might be because they're too self-reinforcing, or because there's no incentive to doubt them, or both, but they seem pretty stable.

I don't think I would classify probabilistic approaches like that as "deterministic models of reason".

People talk about reasoning as though it's a deterministic process. They say that evidence has weight, that evidence can force conclusions. They often talk about how their beliefs weren't chosen; they just followed where the evidence led. They expect evidence to work on other people deterministically as well: when they present what they think is a weighty piece of evidence and the other person weighs it lightly, they often assume the other person is acting in bad faith. People often expect a well-crafted argument to force someone on the other side to agree with them.

I used to believe all these things. I saw logic and argumentation as something approximating math, as 1+1=2. I thought if I could craft a good enough argument, summon good enough evidence, people on the other side would be forced to agree with me. And likewise, I thought I believed things because the evidence had broken that way.

Having spent a couple of decades debating with people, I think that model is fatally flawed, and I think believing it makes people less rational, not more. Worse, I think it interferes with people's ability to communicate effectively with each other, especially across a large values divide. Further, I think it's pretty busted even from its own frame of reference: while evidence cannot compel agreement, it can encourage it, and there is a lot of very strong, immediately available evidence that people do not actually reason the way the common narrative says they should.

I think that's a very pragmatic and reasonable position, at least in the abstract. You're in great intellectual company, holding that set of beliefs. Just look at all of the sayings that agree!

  • You can't reason someone out of something they didn't reason themselves into
  • It is difficult to get a man to understand something, when his salary depends on his not understanding it
  • We don't see things as they are, we see them as we are
  • It's easier to fool people than to convince them that they have been fooled

And yet! Some people do change their mind in response to evidence. It's not everyone, it might not even be most people, but it is a thing that happens. Clearly something is going on there.

We are in the culture war thread, so let's wage some culture war. Very early in this thread, you made the argument

What does replacing the Big Bang with God lose out on? Both of them share the attribute of serving as a termination point for materialistic explanations. Anything posited past that point is unfalsifiable by definition, unless something pretty significant changes in terms of our understanding of physics.

What does replacing the Big Bang with God lose out on? I think the answer is "the entire idea that you can have a comprehensible, gears-level model of how the universe works". A "gears-level" model should, at minimum, satisfy criteria like these:

  1. If the model were falsified, there should be specific changes to what future experiences you anticipate (or, at the very least, you should lose confidence in some specific predictions you had before).
  2. Take the components of your model. If you make some large, arbitrary change to one of those parts, the model should now make completely different (and probably wrong, and maybe inconsistent) predictions.
  3. If you forgot a piece of your model, could you rederive it based on the other pieces of the model?

So I think the standard model of physics mostly satisfies the above. Working through:

  1. If general relativity were falsified, we'd expect that e.g. the predictions it makes about the precession of Mercury would be inaccurate enough that we would notice. Let's take the cosmological constant Λ in the Einstein field equations, which represents the energy density of the vacuum and means that, on large enough scales, there is a repulsive force that overpowers the attractive force of gravity.
  2. If we were to, for example, flip the sign of Λ, we would expect the universe's expansion to be decelerating rather than accelerating (affecting e.g. how redshifted/blueshifted distant standard candles appear). A worked version of this follows the list.
  3. If you forget one physics equation, but remember all the others, it's pretty easy to rederive the missing one. Source: I have done that on exams when I forgot an equation.
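
To make the sign-flip in point 2 concrete, here is the acceleration (second Friedmann) equation, which is the standard place the cosmological constant enters the expansion history. Here a is the scale factor, ρ the energy density, p the pressure, G Newton's constant, and c the speed of light; treat this as a sketch from memory rather than a derivation.

```latex
\frac{\ddot{a}}{a} \;=\; -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) \;+\; \frac{\Lambda c^{2}}{3}
```

With Λ > 0 the second term eventually dominates as matter dilutes, so the expansion accelerates; flip the sign of Λ and both terms are negative, so the expansion decelerates instead, which is what would show up in the redshifts of distant standard candles.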

Side note: the Big Bang does not really occupy a God-shaped space in the materialist ontology. I can see where there would be a temptation to view it that way - the Big Bang was the earliest observable event in our universe, and therefore can be viewed as the cause of everything else, just like God - but the Big Bang is a prediction (retrodiction?) that is generated by using the standard model to make sense of our observations (e.g. the redshifting of standard candles, the cosmic microwave background). The question isn't "what if we replace the Big Bang with God", but rather "what if we replace the entire materialist edifice with God".

In any case, let's apply the above tests to the "God" hypothesis.

  1. What would it even mean for the hypothesis "we exist because an omnipotent, omniscient, omnibenevolent God willed it" to be falsified? What differences would you expect to observe, even in principle?
  2. Let's say we flip the "omniscient" part of the above: God is now omnipotent and omnibenevolent, but not omniscient. What changes?
  3. Oops, you forgot something about God. Can you rederive it based on what you already know?

My point here isn't really "religion bad" so much as "you genuinely do lose something valuable if you try to use God as an explanation".
