This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
So there's the trivial answer, which is that the program "run every program of length 1 for 1 step, then every program of length 2 for 1 step, then every program of length 1 again, and so on [1,2,1,3,1,2,1,4,1,2,...] style" will, given an infinite number of steps, run every program of finite length for an infinite number of steps. And my understanding is that the Kolmogorov complexity of that program is pretty low, as these things go.
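For concreteness, here's a minimal Python sketch of just the interleaving schedule described above. `step_all_programs_of_length` is a hypothetical callback standing in for "advance every program of that length by one step"; the only point of the sketch is that every length recurs infinitely often, so every program of finite length eventually gets unboundedly many steps.

```python
from itertools import count

def ruler(k: int) -> int:
    """k-th term (k >= 1) of the sequence 1, 2, 1, 3, 1, 2, 1, 4, ...:
    one plus the number of trailing zero bits in k."""
    n = 1
    while k % 2 == 0:
        k //= 2
        n += 1
    return n

def dovetail(step_all_programs_of_length):
    """Run forever; on tick k, give one step to every program of length
    ruler(k). Each length recurs infinitely often, so every program of
    finite length receives an unbounded number of steps."""
    for k in count(1):
        step_all_programs_of_length(ruler(k))
```

The scheduler itself is only a few lines, which is the informal sense in which the whole construction is "low complexity"; all the heavy lifting hides inside whatever simulator you plug in as the callback.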
But even if we assume that our universe is computable, you're not going to have a lot of luck locating our universe in that system.
Out of curiosity, why do you want to know? Kolmogorov complexity is a fun idea, but my general sense is that it's not actually useful for almost anything practical, because when it comes to reasoning about behaviors that generalize to all Turing machines, you'll find that your approaches fail once the TMs you're dealing with have more than a handful of states (7, say, and even 6 is pushing it).
We're debating epistemology, and @self_made_human is arguing that some unfalsifiable theories about the origin of the universe are superior to others because they are "lower complexity" in the information-theory sense, which he proposed measuring through Kolmogorov complexity. My position is that there is no way, even in principle, to rigorously measure the Kolmogorov complexity of the Christian God, of the Karmic Wheel, or of a universe that loops infinitely via unknown physics; you cannot measure what you cannot adequately describe, and mechanisms that are unobservable and unfalsifiable cannot, by definition, be adequately described.
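For reference, the quantity being invoked is the standard Kolmogorov complexity relative to a fixed universal machine $U$:

$$K_U(x) = \min\{\, |p| \;:\; U(p) = x \,\}$$

which already makes the difficulty concrete: before you can even pose the question you need an exact string encoding $x$ of the hypothesis and an agreed-upon reference machine $U$, and even with both in hand $K_U$ is uncomputable in general, so at best you can bound it, never measure it outright.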
There are a few things I imagine you could be saying here.
I am guessing it's either 2 or 5, but my response to you will vary a lot based on which it is and the details of your viewpoint.
Take two theories about our actual universe:
A) The universe loops infinitely based on physical principles we have no access to.
B) The universe is a simulation, running in a universe we have no access to.
My argument is that none of us can break out paper and pencil and meaningfully convert the ideas behind these two statements into a formula, and then use mathematics to objectively prove that one theory is more likely to be true than the other, whether by Kolmogorov complexity, or minimum message length, or Shannon entropy, or Bayesian Occam's Razor, or any other method one might name. It seems obvious to me that no amount of analysis can extract signal from no signal.
In short, I'm arguing that when there is no evidence, there is no meaningful distinction between priors.
I assume you have some reason you think it matters that we can't use mathematics to come up with a specific objective prior probability that each model is accurate?
Edit: also, I note that I am doing a lot of internal translation of stuff like "the theory is true" into "the model makes accurate predictions of future observations" to fit into my ontology. Is this a valid translation, or is there some situation where someone might believe a true theory that would nevertheless lead them to make less accurate predictions about their future observations?
I don't think reasoned beliefs are forced by evidence; I think they're chosen. He's arguing that specific beliefs aren't a choice, any more than believing 1+1 = 2 is a choice. To support that thesis, he's claiming that the math determines that one of those is less complex than the other, and therefore the math determines that the less complex one is more likely, and therefore he did not choose to adopt it, but rather was compelled to adopt it by deterministic rules. If in fact he's mistaken about the rules, then they can't be the source of his certainty, which means it has to come from somewhere else. I think it can be demonstrated that it's derived from an axiom, not a conclusion forced by evidence.
Close enough, I think? The larger point I'm hoping to get back to is that the deterministic model of reason that seems to be generally assumed is a fiction, and that one can directly observe the holes in this fiction by closely examining one's own reasoning. You drew a distinction between "beliefs as expected consequences" and "beliefs as models determining action". I would argue that our expectations of consequences are quite malleable, and that the beliefs we choose decisively shape both the experiences we have and how we experience them.
[EDIT] - Sorry if these responses seem a bit perfunctory. I always feel a bit weird about pulling people into the middle of one of these back-and-forths, and it feels discourteous to immediately unload on them, so I try to keep answers short to give them an easy out.
The choice of term "reasoned belief" instead of simply "belief" sounds like you mean something specific and important by that term. I'm not aware of that term having any particular meaning in any philosophical tradition I know about, but I also don't know much about philosophy.
That sounds like the "anticipated experiences" meaning of "belief". I also cannot change those by sheer force of will. Can you? Is this another one of those less-than-universal human experiences similar to how some people just don't have mental imagery?
I don't think I would classify probabilistic approaches like that as "deterministic models of reason".
But yeah, I'm starting to lean towards "there's literally some bit of mental machinery for intentionally believing things, and only some people have it".
My opposite number above pointed out that some people have beliefs induced by severe mental illness, and that those beliefs are not chosen. It's a fair point, and those certainly aren't the type of belief I'm talking about. Likewise, 1+1=2 or a belief in gravity are self-reinforcing to a degree that it's probably not practical to shift them, and may not be possible at all. Most beliefs are not caused by mental illness, though, and are not as simple as 1+1=2. We have to reason about them to arrive at an answer, so "reasoned beliefs" seems like a more precise term for them.
In terms of 1+1=2 or gravity, no. I think this might be because they're too self-reinforcing, or because there's no incentive to doubt them, or both, but they seem pretty stable.
People talk about reasoning as though it's a deterministic process. They say that evidence has weight, that evidence can force conclusions. They often talk about how their beliefs aren't chosen, they just followed where the evidence led. They expect evidence to work on other people deterministically as well: when they present what they think is a weighty piece of evidence, and the other person weighs it lightly, they often assume the other person is acting in bad faith. People often expect a well-crafted argument to force someone on the other side to agree with them.
I used to believe all these things. I saw logic and argumentation as something approximating math, as 1+1=2. I thought if I could craft a good enough argument, summon good enough evidence, people on the other side would be forced to agree with me. And likewise, I thought I believed things because the evidence had broken that way.
Having spent a couple of decades debating with people, I think that model is fatally flawed, and I think believing it makes people less rational, not more. Worse, I think it interferes with people's ability to communicate effectively with each other, especially across a large values divide. Further, I think it's pretty busted even from its own frame of reference; while evidence cannot compel agreement, it can encourage it, and there is a lot of very strong, immediately available evidence that people do not actually reason the way the common narrative says they should.
I think that's a very pragmatic and reasonable position, at least in the abstract. You're in great intellectual company, holding that set of beliefs. Just look at all of the sayings that agree!
And yet! Some people do change their mind in response to evidence. It's not everyone, it might not even be most people, but it is a thing that happens. Clearly something is going on there.
We are in the culture war thread, so let's wage some culture war. Very early in this thread, you made the argument
What does replacing the Big Bang with God lose out on? I think the answer is "the entire idea that you can have a comprehensible, gears-level model of how the universe works". A "gears-level" model should at least look like
So I think the standard model of physics mostly satisfies the above. Working through:
Side note: the Big Bang does not really occupy a God-shaped space in the materialist ontology. I can see where there would be a temptation to view it that way - the Big Bang was the earliest observable event in our universe, and therefore can be viewed as the cause of everything else, just like God - but the Big Bang is a prediction (retrodiction?) that is generated by using the standard model to make sense of our observations (e.g. the redshifting of standard candles, the cosmic microwave background). The question isn't "what if we replace the Big Bang with God", but rather "what if we replace the entire materialist edifice with God".
In any case, let's apply the above tests to the "God" hypothesis.
My point here isn't really "religion bad" so much as "you genuinely do lose something valuable if you try to use God as an explanation".