Culture War Roundup for the week of July 17, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

That wasn't what I meant. Your axioms are already primary to you. So if we share an axiom, that axiom is primary to you. I didn't say anything about my experience. There is just an overlap of a preexisting condition of primacy. I am not saying my axioms have any effect on what is primary to you.

Your axioms are primary to you. Let's say there is an axiom called axiom A. If axiom A is an axiom that you hold, then it is primary to you. If I hold axiom A, then axiom A is primary to me, because it is one of the axioms I hold. Therefore, if you and I both hold axiom A, then it is primary for both of us and acts as primary, substantial moral common ground. You don't have to care about my experience or my moral axioms whatsoever. But if we do share them, then those axioms are primary for both of us. That's all I was saying. And with that common ground we can then communicate about morality. That is the basis of a shared morality. Even if you don't agree with me that moral axioms are subjective, the ones we share are still primary to both of us.

That two people happen to share some set of subjective things does not somehow elevate them to being any more primary.

No elevation is needed. Each person already believes the thing; therefore, the thing is primary to them. I am not saying that there is a magical effect creating new primacy from their shared moral axioms. I am saying that all of their moral axioms are primary to them, so if they share them, they have common ground and will agree that those axioms are primary.

Looking at those twenty-two people, can I say that cilantro is "good" or "bad"? I think that even trying would be an error in language.

I am saying that if all of those people share the axiom that cilantro is good, then they can all agree that cilantro is good. That's all.

can I say that cilantro is "good" or "bad"

If you are one of these people with the axiom that cilantro is good, then you will say that cilantro is good. If you hold an axiom that it is bad, you will say it is bad.

If you are a third party with no opinion about cilantro, then I think the moral status of cilantro will be undefined for you, or perhaps it will seem like a weird and alien thing to attach moral status to. As it does for me in real life.

I think that even trying would be an error in language.

Maybe it would be an error in language for you, as a third party with no opinion on cilantro, to say that cilantro is good or bad. But it would certainly not be an error of language for you to say that those people over there believe that cilantro is good. That would be a simple description of the reality that those people believe cilantro is morally good.

And for that strange group of people who believe cilantro is morally good, it would not be an error of language for them to say "cilantro is morally good," because that is what they believe. You would say that they are incorrect, but there have been lots of humans with moral axioms you would say are incorrect or bizarre, and I doubt you would normally say that their expressions of their weird beliefs are an error of language.

But it would certainly not be an error of language for you to say that those people over there believe that cilantro is good.

There's twenty-one people over there that like cilantro and one person who doesn't. I can't actually say that "they" believe that cilantro is good. In any event, you changed what it is that I said would be an error in language. I asked, "Looking at those twenty-two people, can I say that cilantro is "good" or "bad"? I think that even trying would be an error in language."

There's twenty-one people over there that like cilantro and one person who doesn't. I can't actually say that "they" believe that cilantro is good.

Ok, sorry if I miswrote that or wasn't clear enough. You can say that they, the 21 people who believe cilantro is good, believe that cilantro is good. That seems essentially definitionally true and not an error of language.

In any event, you changed what it is that I said would be an error in language. I asked, "Looking at those twenty-two people, can I say that cilantro is "good" or "bad"? I think that even trying would be an error in language."

I don't think I changed what you said; I made it clear what I thought. If there are groups of people who think cilantro is good or bad, that does not give you any ability to extract, from the fact that they believe those things, the position that cilantro is good or bad. Their moral conclusions are largely irrelevant to whether you can say cilantro is good or bad; that would have to be based on your own axioms. Most likely you wouldn't think that cilantro has a moral weight, but I have no problem imagining a culture that does, like this theoretical group.

If your axioms are that cilantro is morally good, then it is not an error of language to say that cilantro is morally good.

However, as I said earlier, it is logically incoherent to say that it is objectively true that cilantro is morally good. And definitely logically incoherent to say that it is morally truer that cilantro is good than that cilantro is bad. What is objectively true is that some of these theoretical people believe that cilantro is good, and some of them believe it is bad. That is objectively true. Determining the truth of the statement "cilantro is morally good" is where logical coherence breaks down.

You can say that they, the 21 people who believe cilantro is good, believe that cilantro is good.

Sure. You can say that the Nazis believed that eliminating Jews is good. Nothing interesting seems to follow from this. On your view, there are no grounds on which we can say, "The Nazis were wrong, and exterminating Jews is not good." We can only say, "There are some people over there who think that exterminating Jews is good and some people over there who think that exterminating Jews is bad. Nothing interesting seems to follow from this."

Nothing interesting seems to follow from this.

I disagree, but interesting or not, my account of the nature of morality more closely aligns with reality and has more explanatory power.

From the evidence we have, it appears that morality is relative. I am making the argument that just because morality is relative, that doesn't rob us of morality. It doesn't lead to moral nihilism and it doesn't decrease the relevance of morality in our lives.

On your view, there are no grounds on which we can say, "The Nazis were wrong, and exterminating Jews is not good." We can only say, "There are some people over there who think that exterminating Jews is good and some people over there who think that exterminating Jews is bad. Nothing interesting seems to follow from this."

I think I have made a strong argument that this is not a necessary result of moral relativism.

The behavioral result is identical in my account and your moral realist account. If I am a moral relativist who thinks the Nazis are wrong, I will say, "There are some people over there who think that exterminating Jews is good and some people over there who think that exterminating Jews is bad." I can also say, "I am one of the people who thinks that the Nazis are wrong and exterminating Jews is bad," and I can act accordingly to stop their abhorrent behavior.

Nothing has changed behaviorally from your account, as both the Allies and the Nazis are going to behave the same regardless. Even if I were a moral realist, the Nazis were going to act in line with their fucked-up beliefs. I was still going to act in accordance with my beliefs.

The difference between the accounts is their explanatory power. Moral relativism doesn't have the issue of needing to find a justification for moral axioms that, as far as I can tell, are fundamentally not possible to justify objectively. Can you explain to me how you can justify a moral axiom without relying on another moral axiom?

The behavioral result is identical in my account and your moral realist account.

This is going to depend on things like the determinism/compatibilism/free will debate. It cannot be freely concluded.

The difference between the accounts is their explanatory power. Moral relativism doesn't have the issue of needing to find a justification for moral axioms that, as far as I can tell, are fundamentally not possible to justify objectively.

This is not what it means to have more explanatory power. In fact, if it were, we could on similar grounds jettison the entire scientific endeavor for objective physical reality. No need to go to the trouble of looking for a justification when we can just happily settle for the subjectivist view.

This is going to depend on things like the determinism/compatibilism/free will debate. It cannot be freely concluded.

I'm not sure I believe that actual compatibilists exist, but otherwise I guess that's fair. I'll think more about how that debate interacts with this one.

This is not what it means to have more explanatory power.

I think it kind of is, but that's a larger argument. Since this doesn't feel like the core of the argument, how about I start by just saying it's more coherent and has at least as much explanatory power as the alternative theory. I think to argue explanatory power in depth, I would need to know more about what you think moral realism vs. moral relativism predicts, which you have said would require pulling in the debate over determinism/free will.

In fact, if it were, we could on similar grounds jettison the entire scientific endeavor for objective physical reality. No need to go to the trouble of looking for a justification when we can just happily settle for the subjectivist view.

That's not true.

Science is different in that it has no need to be justified by arbitrary axioms. It has utility as a justification, which I think we would both agree is not a valid justification for morals. All science needs to do to show that it is better science is to work.

The scientific endeavor can be tracked via its utility. If my opponents and I have different science, but their science makes better bombs and medicine, then I should reconsider my science.

But if my opponents and I have different morals, and their morals make better bombs and medicine (let's say they use children in their mines or sacrifice children to create a working immortality potion), that is not grounds to reconsider my morals. Science is judged on utility; morality is not.

Moral relativism doesn't have the issue of needing to find a justification for moral axioms that, as far as I can tell, are fundamentally not possible to justify objectively. Can you explain to me how you can justify a moral axiom without relying on another moral axiom?

I would still like an answer to this, please.

I'm going to have to back out of this soon, because I can now tell that you've been too steeped in the New Atheists. There's not going to be much value in proceeding beyond simply suggesting that you spend a bit more time in some philosophy courses.

You've run absolutely roughshod over centuries of philosophical underpinnings of science, plus you've come to a plainly wrong conclusion to boot. Not a single word on what the actual object of science is, nor why such a thing should correlate in any way to "utility", whatever that means. If you lived in a Matrix where the only thing that seemed to bring you an ill-defined "utility" was pressing the experience-machine-go-heroin button, I guess that would be the proper domain of science or something.

Moral relativism doesn't have the issue of needing to find a justification for moral axioms that, as far as I can tell, are fundamentally not possible to justify objectively. Can you explain to me how you can justify a moral axiom without relying on another moral axiom?

I would still like an answer to this, please.

I'm sort of proceeding by reductio ad absurdum. Seeing how your test here would play out when turned against something you like. You seem vastly less willing to be even a tenth as stringent in favor of bounding over giant buildings in a single leap (of faith).

I think you are pigeonholing me really incorrectly.

because I can now tell that you've been too steeped in the New Atheists.

I really, really am not. For one, aren't they all stridently opposed to moral relativism?

I have no love for them as an ideological group and also have not read much of their stuff. Looking up the people who define that group, I can honestly say that while I have heard of some of them, the only one I have really read at all is Dennett, who I do enjoy. But I don't agree with their beliefs and am not that familiar with their thoughts. Clearly they oppose religion and the things that come with it, and I am deeply in favor of religion.

I do not think my beliefs line up with the New Atheists in general. Wasn't someone earlier in this thread saying that Sam Harris was trying to compose an objective system of morality based on naturalism? That flies in the face of my entire position.

There's not going to be much value in proceeding beyond simply suggesting that you spend a bit more time in some philosophy courses... You've run absolutely roughshod over centuries of philosophical underpinnings of science

Dude, mean. I'm pretty familiar with the philosophical literature. It's embarrassing to retreat to an appeal to authority, but I went pretty far into the philosophy class tree in undergrad, at a college with a well-respected philosophy program. I took lots of upper-level philosophy courses. I don't have a philosophy PhD, but I'm also not green by any measure. I guess I reject some of the established positions of analytic philosophers, but I have read them, and I have a lot of love for the continental literature as well. I don't think a lack of knowledge of the canon is the issue here.

Not a single word on what the actual object of science is, nor why such a thing should correlate in any way to "utility", whatever that means. If you lived in a Matrix where the only thing that seemed to bring you an ill-defined "utility" was pressing the experience-machine-go-heroin button, I guess that would be the proper domain of science or something.

Ah, ok, I think I see the confusion. You're interpreting my use of utility the way economists or utilitarians use it: essentially as human pleasure (I know they quibble about the exact meaning, but something in that space). That was not what I meant at all. I meant it in the informal or scientific sense of a description of the degree to which something is practically useful. If your science makes tall buildings that don't fall down, medicine that heals the sick, and bombs that explode well, then it is good science. Good science correctly predicts the material world and successfully provides control over it. The more it does those things, the better science it is. I wasn't trying to refer to utility as human happiness/pleasure at all.

You've run absolutely roughshod over centuries of philosophical underpinnings of science

I really don't think I have. Science's core objective is to provide us with a reliable and predictive understanding of the natural world. Its success is best measured by its utility, and in a scientific context, utility refers to the practical applications of scientific theories and findings. That might not be the only goal of science or the method of science, but it's a very strong measure of its success. If your science doesn't work when applied to physical experiments, you go back to the chalkboard. That position does not "run roughshod" over the philosophical underpinnings of science.

With that explained, do you feel your Matrix thought experiment is still relevant here? I think it was based on the assumption that I meant utility the way utilitarians do, but if I am misunderstanding, please tell me.

I'm sort of proceeding by reductio ad absurdum. Seeing how your test here would play out when turned against something you like. You seem vastly less willing to be even a tenth as stringent in favor of bounding over giant buildings in a single leap (of faith).

I'm not sure I follow what you are saying. Is the reductio ad absurdum argument you are making the Matrix thought experiment? If not, can you lay the argument out again, please?

when turned against something you like.

What thing that I like?
