Small-Scale Question Sunday for October 29, 2023

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.

(This is tangential to my main point, but just for fun: Is there a probability threshold at which it becomes justified to believe something? 2% is too low, but requiring 100% is too high--that would "commit you to a useless stance of Cartesian doubt." Is there a cutoff? If so, where is it, and why? Even if you only believe ideas at 99% probability or above, you're still accepting up to a 1% chance that your belief is false. Wouldn't it be safer to say that you simply "don't have a belief on the matter"? On the other hand, if you can believe something at 99%, why not at 80%, or 51%? Why not at, say, 30%, if all the alternatives are even less likely?)

We speak of belief as a binary matter - you either believe something or you don't - but in practice it's a matter of degrees of confidence. For any given proposition, you have some degree of confidence in its truth (even if it's near-zero), and at a certain threshold that confidence is high enough that you say you believe it. But where exactly that threshold sits is just semantics.

You say "Why is it a problem to simply state that you don't know why there is something rather than nothing or what consciousness is, and thus don't have a belief on the matter?" Good question, and I can't think of a good answer except that it seems painfully unsatisfactory to me, like asking someone starving in the desert "why can't you simply enjoy being hungry?"

Well, I'm sorry, but that's just not a good reason to believe something. That doesn't negate the real feelings you describe or the challenge of dealing with them, but it won't be convincing to anyone else as a justification for believing what you believe, nor will anyone else have any reason to think that you're justified in believing it yourself.

But I can't help but notice you didn't apply that reasoning to the next big question I mentioned: "how ought we to act?" The is/ought gap can't be bridged empirically. But it has to be bridged somehow--before you can act, you need to know how you ought to act. You can't just throw up your hands and say, "I don't know"; every deliberate action implies a value judgment.

It depends on the action. Sometimes our actions are justified based on information we have good reason to believe about the physical world (e.g., floors hold our body's weight, and putting one foot in front of the other repeatedly on this floor will soon take you to your kitchen), or about our minds (e.g., you want to walk to the kitchen because you're hungry).

But your later remarks make me think that what you're really trying to get at is essentially "how do we know how to treat other people", i.e., morality. Well, I think you already know that that's a deeply controversial and unsolved topic at an abstract level. Let's consider the approaches on offer.

Consider the religious approach to morality: that God tells us right from wrong. I think the best rebuttal to that has remained unchanged for a couple thousand years, since it was introduced by Plato, if I'm not mistaken. It runs as follows. Suppose God says killing is wrong. Did he have some reason to say that it's wrong? Or could he have just as easily said that it's always right to kill anybody else (in which case it would be right because he said it's right)? If you say either that it would still be wrong to kill even if God said it was right, or that God wouldn't/couldn't say killing is right because he had a reason for saying killing is wrong, well then we can appeal directly to the reason and skip the middleman.

Now consider the non-religious approach to morality, which uses science and reason. Let's start with science. Science can provide us information about the world and the predictable consequences of certain actions. Why is this important? Well, take witchcraft for example. Hunting witches and punishing them isn't actually irrational - if there really was a witch casting spells to harm other people, she really should be punished, or even killed! It only doesn't make sense if witchcraft isn't actually a thing. But belief in witches is nearly a cultural universal among primitive humans because the default operating system of Homo sapiens does not allow much room for the intuition that random bad shit sometimes happens. Rather, if a person you care about gets sick or your crop fails, the primitive human believes there must have been a witch that cast a spell to cause it. Today, science has afforded us actually correct explanations for events that used to be explained by witchcraft. That helps shape our morality - i.e., how we "should" act - in an instance like this.

And science's role in morality is far more extensive than finding better explanations for calamities than witchcraft. Again, it provides a more informed understanding of the physical world, and a large part of determining what actions are moral is going to be contingent upon facts about the world that we just don't know without science. A lot of that will come down to scientific knowledge about the state of brains and the fact that brain states constitute experiences like pain (and thus whether a certain action will predictably cause pain), but it can also include things like understanding the effects of certain pollutants on our bodies and ecosystems (and thus whether dumping certain waste will harm others).

But science can't bridge the is-ought gap. It can tell us "this action causes another person pain", but not "you therefore shouldn't take this action". That's where reason comes in.

Suppose someone were to say, "Why should I care if I cause you pain or kill you? Your pain isn't my pain, and besides, I'd like to take your possessions after I kill you." Well, he won't convince anyone else that only his suffering matters and no one else's, so he is in no position to object if others were to treat him that way. Since no one wants to be treated that way, and since one's power over others is uncertain (tomorrow you might be in a position to be killed by a bigger man or a larger mob), it's in everyone's interest to collectively agree that randomly killing and pillaging is wrong.

Or suppose someone says, "I don't think it's immoral to inflict cruel and torturous punishment on this bread thief because we need to deter criminals. The harm caused by inflicting pain on him is less than the harm caused by undeterred criminals." Indeed, criminal deterrence is a defensible rationale for causing pain. But if the goal is deterrence, then any harm inflicted in excess of that which is necessary to deter criminals is arguably pointless harm and should be avoided. And surely short imprisonment is enough deterrence for theft. Furthermore, there's a problem of perverse incentives: if a man knows he'll be tortured and executed for stealing a loaf of bread, well then he might as well kill the shopkeeper while he's at it. Since there can be no greater punishment than what is already expected for the theft, he is incentivized to maximize his chances of getting away with it by killing the witness. Therefore, it makes more sense to have a sliding scale of punishment for criminal activity.

Those aren't scientific or religious arguments, but the use of such reason, together with the better understanding of the world that we get from science, provides us with the building blocks for morality. Now, people who have read way too much Hume might object that it's still smuggling in certain first principles like "all else being equal, pain is bad". But you can play that game with anything. How do we know that the law of noncontradiction is compelling - that A cannot equal not-A? Well, it just... sorta... is. You have to pull yourself up by your bootstraps at some point and stop searching for a deeper proposition that isn't self-justifying. And if someone is unconvinced by the starting point that "all else being equal, pain is worse than no pain", then I think that person is either someone with way too much education who likes playing games, or not an honest interlocutor.

I'm interested to know whether you consider yourself a moral realist or not.

Not really. I think we all just sort of woke up on this backwater planet in this mysterious universe and are just collectively fumbling our way towards making life better for ourselves using the crude cognitive toolkits we evolved with. That includes figuring out facts about ourselves and the world and using reason to try and persuade each other of the best state of affairs to strive towards.

I do think we have an evolved sense of morality. It seems obvious to me that moral intuitions are innate, and they're certainly a human universal. That doesn't mean those evolved intuitions are actually defensible, though, or provide a good basis for morality. Sometimes they are (e.g., indignation at unfairness) and sometimes they're not (e.g., the lives of that other tribe have no value because they're Others).

Sorry for the late reply; I've had a busy couple of days. Thanks for the thorough response!

Consider the religious approach to morality: that God tells us right from wrong. I think the best rebuttal to that has remained unchanged for a couple thousand years, since it was introduced by Plato, if I'm not mistaken. It runs as follows. Suppose God says killing is wrong. Did he have some reason to say that it's wrong? Or could he have just as easily said that it's always right to kill anybody else (in which case it would be right because he said it's right)? If you say either that it would still be wrong to kill even if God said it was right, or that God wouldn't/couldn't say killing is right because he had a reason for saying killing is wrong, well then we can appeal directly to the reason and skip the middleman.
You're right, of course, that if morality had some basis more authoritative than God, then God would be a mere "middleman" and would not be necessary to the determination of moral truths. But I don't agree that "it would still be wrong to kill even if God said it was right, or that God wouldn't/couldn't say killing is right because he had a reason for saying killing is wrong." I believe God's nature is the source of goodness; you can't appeal to some standard of goodness higher than God. But it also isn't true to say that God could arbitrarily change good to evil or vice versa; God--being perfect--has no reason to change his nature, and--being omnipotent--his nature can't be changed by anything else. An action is "good" insofar as it conforms to the immutable will of God.

Your "witchcraft" example conflates a factual dispute for a moral dispute: science can tell us whether or not the village witch is guilty of destroying the crops (a factual question), but it can't tell us whether or not people who destroy crops deserve to be punished (a moral question). I think you acknowledge this, since you agree that science can't derive an "ought" from an "is."

Reason can justify an "ought" statement, but only by presupposing a condition: "you ought to exercise if you want to be healthy; you ought to punish criminals if you want to deter crime" etc. So I don't think your examples work:

Suppose someone were to say, "Why should I care if I cause you pain or kill you? Your pain isn't my pain, and besides, I'd like to take your possessions after I kill you." Well, he won't convince anyone else that only his suffering matters and no one else's, so he is in no position to object if others were to treat him that way. Since no one wants to be treated that way, and since one's power over others is uncertain (tomorrow you might be in a position to be killed by a bigger man or a larger mob), it's in everyone's interest to collectively agree that randomly killing and pillaging is wrong.
Plenty of powerful people can say, with a high degree of confidence, that they will *not* be killed tomorrow by a bigger man or a larger mob. Genghis Khan killed and pillaged to his heart's content, and he lived well into his sixties and, by most accounts, died by falling off his horse and/or contracting an illness. Meanwhile, plenty of moral people end up getting killed or pillaged *in spite of* always behaving as if killing and pillaging are wrong. If morality has no better basis than this sort of social contract theory, then the Genghis Khans of the world have no use for it.

Earlier, you (correctly) pointed out that, if God is a middle man between humans and morality, we can just skip God and go straight to morality. But your own view of morality seems to treat it as a "middle man" for rational self-interest. If Genghis Khan says, "Why don't I skip the morality, and go straight for my own rational self-interest (i.e. killing and pillaging with impunity, because I enjoy it and I'm powerful enough to get away with it)?", how could you dissuade him?

Similarly, while I agree humans generally have evolved a "moral intuition," I don't agree with you that it's "universal." Psychopaths seem to be lacking the moral compunctions that are innate in ordinary humans. And while plenty of psychopaths end up dead or in prison, intelligent and capable psychopaths often become wildly successful. It seems like, above a certain level of intelligence, psychopathy is a very useful trait (which might explain why it hasn't been selected out of existence). So, if you can't appeal to Genghis Khan's moral intuitions, because he wasn't born with them--and if you can't appeal to his rational or game-theoretic self-interest--how do you convince him not to kill and pillage?

The only way I can think of is to convince him that killing and pillaging are not desirable because they are not good. And we know they are not good, because God is good and God is opposed to killing and pillaging. If Genghis Khan continues to kill and pillage, his life will be unfulfilling because he has not followed what is good, and after his death he will be punished by God for disobeying his will.

Now, you may not believe these things, and Genghis Khan may not believe them either. In that case, we're no better off than we would be under your system. But we're no worse off, either. And, at the margins, there are some rare instances where religious appeals appear to have moved otherwise implacable pillagers and conquerors; we'll never know what Pope Leo said during his meeting with Attila the Hun, but we do know the latter subsequently called off the invasion of Rome.

But my arguments about the religious basis of moral truths are, obviously, less relevant to moral non-realists like you than to, say, atheists who still believe in objective morality, like a lot of utilitarians (Scott Alexander's Utilitarian FAQ, for example, never actually explains why anyone should assign value to other people; this seems like it's kind of the entire crux of utilitarianism, but Scott brushes it off as a "basic moral intuition" (section 3.1)). If you're willing to bite the bullet that morality is just a spook, then you have no reason to be troubled by materialism's failure to establish an objective basis for morality. But you also don't have much room to criticize people who are convinced of objective morality, if their convictions turn them away from a materialism that's inadequate to justify moral truths.

Thank you for the comment, it makes me a little more confident that the sanity waterline rises as well as falls when I see other people articulate much the same arguments as I would have made myself.