
Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Listen on iTunes, Stitcher, Spotify, Pocket Casts, Podcast Addict, and RSS.


In this episode, we talk about white nationalism.

Participants: Yassine, Walt Bismarck, TracingWoodgrains.

Links:

Why I'm no longer a White Nationalist (The Walt Right)

The Virulently Unapologetic Racism of "Anti-Racism" (Yassine Meskhout)

Hajnal Line (Wikipedia)

Fall In Line Parody Song (Walt Bismarck)

Richard Spencer's post-Charlottesville tirade (Twitter)

The Metapolitics of Black-White Conflict (The Walt Right)

America Has Black Nationalism, Not Balkanization (Richard Hanania)


Recorded 2024-04-13 | Uploaded 2024-04-14


By humanist rule utilitarianism, I will here mean, roughly speaking, the moral stance summarized as follows:

  1. People experience varying degrees of wellbeing as a function of their circumstances; for example, peace and prosperity engender greater wellbeing for their participants than war and famine.
  2. As a matter of fact, some value systems yield higher levels of human wellbeing than others.
  3. Moral behavior consists in advocating and conforming to value systems that engender relatively high levels of human wellbeing.

Varieties of this position are advocated in academic philosophy, for example, by Richard Brandt, Brad Hooker, and R.M. Hare -- and in popular literature, for example, by Sam Harris in The Moral Landscape and (more cogently, in my opinion) by Steven Pinker in the final chapter of his book Enlightenment Now. Since it seems to be the most prominent form of utilitarianism in circulation these days, for the remainder of this essay I will simply write utilitarianism in place of humanist rule utilitarianism. I acknowledge this is not ideal, but I hold it to be the least of three evils -- the other two being to repeat the twelve-syllable phrase "humanist rule utilitarianism" ad nauseam, or to use an acronym (HRU).

I do not believe that rule utilitarianism cuts much ice as a moral stance, and this essay will explain why I hold that opinion.

1. The "Bo Diddley Question"

"All you need is love" -- John Lennon

Maybe, but the real question is,

"Who do you love?" -- Bo Diddley

In his book Enlightenment Now, psychologist Steven Pinker hedges on proposition (3) right off the bat, writing, "Despite the word's root, humanism doesn't exclude the flourishing of animals" [p. 410]. Well, professor, it sho' 'nuff does exclude the flourishing of animals! If the ultimate moral purpose is to promote human flourishing, then non-humans are excluded from consideration in the ultimate moral purpose. To be charitable, I suppose Pinker means that it is consistent with (3) that there is a secondary moral purpose in maximizing the wellbeing of animals. But really, a Harvard professor ought not to be an intellectual charity case, and he should have said what he meant. I speculate that the reason he did not say what he meant is that it opens a can of worms that is rather uncomfortable for the humanist position. To wit, either animals count as much as humans in our moral calculus, or they do not. If animals count as much as humans, then most of us (non-vegetarians) are in a lot of trouble. On the other hand, if animals don't count as much as humans, then the reason they don't, carried to its logical conclusion, is liable to be the reason that some humans don't count as much as others. For example, do (non-human) animals count less because they are less intelligent? In that case, the utilitarian is obliged to explain why the line is drawn just where it is, and, indeed, why deep-fried dumbass shouldn't be an item on the menu at Arby's. If you claim to found a moral theory on objective reason, then you should actually do it -- and one of the first things your theory should account for is "speciesism", that is, why cannibalism is forbidden but vegetarianism is optional (unless you hold that cannibalism is not forbidden, and/or vegetarianism is not optional).

In The Moral Landscape, and in his TED talk on the same thesis [https://www.ted.com/talks/sam_harris_science_can_answer_moral_questions?language=en], Sam Harris spends most of his time defending propositions (1) and (2) of the utilitarian position, but he does briefly address the issue of speciesism, saying, "If we are more concerned about our fellow primates than we are about insects, as indeed we are, it's because we think they're exposed to a greater range of potential happiness and suffering." What he does not do is place the range of animal experience on the same scale as human experience to compare them, or give us any reason to think the bottom of the scale for humans is meaningfully (if at all) above the top of the scale for our fellow primates -- not to mention pigs, cows, and pigeons. Moreover, he gives no reason whatsoever for why we ought to draw a big red line at some particular place on that scale, labelled "Not OK to trap, shoot, eat, encage, or wear the skins of anything above this line." [Note: the last two paragraphs were added on 4/18/23].

Regarding propositions (1)-(3) above, at the end of the day, my response to the conjunction of (1) and (2) is duh, and my response to (3) is that it leaves unanswered the central question of morality: in the words of the great moral philosopher Bo Diddley, Who do you love? Speciesism is only the thin end of the wedge: not only do I generally value the wellbeing of people more than that of cows and rabbits, I also value the wellbeing of my people more than I value that of other people. Thus, premises (1) and (2) do not imply conclusion (3) for the following reasons:

  1. I value the wellbeing of people (and animals) in my identity group over that of people (and animals) outside my identity group, and it is only right that I should do this.
  2. As a matter of fact, in moral decisions, the wellbeing of one group of which I am a member often trades off against that of another group of which I am also a member.
  3. The purpose of moral precepts, largely if not mainly, is to manage tradeoffs between our concerns for the welfare of different groups, with different degrees of shared identity, to all of which we simultaneously belong (e.g. self, family, clan, tribe, nation, humanity, the brotherhood of conscious creatures, etc.).

Now in response to (1), you might say that I am a scoundrel. In response to that, I say that's just, like, your opinion, man [https://youtube.com/watch?v=j95kNwZw8YY]. Philosophers such as Peter Singer, and popular writers like Steven Pinker, insist that I must be impartial between the wellbeing of my people on the one hand, and that of homo sapiens at large on the other. For example, Pinker writes, "There is nothing magic about the pronouns I and me that would justify privileging my interests over yours or anyone else's" [Enlightenment Now, p. 412]. To that I reply, why on Earth would I need magic, or even justification, to privilege my own interests over yours? Of course I privilege my interests over yours, and almost certainly vice versa. In fact, unless you are a friend of mine, I privilege my dog's interests over yours. For example, if my dog needed a life-saving medical procedure that costs $5000, I would pay for it, but if you needed a life-saving medical procedure that costs $5000, I probably would not pay for it -- and if an orphan from East Bengal needed a life-saving medical procedure that costs $5000 (which one probably does at this very moment), I would almost certainly not pay for it, and neither would you (unless you are actually paying for it).

I believe I am not alone in being more concerned with the welfare of my dog than with that of a random homo sapiens. Consider, for example, that the average lifetime cost of responsibly owning a dog in the United States is around $29,000 -- while, according to GiveWell.org, the cost of saving the life of one unfortunate fellow man at large by donating to a well-chosen charity is around $4500 [https://www.givewell.org/how-much-does-it-cost-to-save-a-life]. If those figures are correct, it means that if you own a dog, then you could have allocated the cost of owning that dog to save the lives of about six people on average (and if the figures are wrong, something like that is true anyway). Now, Steven Pinker himself once tweeted that dog ownership comes with empirically verifiable psychological benefits for the dog owner [https://twitter.com/sapinker/status/1344411594709729283]. In his excitement over those psychological benefits, I suppose, he neglected to mention that it also comes with the opportunity cost of six (or so) third world children dying in squalid privation. If Pinker really believes, as he claims to believe, that "reducing the suffering and enhancing the flourishing of human beings is the ultimate moral purpose," he sure isn't selling it very hard. But then again, no one in their right mind is.
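For what it's worth, here is the back-of-the-envelope arithmetic behind "about six people," taking the two figures quoted above at face value (both are rough estimates, so the result is only approximate):

\[
\frac{\$29{,}000 \ \text{(lifetime cost of owning a dog)}}{\$4{,}500 \ \text{(estimated cost to save one life)}} \approx 6.4 \ \text{lives}
\]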

Some people become agitated if it is suggested that they care little about the welfare of human beings qua human beings. I myself am at peace with it. Indeed, if a man did let his family dog pass away, when he could have saved the dog's life for a few thousand dollars, and he spent that money instead to save the life of a foreigner whom he had never met, then, all else being equal, I would prefer not to have him as a countryman, let alone a friend. I submit that, as a matter of fact, a nation made up of such quixotic pipsqueaks would perish from the Earth in a matter of three or four generations.

A perceptive utilitarian might say that the last sentence is exactly the point: impartiality between the wellbeing of one's ingroup and outgroup is harmful to the communities that adopt it, and that is why real utilitarianism does not require impartiality (contra Pinker and Singer). In fact, that is getting somewhere -- but that somewhere is not one step closer to resolving the Bo Diddley question (who do you love?). One might suppose that utilitarianism could be salvaged by tailoring it to favor the wellbeing of "the ingroup", but the fly in that ointment is that there is no such thing as "the ingroup". On the contrary, human nature being what it is, groups at all levels split into factions which then try to have their way with each other -- from nuclear families, to PTA boards, to political parties, to nations, to the whole of humanity. A code of conduct that is good for my community at one level might subtract from the good of a smaller, tighter community of which I am also a member. That is a basic fact of the human condition.

Such self-interested factionalism is the elephant in the room in every negotiation, at every level of community, from international diplomacy down to an argument over who is doing the dishes after supper. Thus, it is a mistake to picture a code of conduct that benefits "the community": each person is, after all, a member of multiple overlapping communities of various sizes and levels of cohesion, whose interests are frequently in conflict with each other. Now hear this: when we are looking for a win-win solution that benefits everyone at all levels without hurting anyone at any level of "our community", this is an engineering problem, or a social engineering problem, but not an ethical problem. It is the win-lose scenarios (or the win-more-win-less scenarios), which trade the wellbeing of one level of my community against that of another, that fall into the domain of ethics. Of course we are more concerned for the welfare of those whose identities have more in common with our own -- but how steep should the drop-off be as a function of shared identity? Should it converge to zero for humanity at large? Less than zero for our enemies? How about rabbits and cows? That's the Bo Diddley question! That is a central problem, if not the central problem, of bona fide ethical discourse -- and it is a question about which utilitarianism has nothing to say, except to beg the question from the outset.

2. No Moral Verve

At the end of the day, the conversation on ethics should come to something more than chalk on a board. When the chalk dust settles, if we have done a decent job of it, we should bring away something that can inspire us to rise to the call of duty when duty gets tough. The fatal defect of utilitarianism in this regard is that practically no one -- neither you, nor I, nor Steven Pinker, nor Sam Harris, nor John Stuart Mill himself -- actually gives a leaping rat's ass about the suffering or the flourishing of homo sapiens at large. Such was eloquently voiced by Adam Smith, and his statement is worth quoting at length:

Let us suppose that the great empire of China, with all its myriads of inhabitants, was suddenly swallowed up by an earthquake, and let us consider how a man of humanity in Europe, who had no sort of connection with that part of the world, would be affected upon receiving intelligence of this dreadful calamity. He would, I imagine, first of all, express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labours of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general. And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquility, as if no such accident had happened. [Smith (1759): The Theory of Moral Sentiments]

Here is the point. As C.S. Lewis wrote, "In battle it is not syllogisms [logic] that will keep the reluctant nerves and muscles to their post in the third hour of the bombardment" [The Abolition of Man]. Lewis was right about this -- and, while we are making a list of things that do not inspire people to rise to the call of duty when duty gets tough, we can include on that list any concern they might claim to have for the welfare of human beings at large.

When I write that utilitarianism has nothing useful to say about real world ethical problems, I mean it. Of course one might give evidence about the impacts of some rule or policy, which might then inform whether we want to adopt that rule or policy -- but I doubt the following words have ever been uttered in a real debate over policy or ethics: "I concede that your policy/rule/value-premise, if adopted, would benefit every segment of the community more than mine does, but to Hell with that." The fact is that everyone is a utilitarian of some sort when they debate policies and ethical mores with people with whom they share common objectives -- whether or not they have read one fancy word of John Stuart Mill, Jeremy Bentham, Peter Singer, Sam Harris, or Steven Pinker. Thus, to the degree that utilitarianism has any force in the real world, it adds nothing to the conversation that wasn't already embedded in common sense. On the other hand, to the degree that utilitarianism is not redundant with common sense, it has no motivational force, even for its professed adherents -- especially if they own a dog.

3. The Voodoo Factor

On top of ignoring the multilayered, competitive nature of the human condition, and on top of having no practical motivational force even for its professed adherents, another problem with rule utilitarianism is the voodoo it invokes to connect (a) performing a particular action with (b) what would happen, counterfactually, if everyone followed the salient rule that permits the action. In the simplest possible example, if I steal a tootsie roll from a convenience store, then, in immediate material terms, this is good for me and bad for the store owner. In the world of mystical utilitarian counterfactuals, if everyone stole everything all the time, then everyone (including me) would clearly be worse off -- but in the actual world, my stealing a bite of candy is not going to cause everyone to steal everything all the time, or, probably, cause anyone else to steal anything else ever. Even if an individual tootsie roll pilferage did have some minuscule ripple effect on society, I would still expect the material impact on me personally to be less than the cost of the candy I stole. To put it more generally, when someone steals something and gets away with it, they do not reasonably expect to lose net income as a result. So, why on Earth should I care about what would happen in the counterfactual situation where everyone stole everything all the time? I cannot imagine an objective reason why I should.

Perhaps there is some deep metaphysical argument that establishes, on an objective basis, that one ought to behave the way one wishes others in "one's community" to behave (if, again counterfactually, there were such a thing as "one's community") -- or perhaps there is some kind of cosmic karma stipulating that what goes around invariably comes around on this Earth, but (1) I cannot imagine what that metaphysical argument would be, (2) the world doesn't look to me like it works that way, and (3) neither Pinker, nor Harris, nor Singer, nor Bentham, nor Mill actually gives such a metaphysical argument, or attempts to show that the world does work that way.

Conclusion

I submit that the position of utilitarianism is not only weak, but so evidently preposterous that its firm embrace requires an act of intellectual dishonesty. I can say this without contempt, because less than ten years ago I myself espoused utilitarianism. I knew then and I know now that I was not being intellectually honest in espousing this view. To my credit, I could not bring myself to write a defense of utilitarianism, even though I tried -- because I could not come up with an argument for it that I found convincing. Yet, I presumed that I would eventually be able to produce such an argument, and I did state utilitarianism as my position, without confessing that I could not defend the position to my own satisfaction.

I further submit that programs like utilitarianism are not only mistaken but harmful -- and not just a little bit harmful, but disastrously harmful, and that the engendered disaster is unfolding before our eyes. The problem is not that utilitarians are necessarily bad people; it is that, if they are good people, they are good people in some sense by accident: reflexively mimicking the virtues and values of their inherited traditions, while at the same time denigrating tradition, and mistaking their moral heritage for something they have discovered on their own. As John Selden wrote,

Custom quite often wears the mask of nature, and we are taken in [by this] to the point that the practices adopted by nations, based solely on custom, frequently come to seem like natural and universal laws of mankind. [Natural and National Law, Book 1, Chapter 6]

The problem with subverting the actual source of our moral conventions and replacing it with a feeble rationalization is this: each generation naturally (and rightly) pushes back against their inherited traditions, and pokes to see what is underneath them. If the actual source of those traditions has been forgotten, and they are presented instead as being founded on hollow arguments, the pushback will blow the house down. Sons will live out the virtues of their fathers less with each passing generation, progressively supplanting those virtues with the unrestrained will of their own flesh. That is where we are now -- and shallow, impotent secular theories of morality are part of the problem.

Transnational Thursday is a thread for people to discuss international news, foreign policy or international relations history. Feel free as well to drop in with coverage of countries you’re interested in, talk about ongoing dynamics like the wars in Israel or Ukraine, or even just whatever you’re reading.

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.

The Wednesday Wellness threads are meant to encourage users to ask for and provide advice and motivation to improve their lives. It isn't intended as a 'containment thread' and any content which could go here could instead be posted in its own thread. You could post:

  • Requests for advice and / or encouragement. On basically any topic and for any scale of problem.

  • Updates to let us know how you are doing. This provides valuable feedback on past advice / encouragement and will hopefully make people feel a little more motivated to follow through. If you want to be reminded to post your update, see the post titled 'update reminders', below.

  • Advice. This can be in response to a request for advice or just something that you think could be generally useful for many people here.

  • Encouragement. Probably best directed at specific users, but if you feel like just encouraging people in general I don't think anyone is going to object. I don't think I really need to say this, but just to be clear: encouragement should have a generally positive tone and not shame people (if people feel that shame might be an effective tool for motivating people, please discuss this so we can form a group consensus on how to use it rather than just trying it).

There has been a lot of CW discussion on climate change. This is an article written by someone who used to strongly believe in anthropogenic global warming and then looked at all the evidence before arriving at a different conclusion. The article goes through what they did.

I thought a top-level submission would be more interesting, since climate change is such a hot-button topic, and it would be good to have a dedicated spot to discuss it for now. I have informed the author of this submission; they said they will drop by and engage with the comments here!