Culture War Roundup for the week of June 30, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

After the Zizians and the efilist bombing, I have tried to pay more attention to the intersection of ethical veganism, rationalists, and nerdy utilitarian blogs.

A Substack post titled "Don't Eat Honey" makes the argument that buying or consuming honey is unethical, for insect-suffering-at-scale reasons. According to the essay, bees, like livestock, suffer quite a lot at the hands of beekeepers, and a kilogram of honey represents the labor of a great many bees. Thus the title: don't eat honey.

The median estimate, from the most detailed report ever done on the intensity of pleasure and pain in animals, was that bees suffer 7% as intensely as humans. The mean estimate was around 15% as intensely as people. Bees were guessed to be more intensely conscious than salmon!

If we assume conservatively that a bee’s life is 10% as unpleasant as chicken life, and then downweight it by the relative intensity of their suffering, then consuming a kg of honey is over 500 times worse than consuming a kg of chicken! And these estimates were fairly conservative. I think it’s more plausible that eating honey is thousands of times worse than eating comparable amounts of chicken.
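To make the shape of that calculation concrete, here is a minimal sketch. Every number below is a placeholder of mine, not the essay's actual figure, so the output is purely illustrative:

```python
# Sketch of the structure of the quoted comparison. All inputs are
# hypothetical placeholders, not the essay's actual figures.

bee_days_per_kg_honey = 75_000  # assumed: bee-days of labor behind 1 kg of honey
chicken_days_per_kg = 25        # assumed: chicken-days of life behind 1 kg of meat

unpleasantness = 0.10        # the quoted assumption: bee life is 10% as unpleasant as chicken life
intensity_downweight = 0.15  # the quoted downweight for how intensely bees suffer

ratio = (bee_days_per_kg_honey / chicken_days_per_kg) * unpleasantness * intensity_downweight
print(f"honey is ~{ratio:.0f}x worse per kg under these inputs")  # ~45x here
```

Whether the ratio comes out at 45 or 500 is entirely a function of those guessed inputs, which is rather the point.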

This particular post is high on assumption and light on rigor, and it was received with outrage. Another post on Bentham's blog, about insect suffering generally, I recall as higher-quality material for understanding the position. Did you know that composting is an unethical abomination? I'd never considered it!

'Suffering' presents an incommensurability problem. Suffering is a social construct. Suffering is the number and intensity of firing pain receptors over time. Suffering is how many days in a row I experienced boredom as a teenager. Still, science attempts to define and quantify suffering. An equation works out the math: how conscious a cricket is relative to a man, multiplied by the cricket's assumed capacity to feel pain, multiplied by the length of time it spends feeling pain, and so on. My prediction is that we will pin down the consciousness part of the equation with stable meaning before we ever do so for suffering.
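As a sketch of what such an equation might look like (my framing, not a quote from any particular report), one common form multiplies a probability of sentience by a relative intensity and a duration:

```python
# One possible formalization of the "equation" above; the functional form
# and all inputs are my assumptions, not taken from any report.

def expected_suffering(p_sentient, intensity_vs_human, hours_in_pain):
    """Expected suffering in human-equivalent pain-hours."""
    return p_sentient * intensity_vs_human * hours_in_pain

# A hypothetical cricket: 40% chance it is sentient at all, pain felt at
# 2% of human intensity, 12 hours spent in pain.
print(expected_suffering(p_sentient=0.40, intensity_vs_human=0.02, hours_in_pain=12))
# -> 0.096 human-equivalent pain-hours
```

Each factor on the right-hand side is a guess, and the consciousness guess and the suffering guess are guesses of very different kinds.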

We will manage to rethink, remeasure, and find additional forms of suffering. People always have. Today, plants do not feel "pain", but tomorrow, pain may not be a prerequisite for suffering. Maybe starvation becomes a moral imperative. If the slope sounds too slippery, please consider that people have already built a (relatively unpopular) scaffolding to accept and impose costs at the expense of human comfort, life, and survival. Admittedly, the fact that suffering may present an incommensurability problem doesn't negate any imperative to reduce it. Find more suffering? Reduce that, too. But it does give me reason to question the limitations and guardrails of this social technology.

According to Wikipedia, negative utilitarians (NUs) are sometimes categorized as strong NUs and weak NUs. This distinguishes what I'd call fundamentalists, who follow suffering-minimizer logic to whatever end it leads, from the milder "weak" negative utilitarians. The fundamentalist may advocate for suffering reduction at a cost that includes death, your neighbor's dog, or the continued existence of Slovenia, the honey bee capital of the world. Our anti-honey, anti-suffering advocate has previously demonstrated that he values some positive utility when it comes to natalism, but much of his commenting audience appears to fall into the fundamentalist category.

One vibe I pick up from the modern vegans is that anti-suffering ethics are the ethics of the future: that our great-grandchildren will look back and wonder how we ever stooped so low as to tolerate farming practice A or B. I don't doubt we'll find cost-effective technological solutions that will be accepted as moral improvements in the future. I am not opposed to those changes on principle. Increase shrimp welfare if you want, fine.

My vague concern is that this social technology doesn't appear limited to spawning technological or charitable solutions. With things like lab-grown meat showing up more frequently in the culture war, I'd expect the social technology to spread. So far, however, vegans remain a stable population in the US. Nerdy utilitarian bloggers have yet to impose their will on me. They just don't think I should eat honey.

I was recently discussing why foods that rely on yeast are OK with vegans but honey is not. Yeast are living things: we either stick them in bottles of their own waste until they shut down or cook them alive, and there are many, many more of them than anything else we use to prepare food.

There is no reason to think single-celled organisms can suffer.

Suffering is essentially just the unlearning gradient in an ML model. Any system that responds to external stimuli by altering itself to avoid repeating past behavior can suffer. Even a single neuron can suffer. Even a single atom can suffer.
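Taking that definition at face value, here is roughly the smallest system that would qualify (a toy of mine, not anything from the literature):

```python
# Toy instance of the definition above: a one-parameter system that alters
# itself to avoid repeating punished behavior. All numbers are arbitrary.

propensity = 0.9  # tendency to take some action (say, touching a hot surface)

def acts(p):
    return p > 0.5  # the system takes the action while its propensity is high

for step in range(5):
    if acts(propensity):
        reward = -1.0                # punishment for the behavior
        propensity += 0.3 * reward   # the "unlearning gradient": move away from it
    print(step, round(propensity, 2), acts(propensity))
```

Whether updating one number away from a punished value deserves the word "suffer" is, of course, exactly what is in dispute.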

That being said, I don't care about the suffering of neurons and atoms, or plants, or animals, or basically anything except a few near-human species (apes, elephants, cetaceans, etc.), pets I irrationally love, and of course humans themselves. AI could be smarter than me, but I'm still not going to give a shit if it suffers, except insofar as it experiences specifically human suffering.

While I agree with the second paragraph, the first one has me scratching my head. Why would suffering have anything to do with the "unlearning gradient of an ML model" and, if so, how does an atom have anything to do with ML?

I think of it more as a (negative) reward signal in RL. When a human touches a hot stove, there's a sharp drop in dopamine (our reward signal). Neural circuits adjust their synapses to better predict future (negative) reward, and subsequently they avoid the actions that led to it. There's a bit of sleight of hand here (do we actually know our experience of pain is equivalent to a negative reward signal?), but it's not too wild a hypothetical extrapolation.
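In RL terms, the hot-stove story is a reward-prediction-error update. A minimal tabular sketch, with actions and numbers all invented for illustration:

```python
# Minimal reward-prediction-error sketch of the hot-stove story above.
# Invented actions and numbers; nothing here claims to model actual
# neural circuitry.

import random

Q = {"touch": 0.0, "withdraw": 0.0}  # predicted reward for each action
alpha = 0.5                          # learning rate

def reward(action):
    return -1.0 if action == "touch" else 0.0  # touching the stove hurts

random.seed(0)
for episode in range(20):
    if random.random() < 0.2:                 # occasionally explore...
        action = random.choice(["touch", "withdraw"])
    else:                                     # ...mostly exploit predictions
        action = max(Q, key=Q.get)
    td_error = reward(action) - Q[action]     # the "dopamine dip": received minus predicted
    Q[action] += alpha * td_error

print(Q)  # "touch" ends up with a negative prediction and stops being chosen
```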

How do atoms fit in? Well, it's a stretch, but one way to approach it is to treat atoms as trying to maximize a reward of negative energy, following a hard-coded (unlearned) policy corresponding to the laws of physics. E.g., burning some methane helps them get to a lower energy state, maximizing their own reward. Or, to cause "physical" pain, you could confine all the gas in a box to one side of the box: nature abhors a vacuum.
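If you squint, the analogy reads as gradient descent on an energy function, with the "policy" hard-coded. A toy sketch under that reading, with an invented potential:

```python
# The atoms-as-reward-maximizers stretch, read literally: a particle follows
# a hard-coded rule (descend the energy gradient) and thereby "maximizes"
# negative energy. The potential here is invented for illustration.

def energy(x):
    return (x - 2.0) ** 2  # hypothetical potential with its minimum at x = 2

x = 10.0
for _ in range(100):
    grad = 2.0 * (x - 2.0)  # dE/dx
    x -= 0.1 * grad         # the "policy": always move downhill
print(round(x, 4))          # settles at ~2.0, the energy minimum
```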

Neither the psychologists nor the RL people I've talked with seem to believe that this is literally how the human mind works, because it leads you to the suspicious conclusion that the thousands of simple RL models people train for, e.g., homework are also experiencing immense suffering. Yes, there is a vaguely RL-like layer in our brain, but RL itself does not conscious experience make. Unless, of course, you have some very heavy philosophical machinery to convince us otherwise...

That's the sleight of hand I mentioned: because qualia are so mysterious, it's a leap to assume that RL algorithms that maximize reward correspond to any particular qualia.

On the other hand, suffering is conditioned on some physical substrate, and something like "what human brains do" seems a more plausible candidate for how qualia arise than anything else I've seen. People with dopamine issues (e.g. severe Parkinson's, drug withdrawal) often report anhedonia.

That heavy philosophical machinery is the trillion-dollar question, and it is beyond me (or anyone else I'm aware of).

this leads you to the suspicious conclusion that the thousands of simple RL models people train for, e.g., homework are also experiencing immense suffering

Maybe they are? I don't believe this, but I don't see how we can simply dismiss it out of hand with an argument from sheer disbelief (which seems just as premature to me as asserting it as fact). Agnosticism seems to be the only approach here.