
Culture War Roundup for the week of July 28, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


There is a philosophical problem regarding whether pure altruism is conceptually possible; if you help someone, and you receive in exchange nothing but the satisfaction of having helped someone, then haven't you received something of value, thereby rendering the altruistic act "impure"? What if you don't even feel good about it, could it be pure then? But then, how were you motivated to help in the first place if you didn't even feel good about it? Regardless of how we answer these questions, I believe we can put the idea of absolute pure altruism to the side, because if it exists at all, it surely encompasses a minority of human interactions--source

I have a different perspective here: a) I think it's conceptually possible, and b) the interesting question is whether people who say they are doing pure altruism really are. I first encountered the term pure altruism in two papers by James Andreoni, from 1989 and 1990. In them, Andreoni lays out a model of altruistic giving where agents contribute to a public good both because they value it in itself and because they get a private benefit, a "warm glow". He has some nice academic results, like a quick mechanism for indexing one's own altruism (if one were taxed one dollar less, or a thousand, how much more would one donate?), and other observations (taxation may not produce warm glow, and as a result increasing taxation by some amount doesn't reduce donations by that amount; when parents get a warm glow from giving to their children, children are incentivized to be more "spoilt" in a technical sense).
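For reference, here's a minimal sketch of that setup in my own notation (paraphrasing Andreoni's model, not quoting it):

```latex
% Impure altruism, sketched (my notation, paraphrasing Andreoni 1989/1990).
% Agent i splits wealth w_i between private consumption x_i and a gift g_i
% to a public good G:
U_i = u_i(x_i,\, G,\, g_i), \qquad G = \sum_j g_j, \qquad x_i + g_i = w_i
% Pure altruism:  U_i = u_i(x_i, G)   -- only the total public good matters.
% Pure warm glow: U_i = u_i(x_i, g_i) -- only one's own act of giving matters.
% The mixed case is "impure altruism": because g_i enters utility directly,
% a $1 tax-financed increase in G does not crowd out $1 of private donations.
```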

Are people who say they are doing pure altruism actually doing so? Often not. There are aspects of the EA community that just don't make sense if you consider its participants pure white cherubs of innocence and selflessness, although each particular case will be uncertain and ambiguous, and pointing out the discrepancy is tricky.

One of the biggest bets Open Philanthropy—a large philanthropic foundation I'm acquainted with—is making is on its own people. 161 people, earning say 150K to 250K salaries, with overhead of 20% to 40% (?), comes to roughly 29M to 56M/year—probably higher than any one of their grants in 2024 and 2025. This does not include the cost of their office, another cool 16.5M. This leads them to have a class interest: they are invested in that form of doing philanthropy—rather than anonymous part-time rotating grantmakers whose funds under management grow or shrink depending on their evaluated success (like the Survival and Flourishing Fund).
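A quick back-of-envelope, using the figures above (the headcount is stated; the salary and overhead numbers are my guesses, as flagged):

```python
# Back-of-envelope for Open Philanthropy's staff costs.
# Headcount is from the figure above; salaries and overhead are guesses.
HEADCOUNT = 161
SALARY_LOW, SALARY_HIGH = 150_000, 250_000   # assumed salary range
OVERHEAD_LOW, OVERHEAD_HIGH = 0.20, 0.40     # assumed overhead on top of salary

low = HEADCOUNT * SALARY_LOW * (1 + OVERHEAD_LOW)
high = HEADCOUNT * SALARY_HIGH * (1 + OVERHEAD_HIGH)
print(f"${low/1e6:.0f}M to ${high/1e6:.0f}M per year")
# -> $29M to $56M per year
```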

Trotsky's The Revolution Betrayed outlines how this happened with the apparatchiks in Russia: they were in charge of the wellbeing of the Soviet Union and ended up reallocating resources to themselves. Some of my experience is that grantmakers just want to be sucked up to, their ideas confirmed, their egos upheld, their strategies validated, their personalities admired. But at the same time they are selected for not rocking the boat in a deep way. More mundanely, people get grants for projects that don't work out, and don't pivot, because they think pivoting would mean losing their jobs. EA seems like a failed Schelling point to me, because it advertises itself as doing pure altruism, but the actors end up fighting for their own self-interest, sometimes in quite obvious ways.

Is pure altruism selected out? If you do something for someone such that you don't get anything out of it, can you continue doing that into the future? What is the mechanism? I think this is a key question, and it leads to rederiving some non-naïve form of altruism. Or alternatively, it leads to exploiting the pure altruism until its resources are exhausted. One of the first guys to think about this ended up killing himself.

On the other side, pure altruism can be understood essentially as a mating display, because it's a costly signal of strength. The underlying purpose of ideology X isn't ideology X; it's displaying that you can still be a well-adjusted person even with its iron around your neck. Some version of this is fine by me, but the problem comes when people really believe their ideologies and cripple themselves for real, as happened with Germany's industrial economy as a result of its terrible energy policy. This matters to me: I made a heavily real, non-fake investment in learning German. I passed the C1 exam and probably at some point had a C2 level. Now I just do business with Americans instead. I also find it aesthetically distasteful when people do something which is nominally about, e.g., helping the homeless in a way that makes the problem worse, partly because nobody taught me how to do the Straussian reading.

At the same time, how do you coordinate around public goods? One cool answer is dominant assurance contracts, but in practice these haven't been implemented much, perhaps because the people who could implement them have jobs as grantmakers they would rather preserve, but also because part of the problem of setting up a new project is just distribution, and you have a chicken-and-egg problem here (you could do a dominant assurance funding model if only you had already built the distribution funnel for your thing, but building that funnel is a big part of the job).
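For anyone unfamiliar with the mechanism, here's a toy sketch of the payoff rule (all names and numbers are illustrative; this isn't any real implementation):

```python
# Toy model of a dominant assurance contract (Tabarrok's mechanism).
# The entrepreneur posts a funding threshold and promises each pledger a
# refund PLUS a bonus if the threshold is not met, which is what makes
# pledging a (weakly) dominant strategy instead of a free-rider's gamble.

def settle(pledges: dict[str, float], threshold: float, bonus: float) -> dict[str, float]:
    """Return each pledger's net payment (+ means they pay, - means they receive)."""
    total = sum(pledges.values())
    if total >= threshold:
        # Success: pledges are collected and the project is funded.
        return dict(pledges)
    # Failure: pledges are never collected, and each pledger receives the
    # bonus on top, paid out of the entrepreneur's own pocket.
    return {name: -bonus for name in pledges}

# Example: the threshold is not met, so each pledger nets the $5 bonus.
print(settle({"alice": 100, "bob": 50}, threshold=500, bonus=5))
# -> {'alice': -5, 'bob': -5}
```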

Anyway, one answer here is to try to get people into man vs. nature games, because man vs. man conflicts are just fucked up.

I think "pure altruism" is a strawmanning of EA in general and Open Philanthropy in particular. One of EA's main tenets is that the traditional hyperfocus on overhead costs of charities is unhelpful as a measure of actual efficacy. If you want smart, driven people to do good work in allocating resources, paying them something like market rate is advisable. Otherwise, you're selecting on something other than merely talent for the job.

Of course, it's always possible OpenPhil is actually bad at their stated mission for whatever reason, including design flaws. So having different models out there, like volunteer crowdsourcing, is a good thing.

Famously, the Soviets did not rely on charitable giving to fund their efforts. Donors can always stop donating.

Scott has addressed this kind of thing--how much altruism is mandated or what is sufficiently pure--multiple times. Numerous essays in EA-land focus on the emotional unsustainability of pure altruism.

Some level of partiality/self-interest is allowable on pragmatic grounds alone. Martyrdom should not be a standard requirement.

it's always possible OpenPhil is actually bad at their stated mission for whatever reason, including design flaws.

OpenPhil might be the 800-pound gorilla funding EA, but it is useful to remember that OpenPhil is not particularly EA.

Scott has addressed this kind of thing--how much altruism is mandated or what is sufficiently pure--multiple times.

While in the past Scott has written about the yoke being easy and the burden light, he went on to donate a kidney and wrote that one should keep climbing the tower. I am skeptical that his past writings addressing the questions of purity are, uh, pure.

There are many degrees of purity. Ultimately, one can always sacrifice more for the cause.

Scott seems to genuinely enjoy his life in terms of material comfort, in addition to his significant charitable giving. And the kidney.

So whatever the threshold is for diminishing returns on his charitable endeavors, he seems to be on the sustainable side.

wrote that one should keep climbing the tower.

I think you actually managed to interpret that exactly backwards. In addition to misapplying it contextually.

He wrote that one should retreat down to the lowest level of the tower one finds necessary to fulfill one's moral obligations. If you don't share those foundational assumptions, then that's fine. But plenty of people in the West ostensibly do.

He wrote that one should retreat down to the lowest level of the tower one finds necessary to fulfill one's moral obligations.

I disagree; the last exchange of his example suggests that when you've retreated to that lowest level, someone like Scott should come along to keep nudging you up the layers:

Q: FINE. YOU WIN. Now I’m donating 10% of my income to charity.

A: You should donate more effectively.

The person is not left to be comfortable at their fulfillment level.

I also continue to think it's interesting that he opposed this kind of shenanigan in his What We Owe The Future review, published the next day, TINACBNIEAC:

This series of commitments feels basically right to me and I think it prevents muggings.

But I’m not sure I want to play the philosophy game. Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something. If that’s true, I will just not do that, and switch to some other set of axioms.... I realize that will intuitively feel like leaving some utility on the table - the first step in the chain just looks so much obviously better than the starting point - but I’m willing to make that sacrifice.

He perceives that the muggings can't really be prevented, that there's always going to be another switch, and that a rational choice is to avoid the whole game and choose different axioms.

I disagree, the last exchange of his example suggests that when you've retreated to that lowest level, someone like Scott should come along to keep nudging you up the layers

So? What's wrong with a nudge? Coercion is bad. Persuasion is fine.

But, again, isn't Scott doing the thing where he's actually arguing down from the "purists"?

The person is not left to be comfortable at their fulfillment level.

Doesn't that contradict his saying that the whole 10% line is a totally great place to be comfortable?

also continue to think it's interesting that he opposed this kind of shenanigan

He's trying to find a reasonable middle ground. For people like him. For the more typical person. For anyone.

Scott perceives that unbounded moral philosophy is a mug's game. So bind it a little.

What is obligatory? What is supererogatory? Reasonable people can disagree and avoid muggings.

I think the issue here is that you perceive Scott as expressing two different stances, but I see him saying the same basic thing: figure out what the obligatory minimum is, satisfice, and then anything beyond that is extra credit, but there's no reason to beat oneself up over maximization or allow a philosophical mugging.