
Culture War Roundup for the week of July 28, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


There is a philosophical problem regarding whether pure altruism is conceptually possible; if you help someone, and you receive in exchange nothing but the satisfaction of having helped someone, then haven't you received something of value, thereby rendering the altruistic act "impure"? What if you don't even feel good about it, could it be pure then? But then, how were you motivated to help in the first place if you didn't even feel good about it? Regardless of how we answer these questions, I believe we can put the idea of absolute pure altruism to the side, because if it exists at all, it surely encompasses a minority of human interactions--source

I have a different perspective here: a) I think it's conceptually possible, and b) the interesting question is whether the people who say they are doing pure altruism really are. I first encountered the term "pure altruism" in two papers by James Andreoni, from 1989 and 1990. In them, Andreoni lays out a model of altruistic giving where agents contribute to a public good both because they value it in itself and because they get a private benefit, a "warm glow". He has some nice academic results, like a quick mechanism for indexing one's own altruism (if one were taxed one dollar less, or a thousand, how much more would one donate?), and other observations: taxation may not produce warm glow, so increasing taxation by some amount doesn't reduce donations by that same amount; and when parents get a warm glow from giving to their children, children are incentivized to be more "spoilt" in a technical sense.
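For concreteness, here is the shape of Andreoni's model as I read the 1989/1990 papers (the notation is my paraphrase): each agent splits wealth between private consumption and a gift, and utility can depend on both the total public good and one's own gift.

```latex
% Andreoni's impure-altruism model (1989, 1990), notation paraphrased:
% agent i splits wealth w_i between private consumption x_i and a gift g_i
U_i = U_i(x_i,\; G,\; g_i), \qquad G = \sum_j g_j, \qquad x_i + g_i = w_i
% Pure altruism:  U_i = U_i(x_i, G)    -- only the total public good matters
% Pure warm glow: U_i = U_i(x_i, g_i)  -- only one's own gift matters
```

The in-between case, where both G and g_i enter the utility function, is what Andreoni calls "impure altruism", and it's the case his observations point to.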

Are people who say they are doing pure altruism actually doing so? Often not. There are aspects of the EA community that just don't make sense if you consider its participants pure white cherubs of innocence and selflessness, although each particular case will be uncertain and ambiguous, and pointing out the discrepancy is tricky.

One of the biggest bets Open Philanthropy—a large philanthropic foundation I'm acquainted with—is making is on its own people. 161 people, earning say 150K to 250K salaries, with overhead of 20% to 40% (?), comes to 30M to 52M/year—probably more than any single grant they made in 2024 or 2025. This does not include the cost of their office, another cool 16.5M. This gives them a class interest: they are invested in that particular form of doing philanthropy, rather than, say, anonymous part-time rotating grantmakers whose funds under management grow or shrink depending on their evaluated success (like the Survival and Flourishing Fund).
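Spelling out that back-of-the-envelope, with the post's guesses as inputs (nothing here comes from OpenPhil's books, and the question mark on the overhead figure is doing real work):

```python
# Reproducing the staff-cost range above; all inputs are guesses from the post.
n_staff = 161
low  = n_staff * 150_000 * 1.20   # ~29M/year: low salary, 20% overhead
high = n_staff * 250_000 * 1.30   # ~52M/year: high salary; note the 52M figure
                                  # implies ~30% overhead rather than the full 40%
print(f"{low / 1e6:.0f}M to {high / 1e6:.0f}M per year")  # -> 29M to 52M per year
```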

Trotsky's The Revolution Betrayed outlines how this happened with the apparatchiks in Russia: the apparatchiks were nominally in charge of the wellbeing of the Soviet Union, and they ended up reallocating resources to themselves. Some of my experience is that grantmakers just want to be sucked up to: their ideas confirmed, their egos upheld, their strategies validated, their personalities admired. At the same time, they are selected for not rocking the boat in any deep way. More mundanely, people get grants for projects that don't work out and then don't pivot, because they think pivoting would mean losing their jobs. EA seems like a failed Schelling point to me: it advertises itself as doing pure altruism, but the actors end up fighting for their own self-interest, sometimes in quite obvious ways.

Is pure altruism selected out? If you do something for someone and get nothing out of it, can you keep doing that into the future? What is the mechanism? I think this is a key question, and answering it leads to rederiving some non-naïve form of altruism. Alternatively, it leads to the pure altruists being exploited until their resources are exhausted. One of the first guys to think about this ended up killing himself.
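As a toy illustration of the selection argument (my own construction, not anything from the post or the literature): if altruists pay a cost whose benefit lands on altruists and exploiters alike, and no reciprocity or reputation channel returns anything to the giver, altruists drift out of the population.

```python
import random

# Toy replicator dynamics: altruists pay COST to give a random individual
# BENEFIT; exploiters only receive. No reciprocity, no reputation -- exactly
# the mechanisms whose absence is in question. All numbers are arbitrary.
random.seed(0)
POP, GENS = 100, 50
COST, BENEFIT = 1.0, 3.0

pop = ["altruist"] * 50 + ["exploiter"] * 50
for _ in range(GENS):
    payoff = [10.0] * POP                     # baseline endowment
    for i, kind in enumerate(pop):
        if kind == "altruist":
            payoff[i] -= COST
            j = random.choice([k for k in range(POP) if k != i])
            payoff[j] += BENEFIT              # recipient may be either type
    # next generation reproduces in proportion to payoff
    pop = random.choices(pop, weights=payoff, k=POP)

print(pop.count("altruist"), "altruists left out of", POP)
```

Altruists receive the same expected transfers as everyone else but are strictly worse off by COST, so selection removes them unless some mechanism (kin, reciprocity, reputation, warm glow plus social reward) routes benefits back to the giver.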

On the other side, pure altruism can be understood essentially as a mating display, because it's a costly signal of strength. The underlying purpose of ideology X isn't ideology X; it's displaying that you can still be a well-adjusted person even with its iron around your neck. Some version of this is fine by me, but the problem arises when people really believe their ideologies and cripple themselves for real, as happened with Germany's industrial economy as a result of its terrible energy policy. This matters to me: I made a very real, non-fake investment in learning German. I passed the C1 exam, and probably at some point had a C2 level. Now I just do business with Americans instead. I also find it aesthetically distasteful when people do something that is nominally about, e.g., helping the homeless in a way that makes the problem worse, partly because nobody taught me how to do the Straussian reading.

At the same time, how do you coordinate around public goods? One cool answer is dominant assurance contracts, but in practice these haven't been implemented much, perhaps because the people who could implement them have jobs as grantmakers that they would rather preserve, but also because part of the problem of setting up a new project is just distribution, and you have a chicken-and-egg problem here (you could use a dominant assurance funding model if only you had already built the distribution funnel for your thing, but building that funnel is a big part of the job).
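For readers who haven't run into the mechanism: in a dominant assurance contract (Tabarrok's design), the entrepreneur promises that failed pledges are refunded plus a bonus paid out of the entrepreneur's own pocket. In the simple case where every potential contributor is pivotal, pledging strictly dominates abstaining. A minimal sketch with made-up numbers:

```python
# Dominant assurance contract payoffs. Ten potential contributors, threshold
# equal to all ten pledges, so each contributor is pivotal. All numbers are
# illustrative, not from any real implementation.
PLEDGE, BONUS, VALUE, THRESHOLD = 10, 2, 25, 100

def payoff(pledged: bool, others_total: int) -> int:
    total = others_total + (PLEDGE if pledged else 0)
    if total >= THRESHOLD:                     # funded: public good provided
        return VALUE - (PLEDGE if pledged else 0)
    return BONUS if pledged else 0             # failed: refund plus bonus to pledgers

for others in (70, 80, 90):                    # 7, 8, or 9 of the other 9 pledge
    print(f"others={others}: pledge -> {payoff(True, others)}, "
          f"abstain -> {payoff(False, others)}")
# pledge beats abstain in every row: the bonus (2 > 0) covers the failure
# case, and being pivotal yields VALUE - PLEDGE (15 > 0) in the success case.
```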

Anyway, one answer here is to try to get people into man-vs.-nature games, because man-vs.-man conflicts are just fucked up.

I think "pure altruism" is a strawmanning of EA in general and Open Philanthropy in particular. One of EA's main tenets is that the traditional hyperfocus on overhead costs of charities is unhelpful as a measure of actual efficacy. If you want smart, driven people to do good work in allocating resources, paying them something like market rate is advisable. Otherwise, you're selecting on something other than merely talent for the job.

Of course, it's always possible OpenPhil is actually bad at their stated mission for whatever reason, including design flaws. So having different models out there, like volunteer crowdsourcing, is a good thing.

Famously, the Soviets did not rely on charitable giving to fund their efforts. Donors can always stop donating.

Scott has addressed this kind of thing--how much altruism is mandated, or what counts as sufficiently pure--multiple times. Numerous essays in EA Land focus on the emotional unsustainability of pure altruism.

Some level of partiality/self-interest is allowable on pragmatic grounds alone. Martyrdom should not be a standard requirement.

strawmanning of EA in general

I think EA does have a fair share of pure altruists. I know of at least four people who have gone celibate over the last few years as a result of being too focused on their jobs (and I claim they could have had romantic success if they had chosen to pursue it). I think coordinating around "we are doing the most good" also has an easy attractor in pure altruism.

the Soviets did not rely on charitable giving to fund their efforts

The thing I was pointing at is that the job of the apparatchiks was nominally to be pure altruists toward the population of Russia as a whole, and this predictably failed.

That there are some "pure" altruists in EA is not what I am picking at. The essays I reference are targeted at that very phenomenon, because it is a thing some people do. Selection effects are what they are. You are making points without knowledge of what has already been discussed on the topic. Go google "avoiding EA burnout" and you'll find plenty of stuff on this front.

The thing I am pointing at is that comparing Soviet anything to EA is apples to hand grenades. Donors are not coerced. OpenPhil analysts are not employees of the state, and they aim at "doing the most good" insofar as they can figure that out. The failure mode that is most apt is the standard "NGO Industrial Complex", where organizations exist in order to exist, not to actually solve the problem in their mission statement.

You are making points without knowledge of what has already been discussed on the topic. Go google "avoiding EA burnout" and you'll find plenty of stuff on this front.

I think you are empirically wrong on this. E.g., if you go to one of the most upvoted such essays you will see my comment at the top. But it's been a while. Maybe there is much that I have forgotten.

No, this really wasn't much better than posting a LMGTFY. Don't do this.

When someone obstinately denies easily checked facts, what do you suggest?

(a) Let it go and disengage. (b) Provide links to specific citations and proactively provide an explanation of their relevance. (c) Consider the possibility that they are not "ignoring facts" but that you are both interpreting the same evidence in a way that caters to your own biases and that you need to actually make an argument.
