This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes -
I have a different perspective here: a) I think it's conceptually possible, and b) the interesting question is whether people who say they are doing pure altruism really are. I first encountered the term pure altruism in two papers by James Andreoni, from 1989 and 1990. In them, Andreoni lays out a model of altruistic giving where agents contribute to a public good both because they value it in itself and because they get a private benefit, a "warm glow". He has some nice academic results, like a quick mechanism for indexing one's own altruism (if one were taxed one dollar less, or a thousand, how much more would one donate?), and other observations (taxation may not produce warm glow, and as a result increasing taxation by some amount doesn't reduce donations by that amount; when parents get a warm glow from giving to their children, children are incentivized to be more "spoilt" in a technical sense).
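For concreteness, the shape of the model, sketched from memory rather than transcribed from the papers (notation is mine):

```latex
% Andreoni-style impure altruism, sketched from memory (notation mine).
% Agent i divides wealth w_i between private consumption x_i and a gift g_i:
\[
  U_i = U_i(x_i,\, G,\, g_i), \qquad x_i + g_i = w_i, \qquad G = \sum\nolimits_j g_j
\]
% Pure altruism is the special case U_i(x_i, G): only total provision matters.
% Pure warm glow is U_i(x_i, g_i): only one's own act of giving matters.
% The g_i term is why a dollar of tax-funded provision need not crowd out a
% full dollar of private giving.
```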
Are people who say they are doing pure altruism actually doing so? Often not. There are aspects of the EA community that just don't make sense if you consider its participants pure white cherubs of innocence and selflessness, although each particular case will be uncertain and ambiguous, and pointing out the discrepancy is tricky.
One of the biggest bets Open Philanthropy, a large philanthropic foundation I'm acquainted with, is making is in its own people. 161 people, earning say 150K to 250K salaries, with overhead of 20% to 40% (?), is roughly 29M to 56M/year, probably higher than any one of their grants in 2024 and 2025. This does not include the cost of their office, another cool 16.5M. This leads them to have a class interest: they are invested in that form of doing philanthropy, rather than, say, anonymous part-time rotating grantmakers whose funds under management grow or shrink depending on their evaluated success (like the Survival and Flourishing Fund).
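For transparency, the back-of-the-envelope arithmetic (headcount aside, the inputs are my guesses, hence the question mark above):

```python
# Back-of-the-envelope version of the payroll estimate above. The headcount
# is from the comment; the salary band and overhead multiplier are guesses,
# not audited figures.
headcount = 161
salary_low, salary_high = 150_000, 250_000    # assumed salary band, USD/year
overhead_low, overhead_high = 0.20, 0.40      # assumed overhead multiplier

low = headcount * salary_low * (1 + overhead_low)
high = headcount * salary_high * (1 + overhead_high)
print(f"~${low / 1e6:.0f}M to ~${high / 1e6:.0f}M per year")  # ~29M to ~56M
```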
Trotsky's The Revolution Betrayed outlines how this happened with the apparatchiks in Russia: they were nominally in charge of the wellbeing of the Soviet Union, and ended up reallocating resources to themselves. Some of my experience is that grantmakers just want to be sucked up to, their ideas confirmed, their egos upheld, their strategies validated, their personalities admired. But at the same time, they are selected for not rocking the boat in a deep way. More mundanely, people get grants for projects that don't work out, and don't pivot, because they think pivoting would involve losing their jobs. EA seems like a failed Schelling point to me, because it advertises itself as doing pure altruism, but the actors end up fighting for their own self-interest, sometimes in quite obvious ways.
Is pure altruism selected out? If you do something for someone such that you don't get anything out of it, can you continue doing that into the future? What is the mechanism? I think this is a key question, one that leads to rederiving some non-naïve form of altruism. Or alternatively, it leads to the pure altruism being exploited until its resources are exhausted. One of the first guys to think about this ended up killing himself.
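Here is the bleak branch of that question as a toy simulation (entirely my own construction, not from any paper): altruists who give and never receive simply run out.

```python
# Toy illustration, my own construction: pure altruists transfer a unit of
# resources each round and receive nothing back, so their stocks hit zero
# and they exit. Absent a replenishing mechanism (reciprocity, reputation,
# warm glow), pure altruism is selected out.
altruists = {f"a{i}": 10.0 for i in range(5)}   # starting resource stocks
exploiter_gains = 0.0
rounds = 0
while altruists:
    rounds += 1
    for name in list(altruists):
        altruists[name] -= 1.0        # give one unit away, get nothing back
        exploiter_gains += 1.0
        if altruists[name] <= 0:
            del altruists[name]       # resources exhausted: exits the game
print(f"altruists extinct after {rounds} rounds; exploiters took {exploiter_gains}")
```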
On the other side, pure altruism can be understood essentially as a mating display, because it's a costly signal of strength. The underlying purpose of ideology X isn't ideology X, it's displaying that you can still be a well-adjusted person even with its iron around your neck. Some version of this is fine by me, but the problem comes when people really believe their ideologies and cripple themselves for real, as happened with Germany's industrial economy as a result of its terrible energy policy. This matters to me: I made a very real, non-fake investment in learning German. I passed the C1 exam, and probably at some point did have a C2 level. Now I just do business with Americans instead. I also find it aesthetically distasteful when people do something which is nominally about, e.g., helping the homeless, in a way that makes the problem worse, partly because nobody taught me how to do the Straussian reading.
At the same time, how do you coordinate around public goods? One cool answer is dominant assurance contracts, but in practice these haven't been implemented much, perhaps because the people who could implement them would rather preserve their jobs as grantmakers, but also because part of the problem of setting up a new project is just distribution, and you have a chicken-and-egg problem there (you could use a dominant assurance funding model if only you had already built the distribution funnel for your thing, but building that funnel is a big part of the job).
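For readers who haven't seen the mechanism: pledgers get a full refund plus a bonus if the funding threshold isn't met, which, in the simplest case where every pledger is pivotal, makes pledging strictly dominant. A toy payoff sketch with illustrative numbers:

```python
# Dominant assurance contract, toy payoff sketch (illustrative numbers).
# Simplest case: the good is produced only if every agent pledges, so each
# agent is pivotal. On failure, pledgers get a full refund PLUS a bonus paid
# by the entrepreneur; that bonus is what makes pledging dominant.
pledge, value, bonus = 100.0, 150.0, 10.0   # assumed: value > pledge > bonus

def payoff(i_pledge: bool, everyone_else_pledges: bool) -> float:
    funds = i_pledge and everyone_else_pledges
    if funds:
        return value - pledge              # good produced, net of my pledge
    return bonus if i_pledge else 0.0      # failure: refund + bonus vs nothing

for others in (True, False):
    print(f"others pledge={others}: pledge -> {payoff(True, others):+.0f}, "
          f"abstain -> {payoff(False, others):+.0f}")
# others pledge=True : pledge -> +50, abstain -> +0
# others pledge=False: pledge -> +10, abstain -> +0
```

The design choice is that the entrepreneur, not the pledgers, bears the failure risk, which is exactly why distribution (getting enough eyeballs on the contract) remains the hard part. Beyond this everyone-pivotal case, the dominance argument weakens to an equilibrium one.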
Anyway, one answer here is to try to get people into man-vs-nature games, because man-vs-man conflicts are just fucked up.
I think "pure altruism" is a strawmanning of EA in general and Open Philanthropy in particular. One of EA's main tenets is that the traditional hyperfocus on overhead costs of charities is unhelpful as a measure of actual efficacy. If you want smart, driven people to do good work in allocating resources, paying them something like market rate is advisable. Otherwise, you're selecting on something other than merely talent for the job.
Of course, it's always possible OpenPhil is actually bad at their stated mission for whatever reason, including design flaws. So having different models out there, like volunteer crowdsourcing, is a good thing.
Famously, the Soviets did not rely on charitable giving to fund their efforts. Donors can always stop donating.
Scott has addressed this kind of thing (how much altruism is mandated, and what counts as sufficiently pure) multiple times. Numerous essays in EA land focus on the emotional unsustainability of pure altruism.
Some level of partiality/self-interest is allowable on pragmatic grounds alone. Martyrdom should not be a standard requirement.
I think EA does have its fair share of pure altruists. I know of at least four people who have gone celibate over the last few years as a result of being too focused on their jobs (and I claim they could have had romantic success if they had chosen to pursue it). I think coordinating around "we are doing the most good" also has an easy attractor in pure altruism.
The thing I was pointing at is that the apparatchiks' job was, nominally, to be pure altruists towards the population of Russia as a whole, and this predictably failed.
That there are some "pure" altruists in EA is not what I am picking at. The essays I reference are targeted at that very phenomenon because it is a thing some people do. Selection effects are what they are. You are making points without the knowledge of what is already been discussed on the topic. Go google "avoiding EA burnout" and you'll find a plenty of stuff on this front.
The thing I am pointing at is that comparing Soviet anything to EA is apples to hand grenades. Donors are not coerced. OpenPhil analysts are not employees of the state, and they are aimed at "doing the most good" insofar as they can figure that out. The most apt failure mode is the standard "NGO Industrial Complex", where organizations exist to exist, not to actually solve the problem in their mission statement.
I think you are empirically wrong on this. E.g., if you go to one of the most upvoted such essays you will see my comment at the top. But it's been a while. Maybe there is much that I have forgotten.
https://www.google.com/search?q=avoiding+effective+altruism+burnout
No, this really wasn't much better than posting a LMGTFY. Don't do this.
When someone obstinately denies easily checked facts what do you suggest?
Not the greatest comment for two reasons. First, it’s too snarky by half. Second, my word, that’s a lot of unnecessary tracking info. In case you don’t know, you could have deleted everything after the first “&” and been fine.
I could have made it worse and used the ol' LMGTFY.
There are some people in this conversation who, on various topics (like air conditioning units in Europe), seemingly want to endlessly debate a relatively minor point that could be resolved with a quick Google search, and it baffles me.
People have been killed in the name of EA ideas. One can claim that this was the work of the criminal and mentally ill (and be right), but that's also what the Marxists say about their bad apples.
I, for one, think that if you gave Soviet levels of power to the shrimp welfare people, they would be very unwise with it. I don't think that's an unreasonable view.
Soviet levels of power were not granted from above by the God of Thought Experiments; they were achieved by mobilizing large numbers of people to violence with arguments such as "this guy over here is better off than you, and it was at your expense; go lynch the kulak". I do not think shrimp welfare is as persuasive an argument.
I think you're vastly underrating the earnestness and good will of communists. People very rarely start with murderous intent. That tends to proceed from the grinding of relatively benign dictums against the realities of power.
Moreover, please don't do the whole "it's just a few kooks on college campuses" routine; one loses the use of that argument after their first SBF. You and I are not beyond lynching kulaks; there is just a precarious set of incentives that allows us to maintain the moral rectitude not to do so. And I'm arguing that most EA people have, as part of their utilitarian construction, jettisoned important parts of those incentives.
To wit, I recommend rule utilitarianism and a higher degree of humility before history and the human condition.
Such as? If we're referring to the Ziz stuff, then that's not going to cut it for me, in that they were not part of "EA" in any meaningful sense for a long time before the real insanity began.
But also, plenty of people have been killed in the name of classic liberal ideas.
Probably! I can't get over the fact that Classic Environmentalism is anti-interventionist to the point that some want humanity to disappear, while some EA types are so interventionist they want to basically eliminate nature because of the inherent suffering.
In light of your own answer, what is the ideological component in EA that would prevent interventionist types, in principle, from becoming Soviet-tier hand grenades? I don't see it. I see the same type of unbounded consequentialism that can allow people to engage in the same sort of evil in the name of the ultimate good.
EA has no provision against people thinking of themselves as bringing about a utopia, and that makes it a dangerous philosophy. And this is why Ziz killed people and SBF defrauded millions.
Marx was once a benign economics nerd too.
Well, there is no one "EA", but broadly speaking EA exists within the liberal democratic view of human rights. So "unbounded consequentialism" isn't actually on the menu for policy interventions.
I'm personally a rule utilitarian / classic liberal, so I care about specific classic (negative) human rights and about fostering material progress. So I like a lot of what EA is all about, but I have my differences: I do not like philosophical veils of ignorance and ponds of drowning kids, for example. In terms of rhetorical utility, though, I very much enjoy using EA as a hammer to bludgeon progressives/leftists with.
I do not think it is fair to directly fault EA at large for Ziz and SBF. In the former case, they literally disavowed the individual and their ideas. In the latter case, they were too trusting (I just assume all crypto is a scam by default) and deserve some demerits for that, but SBF also fooled a great many worldly financial types outside of EA.
Again, this is an extremely broad criticism that applies to many religions and ideologies.
What's a "human right"? I'm not asking what you think, you clearly believe in some utilitarian formulation of natural law, likely in the style of J.S. Mill. That has boundaries I'm well familiar with.
I'm asking what most EA people believe.
Because in my experience it's a lot less solid than what you have in mind: generally more aligned with Rawls than with Mill, and almost entirely without bounds.
I do not believe that Effective Altruists would categorically oppose vaccine mandates on grounds of bodily autonomy, for instance.
I understand those as fair arguments, but they are the same fair arguments Khrushchev made for Stalin and that Marx made for Guesde. We are responsible for what we bring into the world, the purpose of a system is what it does, etc.
Of course. And I denounce them all as capable of the same horrors.
Anything unbounded has the same problem.