
Culture War Roundup for the week of December 4, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Scott Alexander recently argued in favor of Effective Altruism in the wake of the scandal in which effective altruists tried to oust Sam Altman from OpenAI.

His argument starts by focusing on how different factions attack EA from contradictory perspectives: those on the right call them woke, while those on the left call them fascists and white supremacists. The implication seems to be that they will be attacked by all sides no matter what, so we shouldn't take such criticisms seriously. He then mostly focuses on an estimated 200,000 lives saved in the developing world.

My problem with this is that it obscures something that isn't a mystery: EA's politics align much more closely with the Democratic establishment than with the right, and there isn't any substantial confrontation of what that means.

According to my brief research, and claims I found in a 2022 Effective Altruism forum discussion in which he participated, the biggest donor to Effective Altruism is Asana CEO Dustin Moskovitz.

His company, Asana, contributed $45 million in the 2020 election, and he also contributed millions to the Future Forward PAC:

https://www.opensecrets.org/2020-presidential-race/joe-biden/contributors?id=N00001669

https://www.opensecrets.org/news/2020/10/pro-biden-super-pac-darkmon/

https://www.cnbc.com/2020/11/02/tech-billionaire-2020-election-donations-final-tally.html

https://bluetent.us/articles/campaigns-elections/dustin-moskovitz-cari-tuna-democratic-donor-2020/

If one looks at Open Philanthropy or the EA forum and searches for controversial cultural issues, there is occasionally some small dissent, but for the most part they follow the liberal party line.

Let's look at Open Philanthropy, an EA organization and Dustin Moskovitz's organization. Scott certainly wants to give EA and Open Philanthropy credit for promoting YIMBY.

However, this organization has also funded decriminalization policies and pro-migration policies:

https://www.openphilanthropy.org/focus/criminal-justice-reform/

https://www.openphilanthropy.org/focus/immigration-policy/

I wonder if the well-funded caravans of migrants we see in some areas of the world have to some extent to do with funding related to EA.

Recently there was a mini EA scandal in which one individual had expressed HBD views in the past; this was made into a scandal, and he was condemned by many in the movement, though not entirely unanimously. https://forum.effectivealtruism.org/posts/8zLwD862MRGZTzs8k/a-personal-response-to-nick-bostrom-s-apology-for-an-old

https://forum.effectivealtruism.org/posts/kuqgJDPF6nfscSZsZ/thread-for-discussing-bostrom-s-email-and-apology

Basically, this individual wrote an email 26 years ago that used naughty language to make the point that you should use less offensive language when arguing for race realism.

Then he apologized due to pressure and argued:

What are my actual views? I do think that provocative communication styles have a place—but not like this! I also think that it is deeply unfair that unequal access to education, nutrients, and basic healthcare leads to inequality in social outcomes, including sometimes disparities in skills and cognitive capacity. This is a huge moral travesty that we should not paper over or downplay. Much of my personal charitable giving over the years has gone to fighting exactly this problem: I’ve given many thousands of pounds to organizations including to the SCI Foundation, GiveDirectly, the Black Health Alliance, the Iodine Global Network, BasicNeeds, and the Christian Blind Mission.

Then there is OpenAI and ChatGPT. Effective altruists have been influential at OpenAI, and ChatGPT has a liberal bias. https://www.foxnews.com/media/chatgpt-faces-mounting-accusations-woke-liberal-bias

Another thing to observe is the demographics of effective altruists.

Only 0.9% are right-wing and 2.5% center-right, with the majority being on the left: 40% center-left and 32% identifying as left. But that is self-identification, just as Biden could be identified by some as center-left and by others, including myself, as far left. They are also 46% vegan, and 85.9% are atheists.

https://rethinkpriorities.org/publications/eas2019-community-demographics-characteristics

I haven't encountered any group with such small representation of right-wingers that is actually fair when promoting a political agenda toward the right wing, or toward groups seen as related to the right. Moreover, if you search their forum, effective altruists are much more concerned about the lack of racial and ethnic diversity than about ideological diversity.

Climate change and veganism are two issues that could well lead to hardcore authoritarian policies and restrictions. Considering the demographics of EA, and the fact that Peter Singer is an important figure in it and helped coin the term, I do wonder whether EA influence on those issues would lead to policies being imposed on us. In the moral framing of animal liberation activists like Singer, we see a moral urgency. As with all identity movements, elevating one group, such as animals, ends up reducing the position of another group, such as humans, or those who aren't vegan.

The issue is that the networks reinforced through EA might already have the promotion of this political agenda as part of their purpose. And these networks, which developed in part through EA and put like-minded ideologues together to organize, can also expand to promote their political agenda outside the EA banner.

It does seem that at least a few of the people involved with effective altruism think that it fell victim to its coastal college demographics. https://www.fromthenew.world/p/what-the-hell-happened-to-effective

My other conclusion, related also to the OpenAI incident, is that these people's idea that they are the ones who will put humanity first will lead them to oust others and attempt to grab more power in the future too. When they do so, will they ever abandon it?

Scott Alexander himself argued that putting humanity first is the priority, and he had some faith in them thinking rationally when they tried to oust Sam Altman, even though Altman was the one who invited them inside. He might not necessarily agree with their action, but he sympathizes with the motive. https://twitter.com/slatestarcodex/status/1726132072031641853#m

That this action was dishonorable matters because, as with Sam Bankman-Fried, it continues the pattern of important ethical issues being pushed aside under the idea that effective altruists know best.

This means that Sam Altman won't be the last. It also means that we have a movement very susceptible to the same problems as authoritarian far-left movements in general: extreme self-confidence in their own vision, and a will to power. In addition to the whole issue of the road to hell being paved with good intentions, this inevitably encourages the power-hungry to join as well.

There does seem to be an important side of it concerned with donating in more unobjectionable ways, but in general effective altruism isn't separate from a political agenda that fits a political tribe. That agenda should be judged on its own merits, without the 200,000 lives saved in the developing world being accepted as an adequate answer for policies that affect the developed world. The short version of all this: if you have a problem with leftist/far-left NGOs, you should consider the effective altruism movement and some of its key players to be contributing in the same direction.

The term as a whole is stupid because almost every single person who operates a charity or is a large scale philanthropist sincerely believes they are engaged in “effective altruism”. Whether it’s Hobby Lobby types giving it all to whatever Evangelical church they belong to or Alex Soros funding justice reform think tanks and progressive DA candidates, they’re all believers in ‘effective altruism’. If they believed their altruism was ineffective they wouldn’t do it, and if they believed their motivations weren’t altruistic they’d presumably keep the money for themselves and those they cared about, or simply pursue naked political lobbying (which I’m not saying the above don’t do, to be clear, but I genuinely think they also think they’re helping ‘the world’ along whatever course they believe is best).

If they believed their altruism was ineffective they wouldn’t do it, and if they believed their motivations weren’t altruistic they’d presumably keep the money for themselves and those they cared about, or simply pursue naked political lobbying

I'm on net pretty neutral with respect to EA, but I don't think this line of criticism makes sense. To some extent, it's true that everyone who engages in charity does so out of the belief that they're effective and altruistic. But believing that you are those things doesn't tell us anything about whether you/your charity actually are those things. And where I think EA at least makes gestures at doing better (and they might do nothing more than those gestures, let's be clear) is in checking whether they really are effective (they do seem to have a big blind spot in checking whether they really are altruistic; believing that you're altruistic is, at best, a neutral signal and most likely a negative signal of one's altruism, and I don't think I've seen EA engage with this).

I think there's a strong argument to be made that, in their attempts to check if their (self-perceived) altruism is effective, all they're doing is adding on more epicycles to come to the conclusion that [charity they like] also happens to be [the most effective]. I honestly don't know enough about the logistics of what EA does, but certainly that should be the default presumption, sans clear indication that they're doing the hard work needed to check all that, such as giving oppositional people full access to all the tools to make the strongest argument possible against whatever charities they like (or for charities they dislike). And the more popular/decentralized EA is/becomes, the more that EA people will follow this default pattern of convincing themselves that [charity I like] is [the most effective] because memes like this always get implemented in the laziest, most intellectually dishonest way when spread out among a wide/decentralized populace.

I would also say, given that we know this pattern about the populace, EA has, in some real sense, the responsibility to craft their memes such that if they get out to the wider populace and actually become popular, that the people who lazily implement these memes in dishonest ways don't fall into this extremely common trap of matching [thing I like] with [good] while building up a whole facade of pseudoscience/pseudomath in order to justify it. I'm not sure EA is very concerned with this at all, and I'll admit that the defensiveness I see from EA when they're criticized both about their core mission and about their more superficial PR aspects doesn't make me optimistic.

The term as a whole is stupid because almost every single person who operates a charity or is a large scale philanthropist sincerely believes they are engaged in “effective altruism”.

I don't see how anyone can closely look at real-world charities and believe this. The charity world is full of organizations that transparently don't think about effectiveness at all. The Make-a-Wish foundation doesn't run the numbers and decide it's better to grant a wish for X dying first-world children than to save Y first-world children or Z third-world children from dying, they don't consider the question in the first place. Yes if you dilute "effectiveness" to "think they're doing good" they do think that, but they don't actually try to calculate effectiveness or even think about charity in those terms. And that's by many metrics one of the "good" charities! The bad ones are like the infamous Susan Komen Foundation or (to pick a minor charity I once researched) the anti-depression charity iFred. iFred spends the majority of donations on paying its own salaries and then spends the rest on "raising awareness of depression" by doing stuff like planting flowers and producing curriculum that nobody reads and that wouldn't do any good if they did. Before EA the best charity evaluation available was stuff like Charity Navigator that focuses on minimizing overhead instead of on effectiveness. That approach condemns iFred for spending too much money on overhead instead of flower-planting, but doesn't judge whether the flower-planting is effective, let alone considering questions like the relative effectiveness of malaria treatment vs. bednets vs. vaccines.

Even within the realm of political activism like you're focusing on, such activism is often justified as trying to help people rather than just pursuing the narrow political goal as effectively as possible, opening up comparisons to entirely different causes. As EA discovered, spending money trying to keep criminals out of prison is less efficient at helping people than health aid to third-worlders even if you assume there is zero cost to having criminals running free and that being in prison is as bad as being dead. You can criticize the political bias that led them to spend money on such things, but at least they realized it was stupid and stopped. Meanwhile BLM is a massive well-funded movement despite the fact that only a couple dozen unarmed black people are shot by police per year (and those cases are mostly still stuff like the criminal fighting for the officer's gun or trying to run him over in a car). Most liberals and a significant fraction of conservatives think that number is in the thousands, presumably including most BLM activists. It would be a massive waste even if it hadn't also reduced proactive policing and caused thousands of additional murders and traffic fatalities per year. That sure sounds like a situation that could benefit from public discourse having more interest in running the numbers! Similarly, controversial causes like the NGOs trying to import as many refugees as possible aren't just based on false ideological assumptions, but are less effective on their own terms than just helping people in their own countries where it's cheaper. The state of both the charity and activist world is really bad, so there's a lot of low-hanging fruit for those that actually try and any comparison should involve looking at specifics rather than vaguely assuming people must be acting reasonably.

The term as a whole is stupid because almost every single person who operates a charity or is a large scale philanthropist sincerely believes they are engaged in “effective altruism”.

Not really; many have not even thought to consider effectiveness. Or they optimize for things like tax avoidance, PR (many charities are run by companies), or rent-seeking.

If they believed their altruism was ineffective they wouldn’t do it

Is that realistic? There are social, financial, and knee-jerk reasons to be involved in charities. Many of these are pretty decoupled from the Effective Altruist™ sense of efficiency. You end up with organizations that send most of their money to administration and don’t bother to quantify their actual impact, because people will donate to them anyway.

It’s not like EA is immune to that sort of exploit. But caring (or pretending to care) about it is a pretty good distinguishing feature!

most effective

By whose standards?

By theirs, because they are the ones doing it.

Yes, subjectivity exists in all human actions. No, it is not insightful to point that out.

Of course, but that's why it's such a ridiculous concept in the first place. Objective measures like overhead, percent given to a cause, etc. are good to track, but whether or not it's a "good" cause is completely subjective, so what they are really doing is just funneling money to causes they support, which is more or less what everyone else does.

Are you perhaps thinking of CharityNavigator (which tracks things like percentage of donations that actually go to the ostensible cause) instead of GiveWell (which tracks things like expected impact of the donation in terms of the metric the organization is supposed to be helping with)?

The latter - that impact is more important than intention or purity or self-sacrifice - is the place where EA distinguishes itself from normal charitable people. Normal people are pretty altruistic, but they're not necessarily strategic about it, because most people are not strategic about most of the things they do most of the time, and in particular are not strategic about things that don't significantly affect them and where they will probably never get feedback about whether their approach worked.

The most effective for their specific goal, which is some form of Peter Singer human-centric utilitarianism in which projected saved human lives (or projected bonus human life years) are maximized. And likewise, every other charity is just optimizing effectiveness for a specific goal, some Christian charity dedicated to banning abortion is usually happy to switch method to boost efficiency.

The term as a whole is stupid because almost every single person who operates a charity or is a large scale philanthropist sincerely believes they are engaged in “effective altruism”

I honestly don't think this is true. A lot of people who start charities choose a cause that has impacted them personally with little thought to whether this is a cause where dollars go the furthest. EA means more than just not actively trying to waste your donation. It means giving rigorous thought to the tradeoffs involved.

The only rigor required is whatever bullshit statistical model EAs design to ‘prove’ that their approach technically saves 2.07% more lives than something else.

Consider that EAs spend a lot of money on AI doom research, how do they calculate that this is more effective at saving lives than malaria nets? I’m sure some LessWrong autist has done ‘the math’, but it essentially amounts to a sincere belief that the chance of Yudkowsky saving the human race by coming up with thought experiments outweighs the lives saved by putting the money into nets. There’s nothing empirical to that tradeoff, the Christians likewise believe they’re saving x lives from damnation, Soros might well believe he’s saving x lives from police brutality, what do EAs do differently?

Consider that EAs spend a lot of money on AI doom research, how do they calculate that this is more effective at saving lives than malaria nets?

I think that realistically speaking, they don't. Mathematical arguments that attempt to make predictions about AI risk cannot avoid running into the "garbage in, garbage out" principle. The data is simply not there to make any good predictions on this topic. And there is no way to obtain the data without clairvoyance.

I do believe, however, that effective altruists are probably much more rational when they address better-understood topics, such as comparing the characteristics of various diseases.

In general all non-trivial long-term utilitarian arguments about human society are nonsense. For example, you might save 10000 people from dying in a flood next year, but then one of them turns out to be the next Hitler and kills 10000000 people.

I couldn't agree more. Bay Area Rationalists remind me of this meme when it comes to naming things: https://img.ifunny.co/images/e5402c3312546aa012fd661f686af8fd58cb8158d29db92ac5f0f1e3617bfa12_1.webp

The whole name Rationalists rubs me the wrong way for a variety of reasons. It comes off to me like they think they have a monopoly on Rationality, and reading LessWrong hasn't changed my mind. Almost everyone thinks they are being rational.

I remember some LessWrongers, back in the day, who preferred “aspiring rationalists.” Unfortunately, “aspies” was taken.

Either way, I can’t agree that picking a name equates to claiming a monopoly. The Stoics didn’t think that only they could be stoic, but that they were going to make important decisions according to a certain set of principles.