
Culture War Roundup for the week of December 4, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Scott Alexander has recently argued in favor of Effective Altruism, following the latest scandal of effective altruists trying to oust Sam Altman from OpenAI.

His argument starts by focusing on how different factions attack EA from contradictory perspectives: those on the right call them woke, while those on the left call them fascists and white supremacists. The point seems to be that they will be attacked by all sides no matter what, so we shouldn't take such criticisms seriously. He then mostly focuses on an estimated 200,000 lives saved in the developing world.

My problem with this is that it obscures something that isn't a mystery: EA's politics align much more closely with the Democratic establishment than with the right, and there is no substantial confrontation of what that means.

The biggest donor to Effective Altruism, according to my brief research and claims I found on the Effective Altruism forum from 2022, in a discussion he himself participated in, is Asana CEO Dustin Moskovitz.

His company, Asana, contributed $45 million in the 2020 election, and he also contributed millions to the Future Forward PAC:

https://www.opensecrets.org/2020-presidential-race/joe-biden/contributors?id=N00001669
https://www.opensecrets.org/news/2020/10/pro-biden-super-pac-darkmon/
https://www.cnbc.com/2020/11/02/tech-billionaire-2020-election-donations-final-tally.html
https://bluetent.us/articles/campaigns-elections/dustin-moskovitz-cari-tuna-democratic-donor-2020/

If one looks at Open Philanthropy or the EA forum and searches for controversial cultural issues, there is occasionally some small dissent, but for the most part they follow the liberal party line.

Let's look at Open Philanthropy, an EA organization and Dustin Moskovitz's organization. Scott certainly wants to give EA and Open Philanthropy credit for promoting YIMBY.

However, this organization has also funded decriminalization and pro-migration policies:

https://www.openphilanthropy.org/focus/criminal-justice-reform/
https://www.openphilanthropy.org/focus/immigration-policy/

I wonder if the well-funded caravans of migrants we see in some areas of the world are, to some extent, connected to EA-related funding.

Recently there was a mini EA scandal in which one individual had expressed HBD views in the past; this was made into a thing, and he was condemned by many in the movement, though not unanimously. https://forum.effectivealtruism.org/posts/8zLwD862MRGZTzs8k/a-personal-response-to-nick-bostrom-s-apology-for-an-old

https://forum.effectivealtruism.org/posts/kuqgJDPF6nfscSZsZ/thread-for-discussing-bostrom-s-email-and-apology

Basically, this individual wrote an email 26 years ago that used naughty language to make the point that you should use less offensive language when arguing for race realism.

Then he apologized due to pressure and argued:

What are my actual views? I do think that provocative communication styles have a place—but not like this! I also think that it is deeply unfair that unequal access to education, nutrients, and basic healthcare leads to inequality in social outcomes, including sometimes disparities in skills and cognitive capacity. This is a huge moral travesty that we should not paper over or downplay. Much of my personal charitable giving over the years has gone to fighting exactly this problem: I’ve given many thousands of pounds to organizations including to the SCI Foundation, GiveDirectly, the Black Health Alliance, the Iodine Global Network, BasicNeeds, and the Christian Blind Mission.

Then there is OpenAI and ChatGPT: effective altruists have been influential at OpenAI, and ChatGPT has a liberal bias. https://www.foxnews.com/media/chatgpt-faces-mounting-accusations-woke-liberal-bias

Another thing to observe is the demographics of effective altruists.

Only 0.9% identify as right wing and 2.5% as center right, while the majority are on the left: 40% identify as center left and 32% as left. But that is self-identification, just as Biden could be identified by some as center left and by others, including myself, as far left. They are also 46% vegan, and 85.9% are atheists.

https://rethinkpriorities.org/publications/eas2019-community-demographics-characteristics

I haven't encountered any group with such a small representation of right-wingers that is actually fair when promoting a political agenda toward the right wing, or toward groups seen as aligned with the right. Yet when you search their forum, effective altruists are much more concerned about the lack of racial and ethnic diversity than about ideological diversity.

Climate change and veganism are two issues that could well lead to hardcore authoritarian policies and restrictions. Considering the demographics of EA, and the fact that Peter Singer is an important figure in it who helped coin the term, I do wonder whether EA's influence on these issues would be to impose such policies on us. In the moral framing of animal liberation activists like Singer, we see a moral urgency. As with all identity movements, elevating one group, such as animals, ends up reducing the position of another group, such as humans, or those who aren't vegan.

The issue is that the networks reinforced through EA might already have the promotion of their politics as part of their agenda. And these networks, which developed in part due to EA and brought like-minded ideologues together to organize, can expand even further to promote that political agenda outside the EA banner.

It does seem that at least a few of the people involved with effective altruism think that it fell victim to its coastal college demographics. https://www.fromthenew.world/p/what-the-hell-happened-to-effective

My other conclusion, related to the OpenAI incident as well, is that these people's conviction that they are the ones who will put humanity first will lead them to oust others and attempt to grab more power in the future too. When they do, will they ever give it up?

Scott Alexander himself argued that putting humanity first is the priority, and he had some faith that they were thinking rationally when they tried to oust Sam Altman, even though Altman had invited them inside. He might not necessarily agree with their action, but he sympathizes with the motive. https://twitter.com/slatestarcodex/status/1726132072031641853#m

That this action is dishonorable matters because, as with Sam Bankman-Fried, it continues the pattern of important ethical issues being pushed aside under the idea that effective altruists know best.

This means that Sam Altman won't be the first. It also means that we have a movement very susceptible to the same problems as authoritarian far-left movements in general: extreme self-confidence in their own vision, and a will to power. This, in addition to the whole issue of the road to hell being paved with good intentions, inevitably encourages the power hungry to be part of it as well.

There does seem to be an important side of EA concerned with donating in more unobjectionable ways, but in general, effective altruism isn't separate from a political agenda that fits a political tribe. That agenda should be judged on its own merits, without the 200,000 lives saved in the developing world being accepted as an adequate answer for policies that affect the developed world. The short version of all this: if you have a problem with leftist/far-left NGOs, you should consider the effective altruism movement and some of its key players to be contributing in the same direction.

I wonder if the well-funded caravans of migrants we see in some areas of the world are, to some extent, connected to EA-related funding.

I wonder if your wondering is done in good faith 🤔

Then there is OpenAI and ChatGPT: effective altruists have been influential at OpenAI, and ChatGPT has a liberal bias. https://www.foxnews.com/media/chatgpt-faces-mounting-accusations-woke-liberal-bias

I think extremely few people (maybe even no one) are pursuing liberal bias in LLMs for EA reasons.

Climate change and veganism are two issues that could well lead to hardcore authoritarian policies and restrictions.

Since when has a group representing 3% of the population (vegans) taken enough power to implement "hardcore authoritarian policies and restrictions"?

As with all identity movements, elevating one group, such as animals, ends up reducing the position of another group, such as humans

Only for unhealthy minds, I think? Whether freeing slaves "reduced" the position of non-slaves is a question without an objective answer - only psychological interpretations. For instance, many Indians never eat meat and would tell you they don't feel "reduced" by this.

It does seem that at least a few of the people involved with effective altruism think that it fell victim to its coastal college demographics

That post is just describing regression to the mean, which every informal group encounters. Nothing unique to EA here.

My other conclusion, related to the OpenAI incident as well, is that these people's conviction that they are the ones who will put humanity first will lead them to oust others and attempt to grab more power in the future too. When they do, will they ever give it up?

The same could be asked about any group with any large goal: companies, nonprofits, religious organizations. Nothing unique to EA here.

That this action is dishonorable matters

How do we know it is dishonorable?

This means that Sam Altman won't be the first.

won't be the last?

It also means that we have a movement very susceptible to the same problems as authoritarian far-left movements in general: extreme self-confidence in their own vision, and a will to power.

Do you have evidence that EAs suffer from "extreme self-confidence"?

This... encourages the power hungry to be part of it as well.

Again, this isn't unique to EA. Any group with money/power attracts the power hungry. What's your point?

I wonder if your wondering is done in good faith 🤔

What does that mean? That you don't think it's true? That you think it's true but it's inconvenient for someone to point it out? Please be specific.

Do you have evidence that EAs suffer from "extreme self-confidence"?

Have you heard of a guy called "Sam Bankman-Fried"? He was in the news a little bit lately.

What does that mean?... Please be specific.

This is exactly what I wanted him to do, but I was being snarky about it. I'd thank you for being kinder, but...

Have you heard of a guy called "Sam Bankman-Fried"? He was in the news a little bit lately.

A single guy in finance being overconfident is pretty minimal evidence that EAs, as a group and as a constellation of organizations, suffer from "extreme self-confidence".

It's not just "a single guy in finance"; it's a whole mess of people falling into the exact failure mode that critics of their approach predicted they would, and then trying to argue that said failure shouldn't be taken as evidence that their critics may have had a point.

I guess it depends what you mean by EA.

I haven't seen evidence that the core EA organizations (GiveWell, Evidence Action, 80,000 Hours, etc.) suffer from "extreme overconfidence". As far as I can tell, all are large net positives for the world that don't really engage in huge risk-taking behavior - if anything, I think the opposite is true (e.g. Evidence Action shutting down one of its programs because it was less effective than the others).

The median EA survey-taker donates to global health charities and not anything weird, which seems pretty much the opposite of "extreme overconfidence". You could argue the weirder causes (animal welfare, AI, etc.) display "extreme overconfidence", but I think most of those donors would say they're donating not because they're extremely convinced that, say, AI will destroy us all and only MIRI can stop it - I suspect most are donating because they think there is at least a small chance (~5%) this is true and it's worth taking seriously. It doesn't seem like "extreme overconfidence" is the mistake being made here. [Edit: or if there is a mistake at all; I remain agnostic]

So, I don't really see why SBF should be considered a central example of an effective altruist.

a whole mess of people falling into the exact failure mode that critics of their approach predicted

What do you mean?