
Culture War Roundup for the week of December 4, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Scott Alexander has recently argued in favor of Effective Altruism after the latest scandal, in which effective altruists tried to oust Sam Altman from OpenAI.

His argument starts by noting how different factions attack EA from contradictory perspectives: those on the right call them woke, while those on the left call them fascists and white supremacists. The implication seems to be that they will be attacked by all sides no matter what, so we shouldn't take such criticisms seriously. He then mostly focuses on an estimated 200,000 lives saved in the developing world.

My problem with this is that it obscures something that isn't a mystery: EA's politics align much more with the Democratic establishment than with the right, and there isn't any substantial confrontation of what that means.

According to my brief research, and claims I found in a 2022 Effective Altruism forum discussion in which he participated, the biggest donor to Effective Altruism is Asana CEO Dustin Moskovitz.

His company, Asana, contributed $45 million in the 2020 election, and he also contributed millions more to the Future Forward PAC:

https://www.opensecrets.org/2020-presidential-race/joe-biden/contributors?id=N00001669
https://www.opensecrets.org/news/2020/10/pro-biden-super-pac-darkmon/
https://www.cnbc.com/2020/11/02/tech-billionaire-2020-election-donations-final-tally.html
https://bluetent.us/articles/campaigns-elections/dustin-moskovitz-cari-tuna-democratic-donor-2020/

If one looks at Open Philanthropy or the EA forum and searches for controversial cultural issues, there is occasionally some small dissent, but for the most part they follow the liberal party line.

Let's look at Open Philanthropy, an EA organization and Dustin Moskovitz's organization. Scott certainly wants to give EA and Open Philanthropy credit for promoting YIMBY.

However, this organization has also funded decriminalization and pro-migration policies:

https://www.openphilanthropy.org/focus/criminal-justice-reform/
https://www.openphilanthropy.org/focus/immigration-policy/

I wonder if the well-funded migrant caravans we see in some parts of the world have, to some extent, something to do with EA-related funding.

Recently there was a mini EA scandal in which one individual had expressed HBD views in the past; this was made into an issue, and he was condemned by many in the movement, though not entirely unanimously. https://forum.effectivealtruism.org/posts/8zLwD862MRGZTzs8k/a-personal-response-to-nick-bostrom-s-apology-for-an-old

https://forum.effectivealtruism.org/posts/kuqgJDPF6nfscSZsZ/thread-for-discussing-bostrom-s-email-and-apology

Basically, this individual wrote an email 26 years ago that used naughty language to make the point that you should use less offensive language when arguing for race realism.

Then he apologized due to pressure and argued:

What are my actual views? I do think that provocative communication styles have a place—but not like this! I also think that it is deeply unfair that unequal access to education, nutrients, and basic healthcare leads to inequality in social outcomes, including sometimes disparities in skills and cognitive capacity. This is a huge moral travesty that we should not paper over or downplay. Much of my personal charitable giving over the years has gone to fighting exactly this problem: I’ve given many thousands of pounds to organizations including to the SCI Foundation, GiveDirectly, the Black Health Alliance, the Iodine Global Network, BasicNeeds, and the Christian Blind Mission.

Then there is OpenAI and ChatGPT; effective altruists have been influential in OpenAI, and ChatGPT has a liberal bias. https://www.foxnews.com/media/chatgpt-faces-mounting-accusations-woke-liberal-bias

Another thing to observe is the demographics of effective altruists.

Only 0.9% are right-wing and 2.5% center-right, while the majority are on the left: 40% center-left and 32% identifying as left. But that is self-identification, just as Biden could be identified by some as center-left and by others, including myself, as far left. They are also 46% vegan, and 85.9% are atheists.

https://rethinkpriorities.org/publications/eas2019-community-demographics-characteristics

I haven't encountered any group with such a small representation of right-wingers that is actually fair, when promoting a political agenda, towards either the right wing or groups seen as associated with the right. Yet if you search their forum, effective altruists are much more concerned about the lack of racial and ethnic diversity than about the lack of ideological diversity.

Climate change and veganism are two issues that could well lead to hardcore authoritarian policies and restrictions. Considering the demographics of EA, and the fact that Peter Singer is an important figure in it who helped coin the term, I do wonder whether EA influence on those issues would be to impose such policies on us. In the moral framing of animal liberation activists like Singer we see a moral urgency. As with all identity movements, to elevate one group, such as animals, you end up reducing the position of another group, such as humans, or those who aren't vegan.

The issue is that these networks, reinforced through EA, might already have the promotion of their politics as part of their agenda. And these networks, which developed in part through EA and brought like-minded ideologues together to organize, can expand even further to promote their political agenda outside the EA banner.

It does seem that at least a few of the people involved with effective altruism think that it fell victim to its coastal college demographics. https://www.fromthenew.world/p/what-the-hell-happened-to-effective

My other conclusion, related to the OpenAI incident as well, is that these people's conviction that they are the ones who will put humanity first will lead them to oust others and attempt to grab more power in the future too. And when they do so, will they ever give it up?

Scott Alexander himself argued that putting humanity first is the priority, and he had some faith that they were thinking rationally when they tried to oust Sam Altman, even though Altman had invited them in. He might not necessarily agree with their action, but he sympathizes with the motive. https://twitter.com/slatestarcodex/status/1726132072031641853#m

That this action was dishonorable matters because, as with Sam Bankman-Fried, it continues the pattern of important ethical issues being pushed aside under the idea that effective altruists know best.

This means that Sam Altman won't be the last. It also means we have a movement very susceptible to the same problems as authoritarian far-left movements in general: extreme self-confidence in their own vision, and a will to power. Beyond the whole issue of the road to hell being paved with good intentions, this inevitably attracts the power-hungry as well.

There does seem to be an important side of it that is about donating in more unobjectionable ways, but in general effective altruism isn't separate from a political agenda that fits a political tribe. That agenda should be judged on its own merits, without the 200,000 lives saved in the developing world being accepted as an adequate answer for policies that affect the developed world. The short version of all this: if you have a problem with leftist/far-left NGOs, you should consider the effective altruism movement and some of its key players to be contributing in the same direction.

One thing that's always bugged me about progressivism, and especially EA, is that despite all their claims of being empathetic and humanistic, they completely ignore the human. They are, ironically, the paperclip maximizers of philanthropy.

The argument is that despite some of the questionable things EA has been caught up in lately, they've saved 200 thousand lives! But did they save good lives? What have they saved, really? More mouths to feed? Doctors and lawyers? Someone who cares about humanity would want to ask these questions. A paperclip maximizer that discounts a person's humanity entirely, and just sees each life as some widget whose count is to be maximized, would not.

The purpose of empathy is to be able to put yourself in someone else's shoes, to understand their feelings. Except, to do that, you have to have some level of understanding of how they function, some mental model of their mind. Else you are simply projecting. It's easy to just imagine what you'd feel like if you were in Palestine or Israel, etc. Except that isn't empathy. Even just listening to what a person says isn't truly empathy. If I were an alcoholic and said I wanted a drink, giving me one might seem a nice thing to do to someone with no knowledge of me, but clearly it would not be. I'm not sure what it even means to have empathy for someone you don't know. I'm not sure it's possible. What is it really that you are feeling? Do you believe people are all the same, with the same wants? The same needs? The same values? It's such a dim view of people and of the world.

I suppose some people do. "We're all human" is something you'll hear espoused by this ideology, but that is literally the least you can have in common with another person. Trying to apply it to any other human interaction is instantly ridiculous: you wouldn't apply that logic anywhere else in life. You don't hire someone just because they're human; you don't befriend someone, care about someone, or hate someone for it. It's basically an open admission that you have nothing convincing to say. Even someone forced to compliment their worst enemy would manage to ad-lib something more convincing than "he's human."

Anyone who has had relationships with other humans, so basically everyone, knows how complicated it is to actually know someone. You can have spent years living with a partner and still be completely caught off guard when your mental model goes awry and your attempt at empathy falls completely flat. The idea that some ideological group is more moral or more caring because of the sheer number of lives they've saved discredits and belittles one of the pillars of being human: getting to know each other, socializing, learning friend from foe. It discounts humanity itself by denying that it's even necessary to know or understand someone before you can help them. Your wants and needs don't matter; you are a widget; you need x calories and y oxygen to continue existing, and I will supply these needs. Such altruism. Wow.

Looking around at social media and world events, I can't help but wonder if this is some major glitch in human psychology in the digital age. Too many strangers, too much opportunity for "selflessness." So many people caught up in an empty and self-serving empathy that has no imagination for others. Meanwhile, people who have normal empathy are dismissed because they aren't as "selfless" as the newer movements. Spending time with and focusing on people who share your values isn't altruistic, because if they share your values then you are less selfless than the progressive who cares about the stranger. (Not to mention the Bay Area tech bro who managed to save 0.0345 persons per dollar spent, blowing away the nearest tech bro competitor, who only saved 0.0321.)

This logic seems mad, though: taken to its extreme, the most altruistic move would be to help someone who shares none of your values, and since altruism is a core value, you should be exclusively helping the least altruistic of people, as that is the most selfless thing you could do. Of course this is obviously ridiculous and self-defeating (like the LGBT groups supporting Hamas).

More cynically, I think this sort of caring is just a way to whitewash your past wrongs. It's PR-maximizing: spend x dollars and get the biggest number you can put next to your shady Bay Area tech movement, which is increasingly under society's microscope given the immense power things like social networks and AI give your group. If you really want to help others, you need to understand them, and that means spending time with people, not with concepts. If you're lucky, you might eventually find a few people you understand well enough that, more often than not, your actions are positive and beneficial to them. Congratulations, you have now invented the family and the traditional community.

One thing that's always bugged me about progressivism, and especially EA, is that despite all their claims of being empathetic and humanistic, they completely ignore the human. They are, ironically, the paperclip maximizers of philanthropy.

Once again, for those who might just be joining us: utilitarianism is an inhuman (and dare I say it, Evil) ideology that is fundamentally incompatible with human flourishing. Utilitarians deciding to ignore the human cost of a policy in order to maximize some abstract value, be it "utility" or "paperclips," is not ironic, unfortunate, or unintentional. It is by design.

"Effective altruism" has never been about altruism.

I will admit I consider myself a 'skeptical utilitarian' (I made this term up, or, if I didn't, I am unfamiliar with the other usage) in that I have utilitarian leanings in terms of how to reason about morality, but reject unpalatably extreme extrapolations thereof on 'Eulering' and 'epistemic learned helplessness' grounds. Still, I have always found casual swipes at utilitarianism of the form 'see, it actually leads to bad things' to be weak. Clearly the goal is to lead to good things, broadly, and if it seems to lead to a bad thing, then that probably means you should try again and fully consider the externalities, etc. I don't see a good reason why 'utility' can't be a proxy measure for human flourishing, and I would personally prefer a form of utilitarianism organized in just such a way.

Clearly the goal is to lead to good things, broadly, and if it seems to lead to a bad thing, then that probably means you should try again and fully consider the externalities, etc.

I can declare that the "goal" of a live grenade is to be delicious candy for children, but that won't make it so. The argument against Utilitarianism is 1) that it can't actually do what it aims to do, because "utilitarian calculus" is about as valid as "turnip plus potato equals squash", and 2) that when it inevitably fails, it tends to fail very, very badly.

"Fully considering the externalities" is straightforwardly impossible, the output it generates is unfalsifiable, and it is tailor-made to justify one's own biases.

I don't see a good reason why 'utility' can't be a proxy measure for human flourishing

Because "utility" can't be rigorously measured, quantified, or verified in any way, even theoretically, and the whole system is built on the premise that it can be.

I should have known better than to comment on this topic here; I am not very rigorous or deep in my metaphysical beliefs.

Let me try and clarify my internal view, and if you have the time, you can explain what I am doing wrong.

So: I view my own morality and the morality of my society through a largely consequentialist lens, understanding that my ability to fully understand consequences decays rapidly with time and is never perfect. I view morality as a changing thing that adapts and morphs with new technology, both social and physical. I find the 'concept' of 'utilitarianism' a useful jumping-off point for thinking about morality. Obviously this interacts with my own biases; honestly, I am not really sure what it would even mean for a person to think about something and not have that problem. I do not view 'utilitarianism' as a solved, or even solvable, problem, but rather as a never-ending corrective process.

For example, I am not currently vegan or vegetarian, but I also do not like animal suffering, and I think a lot about this disconnect. Ideally I would like a world that allows me to enjoy all the perks of animal husbandry while reducing as much animal suffering as possible. I think the effort to reduce suffering in factory farming reflects a 'utilitarian' effort, but that does not mean I would agree with every particular outcome those intuitions suggest. If, for example, reducing animal suffering made it impossible for a lot of people to afford meat or eggs, then that also seems bad, and is another part of the problem to keep working on and striving for solutions to.

My biases manifest in a number of ways. For example, I lean towards observational data in terms of what a better or worse world would look like: if a particular religion espoused the idea that animals enjoy animal husbandry, and/or can only go to heaven if eaten by humans, I would not factor that into my considerations. I also tend to think suffering is bad and that happiness and fulfilment/satisfaction are good, etc.

I guess I view 'morality' as a system or framework that I use to try and evaluate my own actions and the actions of others. I am reliant on the persuasiveness of my arguments in favor of my preferred outcomes to drive other people (and sometimes myself) to respect or adopt a 'morality' similar to my ideals.

Well said.

For what it's worth, I largely agree. To be more blunt than you: I'm both a moral relativist and a moral chauvinist. I make no claim that my sense of morality is objective, and go so far as to say that there's no such thing, and not a single good reason to imagine there can be; morality cannot be disentangled from the structure of an individual observer and forced to fit all others. The closest you can get is the evolutionarily/game-theoretically conserved core, such as a preference for fairness and so on, which can be seen in just about any animal smart enough to hold those concepts. That's still not "objective". None of this stops me from thinking that mine is superior and ought to be promulgated. It's sometimes tempting to claim otherwise, but I largely refrain from doing so. And I don't deny the right of others to make such a claim about theirs, to the extent that I approve of free speech.

Of course, I personally find that I can decompose my value judgements and derive simpler underlying laws/heuristics that explain them, heuristics which often generalize to new and complicated situations; I'm lucky enough to have yet to find a case I can't resolve in that manner. I can tell that I have principles rather than a lookup table because they often involve grudgingly accepting things I dislike, since doing otherwise would conflict with more fundamental principles that I prefer to hold over mere dislike. That's why I'm OK with people I despise speaking, after all, leaving aside that I have no way to stop them.

As for animal welfare, I simply do not care. It's a fundamental values difference. I don't get anything out of torturing or killing subhuman animals, but I also have nothing against those who do, except to the extent that cultural pressures imply that those who shirk them have other things wrong with them, like psychopathy. As discussed in an older comment, at one point in time most people enjoyed watching dog fights or throwing rocks at cats; there was little or nothing in the act itself that was inherently psychopathic in terms of harming others.

To illustrate, imagine a society that declares shaving one's head to be a clear sign of Nazi affiliation. There are plenty of normal people who have some level of desire to do so, be it for stylistic preferences or because they're balding. But since such an urge is overpowered by a desire not to be mistakenly labeled as a Nazi, they refrain, while actual Nazis don't.

Congratulations: you have managed to establish that shaving one's head is 99% sensitive and specific for National Socialist tendencies.
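
To put rough numbers on that illustration (every figure below is made up for the sake of the sketch: the 0.1% Nazi base rate, the 30% of people who would shave absent the stigma, and the fraction of them the label deters), the refraining dynamic is exactly what manufactures the high sensitivity and specificity, even though, by Bayes' rule, a shaved head would still be weak evidence in absolute terms:

    # Hypothetical numbers only: how the stigma dynamic manufactures a
    # "99% sensitive and specific" signal.
    p_shave_given_nazi = 0.99     # assume Nazis shave as an affiliation marker
    p_desire = 0.30               # non-Nazis who would shave absent the stigma
    p_deterred = 29 / 30          # fraction of those deterred by the label
    p_shave_given_non_nazi = p_desire * (1 - p_deterred)  # = 0.01

    sensitivity = p_shave_given_nazi            # 0.99
    specificity = 1 - p_shave_given_non_nazi    # 0.99

    # With a 0.1% base rate of actual Nazis, Bayes' rule still gives a
    # shaved head only ~9% posterior probability of Nazi affiliation.
    base_rate = 0.001
    ppv = (sensitivity * base_rate) / (
        sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
    )
    print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
    print(f"P(Nazi | shaved head) = {ppv:.3f}")  # ~0.090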

You can see this kind of social dynamic and purity spiraling all over the place, and I think animal welfare is one example; so is not calling people fags or retarded.

I do not value the elimination of factory farming for its own sake, or for the sake of animals, but I will happily accept something like vegetarian meat or, better yet, lab-grown meat over it, if and only if it's superior to factory-farmed or slaughtered meat in terms of taste or price, ideally both. That's what it means to be truly neutral between them.