Culture War Roundup for the week of March 4, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

In a large thread about HBD, this comment https://www.themotte.org/post/877/culture-war-roundup-for-the-week/187753?context=8#context went unanswered. @OliveTapenade @guesswho @HlynkaCG @Questionmark

Since IQ scores are usually unavailable in large genetic databases, the largest studies use "educational attainment" (EA, a variable measured in years of schooling for an individual's highest degree). Davide Piffer took a polygenic index (PGI) derived from European populations, computed its values for several other populations, and plotted them against those populations' phenotypic IQs.

Of course, applying a polygenic index to a population other than the one it was derived from is problematic. But it is problematic in the sense that it is inaccurate for estimating an individual from that other population (mostly because of linkage disequilibrium); here, however, we are interested in the relative ranking of population averages, and those align fairly well. If using the PGI to rank other populations' averages were truly invalid, we would predict it to produce roughly zero correlation, with -1 just as likely as +1. We might also have seen bias associated with genetic distance from the reference population, or with relative position on a PCA chart, but we don't. Maybe Piffer cherry-picked results? If so, wouldn't the best response be to redo his analysis and show the proper ~0 correlation?
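The null-hypothesis reasoning above — that an invalid cross-population PGI should yield correlations scattered around zero, negative about as often as positive — can be sketched as a quick Monte Carlo simulation. Everything here is simulated for illustration; none of these numbers are Piffer's actual data, and the population count is an arbitrary choice:

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
n_pops = 20       # hypothetical number of population averages compared
n_trials = 1000   # Monte Carlo repetitions under the null hypothesis

# Null scenario: population-average PGI and phenotypic IQ are drawn
# independently, i.e. the PGI carries no cross-population signal.
null_corrs = []
for _ in range(n_trials):
    pgi = [random.gauss(0, 1) for _ in range(n_pops)]
    iq = [random.gauss(100, 15) for _ in range(n_pops)]
    null_corrs.append(pearson(pgi, iq))

# Under the null, the mean correlation sits near zero and roughly half
# of the trials come out negative.
print(f"mean null correlation: {statistics.mean(null_corrs):+.3f}")
print(f"fraction negative:     {sum(c < 0 for c in null_corrs) / n_trials:.2f}")
```

The point of the sketch is only the shape of the null distribution: a consistently positive observed correlation is the thing this null would not predict.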

Anti-HBD posters, what's your response to this? Why repeat "legacy of slavery" ad nauseam while ignoring what genetics is starting to uncover? (A link covering mostly the same ground: https://www.emilkirkegaard.com/p/sasha-gusev-wont-answer-basic-questions )

Like @The_Nybbler, I am deeply skeptical of educational attainment as a proxy for raw intelligence. If anything, it strikes me as a case of affirming the consequent. The simple truth is that I've met too many 60th-percentile ASVABs who were demonstrably capable of organizing/supervising complex evolutions involving hundreds of people and dozens of moving parts, just as I've met too many post-grads from prestigious institutions whom I wouldn't trust to boil water, to take such claims at face value.

More generally, I will reiterate my take from the previous thread. While Thomas Sowell does not address HBD directly, I find it hard not to read his "vision of the anointed" into pretty much everything HBDers post here. The scales-falling-from-my-eyes moment was when the Wonderlic "race norming" scandal came to light in 2019 and a significant portion of users here defended it. To be clear: the NFL had been collecting Wonderlic scores on players since the late 70s, and what they got caught doing was artificially adjusting the scores of high-performing black players downward to change the racial distribution of disability payouts. On a dime, I saw users who had claimed to support standardized testing flip from "the data obviously supports our conclusion" to "we must manipulate the data to better reflect the truth". This is what might, in another context, be called "saying the quiet part out loud", and it exposes the fact that HBD as it is advocated for here on theMotte, and more generally amongst rationalists, is much more of a normative belief than a descriptive one. An argument over "ought"s rather than "are"s.

Yes, I catch a lot of flak on this forum for maintaining that Utilitarianism is a stupid and evil ideology that is fundamentally incompatible with human flourishing, but I feel that the discourse surrounding HBD is an apt illustration of the problem. Once you've gone on the record in defense of lying or manipulating data to defend your preferred narrative or achieve your preferred policy outcomes, what reason does anyone else have to trust you? Contra The Sequences and Scott Alexander, information does not exist in a vacuum, and arguments do not spring fully formed from the ether. The proles are not stupid. They recognize that the Devil can quote scripture, and that a liar can tell the truth when it suits them. Thus the fundamental question one must always be prepared to ask when evaluating a statement is not whether it is true or false, but "cui bono?"

Who benefits from Id Pol, HBD Awareness, and Intersectionality? Who benefits from the dismantlement of Anglo-American norms about equality of opportunity and equality before the law? I can tell you who does not benefit in any way: those who possess genuine individual merit.

Unrelated to the central topic, but:

Yes, I catch a lot of flak on this forum for maintaining that Utilitarianism is a stupid and evil ideology that is fundamentally incompatible with human flourishing, but I feel that the discourse surrounding HBD is an apt illustration of the problem. Once you've gone on the record in defense of lying or manipulating data to defend your preferred narrative or achieve your preferred policy outcomes, what reason does anyone else have to trust you?

Scott just recently wrote a post about this.

While I recognize that you can construct hypothetical examples where a utilitarian is forced to agree to something unpleasant, or imagine a lazy utilitarian who makes up half-baked arguments for why whatever they want to do is utilitarian-optimal and that's why they get to violate strong heuristics/taboos/norms, I think those are thought experiments far more than they are descriptions of reality.

In reality, followers of other moral systems (or of no coherently named moral system) seem to me to make up lazy rationalizations for why to do whatever they want to do a lot more often than utilitarians, and are a lot easier to force into distasteful hypotheticals to boot.

The fact that actions have long-term consequences like 'all trust and honor across society breaks down' is not separate from utilitarianism, it's a part of the calculation, and that's why most utilitarians I talk to think about that stuff a lot more than most other people I know, and end up sticking to broad heuristics in most real-world cases.

We have noticed the skulls, as it were, and I think other moral systems which don't require you to think carefully and make explicit calculations and use your own best judgement under uncertainty, fail to teach their adherents the same carefulness. In practice, I think utilitarians end up doing better on average - obviously not perfect, but better than average.

It’s not a question of “agree to something unpleasant.” The problem is that because there are no lines that may not be crossed, almost any act can become thinkable given the right set of circumstances. Me killing you to save others is thinkable provided that the others are either more valuable or more numerous.

Ok, but again, I don't actually think that non-utilitarians are better about avoiding 'unthinkable trade-offs'.

Like, some number of Christians or deontologists or virtue ethicists or whatever will in practice, in real life, trade some lives for others, either implicitly through policy or explicitly when faced with the rare real-world situations where that decision comes up.

Like, they don't actually just halt, stop, and catch fire in those situations when they encounter something their morality says is 'unthinkable', they just sort of make a decision, like everyone does, like normal.

And in those types of situations, I would expect utilitarians to mostly make better decisions and better trades, because they're allowed to think about and consider and make plans for those situations before encountering them, and just generally because of the habit of thinking about when and how to make moral tradeoffs.

I don't know if you have a more concrete real-world example you'd like to frame this under. I'm kind of at a loss for thinking of real-world instances besides things like 'risk your platoon to save one wounded soldier', which a. I don't know if that ever actually happens outside movies, b. I don't know what normal people actually do in that situation statistically, and c. I expect utilitarians to have no trouble applying heuristics like 'having faith in your comrades every day is more valuable than protecting the platoon the one time in 20 years this actually comes up' or w/e.

I think a better way to understand the fundamental conflict is to think less in terms of "unthinkable trade-offs" and more in terms of "necessary evils". More pointedly, utilitarianism as it is typically advocated for in rationalist spaces does not seem to handle such scenarios gracefully. Instead of being able to acknowledge that [action] is bad but [action] was also necessary/understandable given the circumstances, it seems to default to a position where [action] was necessary/understandable given the circumstances, ergo [action] cannot be bad and must have actually been good, or at least neutral.

I see Scott's defense of Fauci in that post, and his earlier posts on Kolmogorov Complicity and the Virtue of Silence, as classic examples of the problem. Sure, sometimes betraying the public trust is the rational choice, but by betraying the public trust you have demonstrated yourself to be untrustworthy and can no longer honestly claim to be "the sort of person who cooperates in prisoner's dilemmas", because you aren't; you're a defector.

That's just a semantics question over what "bad" means. You can say "hurting someone in self-defense is always bad, but sometimes it is the best option" or you can say "hurting someone in self-defense is not bad" and you're really saying the same thing.

That's just a semantics question over what "bad" means.

Yes, and at the same time it also illustrates the fundamental problem with utilitarianism, namely that it is the ethical framework that makes it easiest to excuse one's own negative behavior.