Culture War Roundup for the week of September 18, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


In what contexts are accurate prejudice/biases acceptable justification for discrimination?

I want to consider a broad range of groups including both involuntary/innate characteristics such as race, gender, and IQ, as well as more voluntary categories such as religion, political ideology, or even something like being in the fandom for a certain TV show, expressing a preference for a certain type of food, or having bad personal grooming. This is a variable that your answer might depend upon.

Let's suppose that we know with certainty that people in group X have a statistically higher rate of some bad feature Y than the average population, whether that be criminality, laziness, low intelligence, or just being unpleasant to be around. I'm taking the accuracy of this as an axiom. The actual proportion of people in group X with feature Y is objectively (and known to you) higher than average, but is not universal. That is, Y is a mostly discrete feature, and we have 0 < p < q < 1, where p is the probability that a randomly sampled member of the public has Y, and q is the probability that a randomly sampled member of X has Y. Let's leave the causation as another variable here: maybe membership in X increases the probability of Y occurring, maybe Y increases the probability of joining X (in the case of voluntary membership), maybe some cofactor causes both. This may be important, as it determines whether discouraging people from being in group X (if voluntary) will actually decrease the prevalence of Y or whether it will just move some Ys into the "not X" category.
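To make the setup concrete, here is a small sketch with entirely made-up numbers: p and q are as defined above, and x (the population share of group X) is an extra assumption added purely for illustration. It also shows the reverse conditionals that Bayes' rule extracts from the same inputs.

```python
# Hypothetical numbers, purely illustrative.
p = 0.02   # P(Y): rate of Y among the general public
q = 0.10   # P(Y | X): rate of Y among members of group X
x = 0.05   # assumed population share of group X

# Even with q five times p, most members of X still lack feature Y:
print(f"P(not Y | X) = {1 - q:.2f}")   # 0.90

# Law of total probability gives the rate of Y among non-X members,
# and Bayes' rule gives how much of the Y-having population is in X.
p_y_given_not_x = (p - q * x) / (1 - x)
p_x_given_y = q * x / p
print(f"P(Y | not X) = {p_y_given_not_x:.4f}")   # 0.0158
print(f"P(X | Y)     = {p_x_given_y:.2f}")       # 0.25
```

Note that even with q well above p, the large majority of X still lacks Y, which is the whole tension in using X as a proxy.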

Another variable I'll leave general is how easy it is to determine Y directly. Maybe it's simple: if you're interacting with someone in person, you can probably quickly tell they're a jerk without needing to know their membership in Super Jerk Club. Or maybe it's hard: you're considering job applications and you only know a couple of reported facts, which include X but not Y, and you have no way to learn Y directly without hiring them first.

When is it okay to discriminate against people in group X? The far-right position is probably "always" while the far-left position would be "never", but I suspect most people fall somewhere in the middle. Few people would say it's okay to refuse to hire brown-haired people if it were discovered that they were 0.1% more likely to develop cancer and thus leave on disability. And few people would say it's not okay to discriminate against hiring convicted child rapists as elementary school teachers on the basis that they're a higher risk than the average person. (If you are such a person, though, feel free to speak up and explain your position.)

So for the most part our variables are:

  • Group membership voluntariness

  • Feature Y's severity and relevance to the situation

  • The situation itself (befriending, hiring, electing to office)

  • Ease of determining feature Y without using X as a proxy

  • Causality of X to Y

Personally, I'm somewhere between the classically liberal "it's okay to discriminate against voluntary group membership but not involuntary group membership" and the utilitarian "it's okay to discriminate iff the total net benefit of the sorting mechanism is higher than the total cost of the discrimination against group members, taking into account that such discrimination may be widespread", despite the latter being computationally intractable in practice and requiring a bunch of heuristics that let bias into the mix. I'm not satisfied with the classically liberal position alone, because if there were some sufficiently strong counterexample, such as someone with a genetic strain that made them 100x more likely to be a pedophile, I think I'd be okay with refusing child care positions to all such people even if they had never shown any other risk factors. But if there were a similar strain that made them 10% more likely, I don't think it would be fair to do this, because the base rate is so low that a 10% increase doesn't do much to offset the cost of the discrimination. The utilitarian position also allows stricter scrutiny for more serious things like job applications (which have a huge cost if X is systematically discriminated against) versus personal friendships (if people refuse to befriend X because they don't like Y, those people can more easily make different friends or befriend each other, so the systemic cost is lower).
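The 100x-versus-10% intuition can be put into a toy expected-cost form. Every number and the cost model here are invented for illustration; they are not a claim about real magnitudes.

```python
# Toy model: compare the extra expected harm from hiring a member of the
# marked group (relative to an average replacement hire) against the cost
# the exclusion imposes on them. All values are arbitrary cost units.
base_rate = 0.001          # hypothetical background rate of the harm
harm_if_realized = 500.0   # cost if the harm actually occurs
cost_of_exclusion = 1.0    # cost of discriminating against one person

def net_benefit(multiplier):
    """Expected harm avoided by exclusion, minus the exclusion's cost."""
    extra_risk = base_rate * (multiplier - 1)
    return extra_risk * harm_if_realized - cost_of_exclusion

print(net_benefit(100))   # strongly positive: exclusion pencils out
print(net_benefit(1.1))   # negative: a 10% bump doesn't offset the cost
```

Under this (arbitrary) parameterization the 100x multiplier clears the bar easily while the 1.1x multiplier does not, matching the intuition above; of course the real difficulty is that none of these quantities are measurable in practice.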

But I'd love to hear more thoughts and perspectives, especially with reasoning for why different cases are and are not justified under your philosophical/moral framework.

I am not a big fan of proxies when it comes to important matters. We should aim to get as close to the underlying variable as possible.

Got a black candidate and a white candidate for a job as a physics researcher? In my world, whoever scores higher on the IQ test gets it.

Not hiring child molesters at a preschool is less about competence or safety and more about optics. Let's say you had a device that could read minds, attached to one's wrist, which delivers a deadly electric shock just before one is about to sexually abuse a child. I would bet most parents would still rather this guy not be near their children even with the device on, and even if it means his life depends on doing an excellent job.


Will we ever reach the perfect set of proxies? No. But if they are used the way Bayes intended, I am fine with them. In practice they are used as hard and dumb cutoffs.

Oh, you have 5 years of experience instead of the 5.1 we are asking for? Too bad.
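The contrast between "the way Bayes intended" and a "hard and dumb cutoff" can be sketched like this; the weights, thresholds, and scoring function are arbitrary stand-ins, not a real hiring model.

```python
def hard_cutoff(years_experience, required=5.1):
    """A hard, dumb cutoff: 5.0 years fails a 5.1-year requirement outright,
    regardless of everything else about the candidate."""
    return years_experience >= required

def weighted_score(years_experience, test_score):
    """Treat each proxy as weak evidence and fold it into one continuous
    score instead of a pass/fail gate. Weights are arbitrary."""
    experience_signal = min(years_experience / 5.0, 1.0)  # diminishing returns
    return 0.4 * experience_signal + 0.6 * test_score

print(hard_cutoff(5.0))   # False: rejected over 0.1 years of experience
# Under the weighted scheme, a strong test score can outweigh a small
# experience gap instead of being discarded at the gate:
print(weighted_score(5.0, 0.9) > weighted_score(6.0, 0.5))   # True
```

The point isn't this particular formula; it's that a proxy used as evidence degrades gracefully, while a proxy used as a cutoff throws away everything else you know about the person.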

The problem with this view, which is at the heart of the modern rational world, is that the energy expended to figure things out in totality isn't always worth expending.

Your brain doesn't need to understand the shape of the environment at the atomic level to manipulate it; you've developed the concept of objects, which is a useful, wrong simplification of it.

Newtonian physics has been falsified, yet we use it daily in engineering.

The truth is that ALL criteria are proxies. The true nature of the world is unknowable, and all decisions are made using models based on experience.

And quality models are energy efficient for their uses.

This actually gets at how I personally define “IQ”, which is the physical level of detail at which a being understands reality. In this sense, there is a theoretical — and arguably practical — upper bound on intelligence, where a being understands all of reality across all time — that is, the position of every atom in the universe across every temporal dimension.

I have no doubt that in the near future we will have AIs which achieve a significant portion of this — the ability to perceive and manipulate reality at the atomic level across a substantial — say, galaxy-sized — slice of the universe. In practice these would be indistinguishable from magic to someone alive today. You could just say “get me a beer” to the AI, and it would instantly assemble atoms into a glass (ice cold, with frost on it) filled with atoms assembled as freshly brewed beer.

Looks like you're mixing so much perception, memory, reach, and ability to manipulate into "intelligence" that your personal definition of "IQ" is going to differ massively from the colloquial one.

This is like the gripe I have with the Yud-esque AI doomers, whose claim about intelligence amounts to "if more, then magic."