Culture War Roundup for the week of January 29, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

In between blogging about fursuit collections, former motte moderator TracingWoodgrains has started to blow up on twitter after wading into an ongoing feud between Steve Sailer and propagandist Will Stancil.
Something in the replies must have really upset him (possibly interactions with a number of replyguys making not-so-veiled threats about what happens to people who associate with bigots or question "lying for the pursuit of good aims"), because he suddenly got really invested in proving that the recent FAA DEI scandal is real.

After giving up on conservative journalists and deciding to do the legwork himself, he's now posting PACER documents from the recent FAA lawsuit, proving that the FAA HR department sent black applicants a list of resume buzzwords that would get their applications fast-tracked, via the National Black Coalition of Federal Aviation Employees.

A few hours ago this got the attention of Elon Musk, and Tracing is promising a follow-up, somehow trying to juggle 1L coursework with doing more investigative journalism than the entire conservative media put together. Obviously one of these things takes more time than the other, but I'm sure he'll have a coffee break free for the journalism bit.

One reason I think this could be important is that it's going to paint a huge target on Tracing's back. Propagandists have been claiming that the FAA DEI story was fake, that the test designed to favor black applicants never existed, etc. They're going to get very angry at this evidence becoming widely known, and Tracing is in a unique position to spread it outside the right-wing news ghetto that prevents most liberals from ever encountering facts like these.
I'm not saying it's certain they're going to go after his law school, but he's in a uniquely vulnerable position right now, with very few allies in a position to help him (and probably a number who will suddenly decide he's on the enemy side of the friend-enemy distinction). So if anyone is in a position to help if he needs it, maybe start reaching out early.

Unfortunately, all of this is getting difficult to follow without a twitter account (I even have one, but they're not letting me log in right now for no apparent reason), and it's going to get even harder as Nitter instances die off. If anyone has a reliable account and would be willing to make screenshots, I'd love it if you could take over covering the story as it develops.

Edit: his effortpost is now out on twitter and at his blog. I'll copy it into a reply below in case the nitter instance goes down again.

Just a note, this has obvious parallels to colleges letting DEI departments screen out 80% of applicants before any objective hiring process begins:

they recommended using a biographical test first to "maximiz[e] diversity," eliminating the vast majority of candidates prior to any cognitive test.

It's a very effective method of manipulating procedural outcomes, isn't it?

When the NY Times starts investigating this page and wants to interview me as the one sympathetic-to-their-audience 'progressive' venturing into the lion's den, I promise to tell them y'all are just misguided victims of radicalizing social media algorithms. Probably the best I can do.

What's with the concept 'radicalizing social media algorithms'? This is like that idea that 'Putin hacked the elections'... by putting ads on facebook or something? How can people simultaneously defend democracy and believe that the average person is the cognitive equivalent of a fast food public wifi network?

If democracy is so great, why do we need to ban doctors from posting their opinions online? Why do we need to prevent people from taking horse deworming medication but make sure they get to vote?

If AI is racist, and if Silicon Valley companies, the most powerful, data-driven, progressive companies ever, still can't seem to meet their DEI quotas, then perhaps they have a point?

How can people simultaneously defend democracy and believe that the average person is the cognitive equivalent of a fast food public wifi network?

I've made a few effortposts on this topic, usually around the question of 'shouldn't we just let only the smart people vote?'

The basic idea is that if there is a true and powerful signal and you collect enough data, you can be ok even in a system with humongous amounts of noise.

It's ok if the average voter is so dumb that their voting behavior is near-random. So long as it's not completely random and they're probabilistically influenced by the true signal of 'good candidate' at least a little, then if we average over tens of millions of voters we can recover that signal with high likelihood.

(whereas picking any nonrandom subsample of the population to do the choosing, like 'the smart/informed people', is more likely to produce an artifact, since their homogeneity makes them more likely to be biased in the same direction by the same factors)
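
To make that concrete, here's a minimal simulation of my own (all numbers invented): each voter independently votes for the 'correct' candidate with probability just barely above a coin flip, and we check how often the majority gets it right as the electorate grows.

```python
import random

def majority_correct(n_voters, p_correct, trials=100):
    """Fraction of simulated elections where the majority picks the
    'correct' candidate, given each voter independently votes
    correctly with probability p_correct."""
    wins = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p_correct for _ in range(n_voters))
        if correct_votes > n_voters / 2:
            wins += 1
    return wins / trials

# Voters are barely better than coin flips: a 51% chance of voting 'correctly'.
for n in (101, 1_001, 10_001, 100_001):
    print(f"{n:>7} voters: majority correct in {majority_correct(n, 0.51):.0%} of trials")
```

This is essentially the Condorcet jury theorem in action: any per-voter accuracy above 50% compounds into near-certainty once you aggregate enough independent voters, which is the sense in which a weak signal survives enormous noise.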

Replying to myself to copy/paste one of those long posts, in case anyone is interested. Huge wall of text warning, with a lot of stuff that's probably too basic for the audience here:

This was in response to someone basically asking 'Wouldn't elections work better if only people who are educated on the issues and know enough to make good decisions were allowed to vote? Isn't it crazy that we let stupid and ignorant people make these decisions when they can't possibly know enough to decide well?'

My response:

So this is a really basic research methods question that scientists often have to deal with, mainly concerned with how to find a weak signal in a noisy data set.

Let's say that there's some 'correct' result for every election, the result that will lead to the best outcomes for the greatest number of people based on each of their individual preferences and needs, or whatever. The question is, what is the best way to arrive at that outcome as often as possible, or to come as close as possible to it every time?

There are two main reasons this is a difficult problem. The first is that no one can see the future and it's impossible for anyone to truly know what the long-term consequences of any particular electoral outcome will be. The second is that it's impossible for anyone to truly know and understand the needs and preferences of all 350 million citizens and determine what the best outcome for all of them would be, even if they could predict it.

This creates a ton of uncertainty and disagreement about what the best electoral outcome is (who people should vote for), as we can see clearly at every election cycle.

In terms of statistical analysis, we would call this disagreement and uncertainty 'noise' - lots of disparate, high-variance, semi-random data about how people think everyone should vote. And we would call the 'correct' electoral outcome the 'signal' - the true result that we're trying to discover.

In this framing, an election is just a measurement, designed to try to capture the signal, and filter out the noise. Our elections tend to have a low signal-to-noise ratio, because it's so hard for anyone to know what the actual best outcome would be, and there's so much disagreement about what we should do.
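
As a rough illustration of what averaging buys you in this framing (standard statistics, nothing election-specific; the signal and noise values are invented):

```python
import random
import statistics

TRUE_SIGNAL = 0.2   # the hypothetical 'correct' value we want to recover
NOISE_SD = 5.0      # each individual data point is wildly noisy

def averaged_estimate(n):
    """Average n noisy measurements of the same underlying signal."""
    return statistics.fmean(TRUE_SIGNAL + random.gauss(0, NOISE_SD) for _ in range(n))

# The spread of the averaged estimate shrinks like 1/sqrt(n).
for n in (10, 1_000, 100_000):
    estimates = [averaged_estimate(n) for _ in range(50)]
    print(f"n={n:>6}: typical estimate {statistics.fmean(estimates):+.3f}, "
          f"spread {statistics.stdev(estimates):.3f}")
```

The averaged estimate stays centered on the true signal while its spread shrinks roughly as 1/sqrt(n), which is why piling on data points helps even when each individual one is nearly worthless.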

As it turns out, scientists have been dealing with this problem in all kinds of domains since the invention of statistics, and they have a good handle on what works and what doesn't.

What you're suggesting is, basically, taking a smaller number of data points from a restricted domain (people who pass the test), which you believe to have much less noise (less misinformation) and a much stronger signal (better understanding).

This is a good method in many domains - physicists, for instance, will go to great lengths to reduce noise in their experiments by shielding equipment or working far underground, even if this is expensive and limits the amount of data they can gather. If they can eliminate enough noise, they only need a small amount of data to confirm their hypotheses, because those hypotheses are very precise and the systems they deal with are well-understood.

But the danger with restricting your domain and excluding subjects based on a specific criterion is that it can introduce bias into your measurements, leading you to very accurately measure the wrong signal. Physicists don't have to worry about this much, because physics works the same underground and behind shielding as it does anywhere else. But any science that has to deal with people has to worry about this a lot, because people are very easily biased, and different groups of people can vary from each other in all kinds of ways.

In your example, it may be that people who pass your test know more about the world overall, but have some specific set of strong, incorrect beliefs that is currently in fashion among the educated classes, or was introduced to the curriculum they tend to study by the agencies that make that curriculum, or that concerns areas of study or ways of life (like plumbing or farm work) that the educated tend to have little contact with.

And even if they have no systematically mistaken beliefs, their priorities and needs may still be systematically divergent from the rest of the general population - they may not appreciate the true needs of the poor, they may prioritize art and science over industry and safety, they may fall preferentially along one political or religious alignment, etc. Basically, as long as they have any systematic biases that make them different from the rest of the population, you cannot get the 'correct' signal from any type of measurement of them, because their 'signal' is something different that aligns with their biases. Their 'signal' may still be pretty good, but it can't ever be 'correct'.

How do scientists who deal with these types of problems try to measure the real signal amidst tons of noise, then? The answer is random sampling of lots and lots and lots of data points, averaged out with each other to converge on the correct signal.

See, when you have enough data points, it doesn't really hurt you much to add a 'noisy' data point (i.e. someone who knows nothing and acts randomly), because that noise will tend to be randomly distributed and cancel out with someone else who was randomly noisy in the other direction when you average everything together. So letting people with 'zero knowledge' vote is not a problem. The only type of voter that's a problem is one with 'negative information' - beliefs and preferences that actively drive them away from the correct signal. And even those people will tend to cancel out with people whose negative information points in the other direction... if you sample from every walk of life and every group, instead of limiting yourself to a single specific group with a tendency towards one specific flavor of negative information.
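
Here's a toy version of that tradeoff (purely illustrative, with invented numbers): a small, precise 'expert' panel whose members share a systematic bias, versus an enormous, individually very noisy random sample that doesn't.

```python
import random
import statistics

TRUE_SIGNAL = 0.0   # the 'correct' answer, by construction

def expert_panel(n=1_000, shared_bias=1.0, noise_sd=0.5):
    """A small, low-noise sample whose members share a systematic bias."""
    return statistics.fmean(TRUE_SIGNAL + shared_bias + random.gauss(0, noise_sd)
                            for _ in range(n))

def random_sample(n=1_000_000, noise_sd=10.0):
    """A huge, individually very noisy sample with no shared bias."""
    return statistics.fmean(TRUE_SIGNAL + random.gauss(0, noise_sd)
                            for _ in range(n))

print(f"expert panel:  {expert_panel():+.3f}  (precisely wrong)")
print(f"random sample: {random_sample():+.3f}  (noisily right)")
```

No amount of extra data rescues the panel, because its error is systematic; the random sample's error is merely statistical, and it keeps shrinking as you add people.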

The thing about a true signal is that we expect it to have some impact on most data points, even if those points themselves have huge variance. For example, if you give all the kids in one school platform shoes with 2" heels and measure their heights, there will still be lots of variance in height, and there will be tons of kids in that school who are shorter than tons of kids in another school even with their shoes on - but if you take the average height, it will still come out 2 inches taller, because the shoes raised everyone's random, noisy heights at once.
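
In code, the shoe example might look like this (a hypothetical school with invented heights):

```python
import random
import statistics

def school_heights(n=500, shoe_boost=0.0):
    """Simulated student heights in inches, optionally with 2-inch heels."""
    return [random.gauss(65, 4) + shoe_boost for _ in range(n)]

plain = school_heights()
heeled = school_heights(shoe_boost=2.0)

# Plenty of overlap between individuals...
print(f"tallest plain-shoed student: {max(plain):.1f} in, "
      f"shortest heeled student: {min(heeled):.1f} in")
# ...but the averages still reveal the 2-inch signal.
print(f"difference in means: {statistics.fmean(heeled) - statistics.fmean(plain):.2f} in")
```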

With elections, it's a bit more complicated, but it's the same idea. Maybe one person is an idiot about everything except farm policy, but they can tell a good farm policy from a bad one, and that true information affects their vote. Maybe another person knows nothing about policy but is a really good judge of character, and will tend to vote for more honest and benevolent candidates. Maybe a third person has been through civil forfeiture and understands the reality of that situation much better than the average person, and lets that influence their vote when politicians make a proposal about it. And so on.

Each of those people may have a lot of 'noise' in their heads about every topic other than the one they're good at, but that noise will be mostly random across individuals and will cancel out. As long as they have some knowledge or understanding that gives them good, 'correct' beliefs about the way to vote, and those beliefs influence their actual vote in some way, then they're being influenced by the 'signal' and will be adding true information about it to our data set when we measure them.
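
One way to render the 'tiny spark' picture in code (again a sketch of my own, with invented parameters): each voter is pure noise on every issue except one randomly chosen issue they actually understand, where the genuinely better candidate has a small real edge.

```python
import random

N_ISSUES = 20

def run_election(n_voters=100_000, true_edge=0.1):
    """Each voter understands exactly one randomly chosen issue, where the
    genuinely better candidate has a small real advantage; every other
    issue contributes pure noise to their judgment."""
    votes_for_better = 0
    for _ in range(n_voters):
        known_issue = random.randrange(N_ISSUES)
        perceived = 0.0
        for issue in range(N_ISSUES):
            if issue == known_issue:
                perceived += true_edge            # the one spark of true signal
            else:
                perceived += random.gauss(0, 1)   # noise about everything else
        if perceived > 0:
            votes_for_better += 1
    return votes_for_better / n_voters

print(f"share voting for the genuinely better candidate: {run_election():.1%}")
```

Each simulated voter ends up only barely more likely than chance to pick the better candidate, but as the first simulation showed, that sliver of signal is all a large enough electorate needs.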

This is how psychologists, social scientists, and other researchers who deal with people and other complex, unpredictable phenomena almost always design their studies: random sampling of as much data as possible, with statistical analysis to find the signal among the noise. It's simply the most practical and reliable way to go about things in situations this complex. And in the case of elections, that translates to allowing everyone to vote, and encouraging as many people to vote as possible.

It sounds counter-intuitive when you think about a single idiot voting. But when you think about that idiot as someone who only has one tiny spark of good information, and then think about the electoral process as adding the tiny sparks of tens of millions of people together to illuminate the truth, it makes a lot more sense.