
Culture War Roundup for the week of August 28, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


https://nationalpost.com/opinion/colby-cosh-ubc-covers-for-bad-science-in-homeless-cash-transfer-study

A major university (in Canada) published another one of those studies where they give homeless people money and see if they spend it on crack or job applications. Mostly this was met with admiration and joy by the journalist class. The more right-leaning publication I linked above is more skeptical, pointing out some of the potential problems with the study:

Unfortunately, putting a thumb on the scale was almost the first thing the researchers did. 732 possible participants in the study were screened. The UBC folk didn’t want their sample to include the long-term homeless, so to be eligible, participants had to have been homeless for less than two years. Also, they rejected severe drug and alcohol abusers and the mentally ill.

...

Note that the researchers didn’t even consider including the tent-dwelling, park-occupying homeless: merely by working with shelters, and with the people who prefer to sleep indoors despite some filth and danger, they were giving themselves an enormous implicit advantage. The study, having kinda announced at the outset that it’s garbage, goes on to describe how 229 people were chosen from the screening sample to provide the experimental group for the study. Alas, of the 229 people who took $7,500 payments, half (114) of them disappeared from view and didn’t complete the series of questionnaires and tests they had supposedly undertaken.

This isn't that interesting in itself; it's just a bad study done in Vancouver. What I found interesting was that the writer starts with a brief summary of the replication crisis, for an audience that is presumably not intimately familiar with it:

You ever hear of a guy named Daryl Bem? Bem is a social psychologist from Cornell University, now retired at age 85. In the ‘90s, after a long conventional career as an experimenter, he took up the cause of establishing evidence for human extrasensory precognition, and did some studies that seemed to confirm it exists. This set off a war in psychology as critics descended on Bem to nitpick the flaws in his studies and citations of psychic phenomena.

In the end, the consensus about Bem’s research was mostly not that he used mainstream tools of statistical analytics improperly. He had mostly coloured within long-established scientific lines and followed his training in hypothesis tests — everyone’s training.

Bem is now widely regarded as a weird sort of antihero who inadvertently demonstrated flaws in classic hypothesis testing, and whose late work was ground zero for the current “replication crisis” in psychology. It is not that humans are psychic: it is that you can prove the absurd proposition “humans are psychic” by very lightly abusing the received 20th-century scientific method.
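
(Aside: the "very lightly abusing" bit isn't hand-waving; the core mechanism is ordinary multiple comparisons. Here's a minimal simulation of my own, not from the article or from Bem's work: there is no real effect anywhere, but the analyst gets to test ten arbitrary outcomes per study and report whichever looks best.)

```python
# Sketch (my illustration, not from the article): simulate null experiments
# where the analyst tests several arbitrary outcomes/subgroups and reports
# whichever p-value looks best.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_study(n=100, n_looks=10):
    """One experiment with NO true effect, analyzed n_looks different ways."""
    p_values = []
    for _ in range(n_looks):
        treatment = rng.normal(size=n)   # drawn from the same distribution...
        control = rng.normal(size=n)     # ...as the control group
        p_values.append(stats.ttest_ind(treatment, control).pvalue)
    return min(p_values)                 # report the most "significant" look

best_ps = np.array([one_study() for _ in range(2000)])
print(f"Studies reporting p < 0.05 despite no effect: {np.mean(best_ps < 0.05):.0%}")
# With ten looks per study, roughly 1 - 0.95**10, about 40%, of null studies
# can report a "significant" finding while colouring entirely inside the lines.
```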

There has been, and is, a lot of discussion here about relaying rationalist concepts or ideas to outsiders or average random people in Mottizens' day-to-day lives. With the rise of culture war divisions, and especially the political rhetoric surrounding the coronavirus lockdowns and other policies, I'm wondering what approach, if any, you use when talking to acquaintances or friends who skew liberal, and who are broadly happy to have the inertia of universities or the intelligentsia on their side, to explain that you often reject social science research or findings unless you have personally vetted them, without sounding to them like a low-IQ backwater hick redneck science-denying flat-earther. I suspect that this is impossible.

Yeah, the critique felt a little weak. Specifically because the primary critique is noted in the study itself:

These findings are based on exploratory analyses in a modestly sized sample that represents a high-functioning subset (e.g., 31% screen-in rate) of the total homeless population in Vancouver. Thus, our results may not extend to people who are chronically homeless or experience higher severity of substance use, alcohol use, or psychiatric symptoms.

A more convincing approach would have been to focus instead on this statement, found in both the news story and the press release but missing from the study itself:

The study did not include participants with severe levels of substance use, alcohol use or mental health symptoms, but Dr. Zhao pointed out that most homeless people do not fit these common stereotypes. Rather, they are largely invisible. They sleep in cars or on friends’ couches, and do not abuse substances or alcohol.

This statement does a lot of work in justifying the policy implications of cash infusions, because if 31% of the homeless passed their entrance requirements, and the screening sample is representative, then 31% or more of the homeless at large could conceivably be impacted.
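
(Spelling out that arithmetic as a toy sketch: the 732 and 229 counts are from the article; the city-wide homeless figure below is a made-up placeholder.)

```python
# Back-of-the-envelope version of the point above. The screening counts come
# from the article; the city-wide homeless count is hypothetical, purely to
# show how the 31% figure gets turned into a policy-reach claim.
screened = 732                     # people screened for the study
enrolled = 229                     # people who passed the eligibility filters
screen_in_rate = enrolled / screened
print(f"Screen-in rate: {screen_in_rate:.0%}")          # ~31%

homeless_population = 2_000        # hypothetical placeholder, NOT a real count
reachable = homeless_population * screen_in_rate
print(f"Implied reachable population, if representative: ~{reachable:.0f}")
# Everything hinges on "if representative" -- the representativeness that
# Dr. Zhao's quote implies, but which the study itself doesn't establish.
```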

I have no idea whether his assertion is true, but that seems like the most potentially dubious framing.

Specifically because the primary critique is noted in the study itself...

I think that's a bit of a fig leaf: the authors knew, or should have known, that the paper would be portrayed without full disclosure of its limitations, including in its own abstract and in the UBC piece itself (which only mentions that the study excluded "severe levels of substance use, alcohol use or mental health symptoms", but not that it excluded the long-term homeless). Neither mentions the further filtering to only the sheltered homeless, nor the loss to follow-up.

Summaries by nature can't include all details, but people writing studies know what will get left out, and should recognize when that's going to be highly dishonest.

((There are other problems: the use of two preregistered analyses that are the weakest for predictive power and least repeated in the news coverage ("subjective well-being and cognitive outcomes"), followed by a mass of 'exploratory' analyses that are repeated heavily but also scream garden of forking paths, especially combined with the condition grouping and when the study power looks like this. In addition to the attrition before study criteria were applied, the cash group had a vastly lower response rate (74% vs 95%) on the 1-month survey than the control group did, which probably didn't have a huge impact on the statistical analysis but doesn't seem to get mentioned in the main paper proper at all, just in the appendix. I also don't have a good mental model for the impact of "In the main analyses, participants in the cash group were included in the final sample if they received the cash, while participants in the control group were only included if they completed at least one follow-up survey.", but my gut check is that it's not a good sign, combined with that extra 21% dropout rate for the 1-mo survey.))
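
(To illustrate why that asymmetry smells off, here's a rough simulation of my own, not from the paper. It assumes no true treatment effect, and assumes, purely for illustration, that people doing worse are less likely to answer the follow-up survey; the two arms are then selected into the analysis under different rules, loosely mirroring the quoted inclusion criteria.)

```python
# Rough sketch (my own, not from the paper) of how outcome-correlated
# attrition plus an asymmetric inclusion rule can bias a comparison even
# when there is no true effect. The response probabilities below are
# assumptions for illustration, not figures from the study.
import numpy as np

rng = np.random.default_rng(1)

def simulate_arm(n=115, p_respond_struggling=0.5, p_respond_ok=0.95):
    """One arm: same outcome distribution, but worse-off people respond less."""
    outcome = rng.normal(size=n)                  # no treatment effect anywhere
    struggling = outcome < 0                      # crude proxy for doing poorly
    p_respond = np.where(struggling, p_respond_struggling, p_respond_ok)
    responded = rng.random(n) < p_respond
    return outcome, responded

cash_outcome, cash_resp = simulate_arm()
ctrl_outcome, ctrl_resp = simulate_arm()

# Loose mirror of the asymmetric rule: cash arm analyzed regardless of
# response (pretending their outcomes were somehow observed), control arm
# analyzed only among responders.
cash_mean = cash_outcome.mean()
ctrl_mean = ctrl_outcome[ctrl_resp].mean()

print(f"Cash arm mean (all included):       {cash_mean:+.2f}")
print(f"Control arm mean (responders only): {ctrl_mean:+.2f}")
# Conditioning one arm on responding, when response correlates with doing
# well, shifts that arm's mean even under the null. The size and direction
# of the bias depend entirely on assumptions like the ones above, which is
# why the asymmetry deserves more than a passing mention.
```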

Look at the other two portions of the study: the authors did a couple of survey-style efforts specifically to form approaches to "frame the benefits of the cash transfer to make it more palatable to the public, with the goal of improving public support for a cash transfer policy". These, in turn, again only mention filtering for "severe level of substance use, alcohol use, or mental health challenges", without mentioning the exclusion of the long-term homeless.

This is pretty standard! For a different sort of culture war issue, I'd point to this recent discussion about eating beef. There are, if you dig into it far enough, quite a lot of disclaimers about how this is really talking about 24-hour recall rather than any more holistic analysis of consumption. And, inconveniently, the study didn't actually ask about meat at all, so the analysis was instead filtered through one database to predict the likely meat portion of self-reported food intake, which still didn't say anything specific about beef, so the authors further just cut everything that wasn't explicitly spelled out as one type of meat or another in half. It's all there, and unlike most bad actors in this space it's not even paywalled!

But ultimately, this study methodology still requires the authors to look at (trash-quality) data claiming that X people consumed Y ounces of meat, to assume (for some reason?) that it was 50% beef, and to treat that as equivalent to X people consuming Y/2 ounces of beef individually. And while st_rev was responding to the NYPost, which one could quite plausibly expect to be unusually useless even by popsci standards, it's not like the popsci groups are doing any better.
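
(A toy sensitivity check, with made-up numbers, of what that "just assume half of it is beef" step does to the final figure.)

```python
# Toy sensitivity check (all numbers hypothetical) of the halving step
# described above: unlabeled meat intake gets split into beef / not-beef
# by an assumed share, and the final beef estimate moves one-for-one with
# that assumption.
reported_meat_oz = 4.0                     # hypothetical self-reported intake
for assumed_beef_share in (0.25, 0.50, 0.75):
    beef_oz = reported_meat_oz * assumed_beef_share
    print(f"Assumed beef share {assumed_beef_share:.0%} -> {beef_oz:.1f} oz 'beef'")
# Any downstream claim about beef consumption inherits this guess, stacked
# on top of 24-hour recall and the database mapping already mentioned.
```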

These social scientists aren't morons, despite their best efforts. The people actively studying how best to frame the benefits of an intervention have to have at least considered how they're going to describe the intervention. This doesn't even mean that the general thrust of these studies is wrong; they're all too underpowered to tell us even that they're lying, once you move the fig leaf. But that's pretty damning for the broader field of science.