Culture War Roundup for the week of March 9, 2026

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Today I was listening to a Maiden Mother Matriarch podcast (paywalled on Substack, but available with ads on Apple Podcasts), with Louise Perry interviewing John Daniel Davidson. There were a lot of both dubious and interesting things there, but the one that caused an emotional reaction for me was the discussion of "screens," which I've been having with some in-person friends, and seeing around Substack lately as well. I don't like the paradigms of the discussion, but have trouble articulating why. Especially when Davidson kept repeating "it rewires their brains" over and over again. My pop neuroscience model is built on a few fluffy books about neuroplasticity from a decade ago, but I thought basically everything required our brains?

There are indeed a lot of things on the internet, and especially social media, that are bad in the way casinos are bad, but calling this "screens" feels like calling slot machines "levers" or something. It's not like I could have accessed the podcast, other than by learning about it online, anyway. Was it more virtuous to listen to Davidson talk than to read him on Substack? Maybe! I was doing work with my hands while I listened.

Jonathan Haidt thinks that children shouldn't be able to post on social media or have smartphones (or internet-enabled private devices more generally), and I think that may be reasonable, especially with regard to people posting photos of themselves; everyone should think long and hard about doing that, and usually shouldn't. But at the same time, I don't really trust the enforcers, and I suspect the rules wouldn't fall where I would hope.

Louise Perry didn't push back as much as I would have liked against the "demonic, insane, evil" rhetoric about "screens" (by which I think Davidson meant something more like "the unfiltered internet"), but she did mention that she thinks it's probably alright for her children to watch fairy tales sometimes, while it's weird and a bit disturbing if they're watching another kid play on YouTube. And I agree that, yes, that's kind of weird, and I wouldn't let my children watch that. I didn't let my child watch more than one episode of "Is It Cake," either, because that also seemed a bit weird.

Anyway, is there anyone out there who has an actually useful way of discussing "screens," especially in respect to children, but also in general? If I had more attention to devote to the topic, maybe I'd try reading Baudrillard's Simulacra and Simulation, since at least the title seems like it's heading in an interesting direction.

Neuroplasticity, as you probably intuited, is basically the mechanism by which brains work at all. Reading rewires brains. Suffering rewires brains. Learning to juggle demonstrably changes cortical gray matter density in a way you can see on an MRI, and nobody is writing Substack posts about the demonic influence of juggling on children. When someone says "screens rewire brains," the word doing all the actual work is "rewires" in the pejorative sense, meaning "changes in bad ways that are hard to reverse," but that claim is being smuggled in without justification, under cover of a neuroscience fact that's technically true but completely uninformative. Everything that does anything to you rewires your brain. The question is whether the rewiring is bad, and repeating the neuroplasticity point louder doesn't answer that. It's actually worse than uninformative, because it makes the arguer sound scientific while doing no scientific work whatsoever. The neuroplasticity framing is rhetorical judo: it borrows the authority of neuroscience while gesturing vaguely at harm it has not actually demonstrated.

This matters because it makes the claim unfalsifiable in practice. If a child improves at chess from watching chess videos, that's also rewiring their brain, but presumably Davidson isn't worried about that one. The rewiring point can't distinguish between the two cases, so it isn't doing any of the work it's being credited with. What it's actually doing is priming the listener to accept that harm has been established before the argumentative heavy lifting has begun. I'd rather the harm be argued directly, at which point it would be subject to actual scrutiny, than laundered through the vocabulary of neuroscience.

"Screen time," while far from ideal as terminology, is also far from the worst offense around. The deeper problem is that the category is wildly underdetermined. It seems to matter enormously what the screen displays. A child who spends three hours reading Wikipedia articles about the Byzantine succession crisis, watching a documentary about migratory birds, and then video-calling their grandmother is doing something categorically different from one who has spent those three hours cycling through TikTok thirst traps and casino-mechanic reward loops dressed up as games. Lumping these together under "screens" and then asking whether "screen time" is harmful is a bit like asking whether "food time" is healthy. The answer will depend almost entirely on what food we're talking about, and the aggregate will tell you almost nothing useful.

The medium-is-the-message people have a point that the delivery mechanism shapes the experience in ways content alone doesn't capture. But even granting McLuhan more than he's usually owed, there is still an enormous variance in what screens deliver that gets erased the moment we start talking about "screens" as a unified phenomenon. Calling slot machines "levers" would be a more accurate description than calling all interactive digital media "screens," because at least all levers share the mechanical property of force multiplication. What screens share is a glowing rectangle that displays imagery, which is not doing much analytical work.

A lot of the older empirical literature was also methodologically shabby in ways that should give us pause before crediting its conclusions. Much of it was observational, relied heavily on self-report (or parent-report, which introduces its own distortions), lumped television with TikTok with WhatsApp with gaming with educational apps, and then asked whether the aggregate was good or bad. The effect sizes, when statistically significant at all, were in many cases embarrassingly small. Jean Twenge's widely cited work was criticized by Andrew Przybylski and Amy Orben, who used the same datasets and found that the association between screen time and adolescent wellbeing was approximately the same magnitude as the association between wearing glasses and adolescent wellbeing. Spectacle-wearing doesn't cause depression; it's a proxy for other things. The same concern applies to screen time, which correlates with socioeconomic status, parenting style, pre-existing behavioral difficulties, and a hundred other things that are doing the actual causal work.

I'd say that it's not worth losing sleep over, except that the most robust and consistent negative findings deal with sleep, specifically that device use near bedtime disrupts both sleep onset and sleep quality, probably through a combination of blue-light effects on melatonin and the obvious fact that you can't scroll and sleep simultaneously. This is worth taking seriously precisely because it's one of the few findings that replicates, has a plausible mechanism, and shows an effect size large enough to matter. The irony, not lost on me, is that "no phones in the bedroom at bedtime" is not a very interesting or monetizable policy conclusion, so it gets lost in the noise of more dramatic claims about societal collapse. Good luck enforcing that for the kids, with how their parents embrace their phones.

Jonathan Haidt thinks children shouldn't be able to post on social media or have smartphone access, and there's something to this if we're being specific about the "posting photos of yourself" piece. The performative identity-construction that social media incentivizes does seem like a weird thing to encourage in adolescents who are in the middle of figuring out who they are, and there's a reasonable case that the particular feedback loops involved are nastier than equivalent analogue experiences of social humiliation, which at least fade from memory. But "no smartphones" as a category encompasses an enormous amount of genuinely useful functionality, and "no posting photos" is a much more targeted and defensible intervention than "no smartphone," which tends to be what people actually mean.

I'm also skeptical of enforcement mechanisms. Not because I think children's online safety doesn't matter, but because I don't trust that the rules will land where the advocates for them seem to expect. Age verification regimes tend to produce either security theater or comprehensive surveillance infrastructure, and comprehensive surveillance infrastructure does not stay narrowly targeted at protecting children for very long. The same legislative sessions that produce "think of the children" bills about social media often produce other bills I would find considerably more alarming. The willingness to build the infrastructure is the thing that should worry us, independent of the stated justification.

I should be honest about my personal stake in this, because it seems relevant. When I was a kid, my ADHD predominantly manifested as inattention. I was notorious for reading novels under the desk in class, reading while walking, compulsively reading every newspaper and the labels on shampoo bottles and the copyright page of books and anything else that had text on it. My parents were extremely conservative about digital affordances during my childhood and adolescence: no broadband internet connection, no smartphone, until late in my teens.

This did nothing good for me. You do not treat ADHD with sensory deprivation. I was not going to pay more attention in class because I didn't have a phone handy; I was just more likely to zone out and stare at a water stain on the ceiling and construct elaborate fantasies about the history of civilizations I'd invented. I was bored, in a persistent and grinding way that I now recognize as one of the more unpleasant features of the condition, and I'm genuinely grateful that advances in technology have made that particular flavor of boredom substantially more optional. ADHD medication improved my academics and my functioning in the world. Austerity did not. The restriction removed a coping mechanism without addressing the underlying issue.

I'm aware that my case doesn't generalize. Plenty of kids are not managing a neurological attention deficit when they're scrolling, they're just enjoying an entertainment product, and there's a reasonable question about whether that entertainment product is well-calibrated for their long-term flourishing. But I'm suspicious of framings that assume the counterfactual to device use is some kind of improving, wholesome activity, rather than the much more realistic counterfactual of staring at the wall, or in my case, reading the back of a cereal box for the fourteenth time.


I've watched a teenage relative of mine scroll through Instagram Reels, and it was not a pleasant experience. None of it was erudite. Most of it was AI-generated, and obviously so to anyone over twenty-five, though apparently not to her. The content was a kind of undifferentiated slurry of dumb pranks, "interesting" facts that were wrong, and videos that seemed designed less to convey anything than to fill attention with sensation. I wanted to say something. I didn't, because it wasn't my call and the headache of saying something would have outweighed the benefit. Also, she isn't a particularly bright kid, as hard as that is to say about your own kin. But I felt, for a moment, what the "screens are demonic" people feel, and I think I understand why they reach for that language.

(Don't get me started on an elderly great-uncle and his consumption of the most ludicrously fake AI-slop on YouTube. I did my best to inform him, but wise words only get you so far at that age.)

The problem is that "demonic" and "insane" and "evil" are not diagnostic, they're expressive. They communicate that the speaker has had a visceral negative reaction, which I also had. What they don't do is tell you anything useful about what the actual harm is, what causes it, how it might be addressed, or how to distinguish between the things that caused the visceral reaction and the much broader category of digital media that gets swept up in the resulting policy proposals. Louise Perry's instinct to distinguish between fairy tales on a screen and watching another child play on YouTube seems right to me, not because one is "screens" and the other isn't, but because they're different things doing different things to a child's attention and social cognition. That distinction is worth making carefully, and the "screens" framing makes it harder rather than easier.


If I were forced to endorse a population-wide intervention, it would be this: device manufacturers and online services should be required to provide genuinely functional parental controls, to be set up at the convenience of the person making the purchase. Not draconian age-restriction policies that produce surveillance infrastructure and don't actually work. Just real tools that let parents do what parents are supposed to do, which is make situated judgments about their specific kid, in their specific circumstances, with their specific needs, rather than relying on either blanket permissiveness or blanket prohibition. A child's use of electronics is something that should be monitored in conjunction with their behavior and academic performance, the same way you'd monitor anything else in their life that was potentially impacting them.

The people most confident that they know the right policy for all children are usually people who have identified a single dimension of risk, optimized hard against it, and are not tracking the costs of their proposed solution. The costs are real. Restriction has costs. Surveillance has costs. Boredom has costs. Social exclusion from peer networks that now largely operate digitally has costs. A child who can't participate in the group chat is not being protected from social life, they're being excluded from it, and that exclusion has downstream consequences that are unlikely to show up in studies asking whether "screen time" correlates with self-reported wellbeing.

Not to mention that if childhood and adolescence are treated as a sort of preparatory phase for adult life: are the adults doing anything different? We live on our phones; there are few facets of modern living not mediated by transistors, light-emitting diodes, and the internet. And I think that's great: I have a device in my hands that, for about my weekly wage, allows access to nearly the sum total of human knowledge and the ability to interact with people across the globe with milliseconds of latency. I use it to learn more, say more, do more, and yes, entertain myself. If you can't manage to use such capabilities in an ennobling manner, I'm tempted to declare a skill issue. Don't try to dictate terms for the rest of us; mind your own kids.

A child who spends three hours reading Wikipedia articles about the Byzantine succession crisis, watching a documentary about migratory birds, and then video-calling their grandmother is doing something categorically different from one who has spent those three hours cycling through TikTok thirst traps and casino-mechanic reward loops dressed up as games.

Who's to say grandma can't also be a thirst trap?

I'm not competent enough a psychiatrist to answer that question.

Sounds like a perfect time for “[awkward silence]”.

I think you might need to be a priest.