self_made_human
amaratvaṃ prāpnuhi, athavā yatamāno mṛtyum āpnuhi ("Attain immortality, or die trying")
I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken for granted for someone in the profession of saving lives.
At any rate, I intend to live forever or die trying. See you at Heat Death!
Friends:
A friend to everyone is a friend to no one.
User ID: 454
I'm not competent enough a psychiatrist to answer that question.
I've been aware of this phrase for years, mostly from Reddit. Is there a canonical definition, however? I ask with genuine curiosity / bewilderment. Capitalism, to my mind, is an economic system bounded by certain conditions. I didn't know (and am dubious) about there being a temporal aspect to it.
"Werner Sombart, who used the phrase Spätkapitalismus (literally "late capitalism") in his 1902 work Der moderne Kapitalismus. Sombart was developing a stage-theory of capitalism, arguing that the system passed through distinct historical phases: early, high, and late. His framework was descriptive and evolutionary, not necessarily apocalyptic."
https://en.wikipedia.org/wiki/Late_capitalism
In the 21st century era of the global Internet, mobile telephones and artificial intelligence, the idea of "late capitalism" is again used in left-wing political discussions about the decadence, degeneration, absurdities and ironies of contemporary business culture, often with the suggestion that capitalism is now getting near the end of its existence (or is already being transformed into a post-capitalism of some sort).
The gist of it is that it's a shibboleth and a cue to boo the outgroup on command.
If there's anything someone dislikes about modern consumerism or globalization, it's a convenient brush to paint with. Gentrification? Late stage capitalism. Rent too damn high? Late stage capitalism. Netflix enshittified its offerings? Late stage capitalism.
The unresolved questions were: "late" in what sense? In comparison to what? How do we know? What could possibly replace capitalism? The liberal economist Paul Krugman stated in 2018 that:
"I've had several interviews lately in which I was asked whether capitalism had reached a dead end, and needed to be replaced with something else. I'm never sure what the interviewers have in mind; neither, I suspect, do they."
Neuroplasticity, as you probably intuited, is basically the mechanism by which brains work at all. Reading rewires brains. Suffering rewires brains. Learning to juggle demonstrably changes cortical gray matter density in a way you can see on an MRI, and nobody is writing Substack posts about the demonic influence of juggling on children. When someone says "screens rewire brains," the word doing all the actual work is "rewires" in the pejorative sense, meaning "changes in bad ways that are hard to reverse," but that claim is being smuggled in without justification, under cover of a neuroscience fact that's technically true but completely uninformative. Everything that does anything to you rewires your brain. The question is whether the rewiring is bad, and repeating the neuroplasticity point louder doesn't answer that. It's actually worse than uninformative, because it makes the arguer sound scientific while doing no scientific work whatsoever. The neuroplasticity framing is rhetorical judo: it borrows the authority of neuroscience while gesturing vaguely at harm it has not actually demonstrated.
This matters because it makes the claim unfalsifiable in practice. If a child improves at chess from watching chess videos, that's also rewiring their brain, but presumably Davidson isn't worried about that one. The rewiring point can't distinguish between the two cases, so it isn't doing any of the work it's being credited with. What it's actually doing is priming the listener to accept that harm has been established before the argumentative heavy lifting has begun. I'd rather the harm be argued directly, at which point it would be subject to actual scrutiny, than laundered through the vocabulary of neuroscience.
"Screen time," while far from ideal as terminology, is also far from the worst offense around. The deeper problem is that the category is wildly underdetermined. It seems to matter enormously what the screen displays. A child who spends three hours reading Wikipedia articles about the Byzantine succession crisis, watching a documentary about migratory birds, and then video-calling their grandmother is doing something categorically different from one who has spent those three hours cycling through TikTok thirst traps and casino-mechanic reward loops dressed up as games. Lumping these together under "screens" and then asking whether "screen time" is harmful is a bit like asking whether "food time" is healthy. The answer will depend almost entirely on what food we're talking about, and the aggregate will tell you almost nothing useful.
The medium-is-the-message people have a point that the delivery mechanism shapes the experience in ways content alone doesn't capture. But even granting McLuhan more than he's usually owed, there is still an enormous variance in what screens deliver that gets erased the moment we start talking about "screens" as a unified phenomenon. Calling slot machines "levers" would be a more accurate description than calling all interactive digital media "screens," because at least all levers share the mechanical property of force multiplication. What screens share is a glowing rectangle that displays imagery, which is not doing much analytical work.
A lot of the older empirical literature was also methodologically shabby in ways that should give us pause before crediting its conclusions. Much of it was observational, relied heavily on self-report (or parent-report, which introduces its own distortions), lumped television with TikTok with WhatsApp with gaming with educational apps, and then asked whether the aggregate was good or bad. The effect sizes, when statistically significant at all, were in many cases embarrassingly small. Jean Twenge's widely-cited work was criticized by Andrew Przybylski and Amy Orben, who used the same datasets and found that the association between screen time and adolescent wellbeing was approximately the same magnitude as the association between wearing glasses and adolescent wellbeing. Spectacle-wearing doesn't cause depression; it's a proxy for other things. The same concern applies to screen time, which correlates with socioeconomic status, parenting style, pre-existing behavioral difficulties, and a hundred other things that are doing the actual causal work.
I'd say that it's not worth losing sleep over, except that the most robust and consistent negative findings deal with sleep, specifically that device use near bedtime disrupts both sleep onset and sleep quality, probably through a combination of blue-light effects on melatonin and the obvious fact that you can't scroll and sleep simultaneously. This is worth taking seriously precisely because it's one of the few findings that replicates, has a plausible mechanism, and shows an effect size large enough to matter. The irony, not lost on me, is that "no phones in the bedroom at bedtime" is not a very interesting or monetizable policy conclusion, so it gets lost in the noise of more dramatic claims about societal collapse. Good luck enforcing that for the kids, with how their parents embrace their phones.
Jonathan Haidt thinks children shouldn't be able to post on social media or have smartphone access, and there's something to this if we're being specific about the "posting photos of yourself" piece. The performative identity-construction that social media incentivizes does seem like a weird thing to encourage in adolescents who are in the middle of figuring out who they are, and there's a reasonable case that the particular feedback loops involved are nastier than equivalent analogue experiences of social humiliation, which at least fade from memory. But "no smartphones" as a category encompasses an enormous amount of genuinely useful functionality, and "no posting photos" is a much more targeted and defensible intervention than "no smartphone," which tends to be what people actually mean.
I'm also skeptical of enforcement mechanisms. Not because I think children's online safety doesn't matter, but because I don't trust that the rules will land where the advocates for them seem to expect. Age verification regimes tend to produce either security theater or comprehensive surveillance infrastructure, and comprehensive surveillance infrastructure does not stay narrowly targeted at protecting children for very long. The same legislative sessions that produce "think of the children" bills about social media often produce other bills I would find considerably more alarming. The willingness to build the infrastructure is the thing that should worry us, independent of the stated justification.
I should be honest about my personal stake in this, because it seems relevant. When I was a kid, my ADHD predominantly manifested as inattention. I was notorious for reading novels under the desk in class, reading while walking, compulsively reading every newspaper and the labels on shampoo bottles and the copyright page of books and anything else that had text on it. My parents were extremely conservative about digital affordances during my childhood and adolescence: no broadband internet connection, no smartphone, until late in my teens.
This did nothing good for me. You do not treat ADHD with sensory deprivation. I was not going to pay more attention in class because I didn't have a phone handy; I was just more likely to zone out and stare at a water stain on the ceiling and construct elaborate fantasies about the history of civilizations I'd invented. I was bored, in a persistent and grinding way that I now recognize as one of the more unpleasant features of the condition, and I'm genuinely grateful that advances in technology have made that particular flavor of boredom substantially more optional. ADHD medication improved my academics and my functioning in the world. Austerity did not. The restriction removed a coping mechanism without addressing the underlying issue.
I'm aware that my case doesn't generalize. Plenty of kids are not managing a neurological attention deficit when they're scrolling, they're just enjoying an entertainment product, and there's a reasonable question about whether that entertainment product is well-calibrated for their long-term flourishing. But I'm suspicious of framings that assume the counterfactual to device use is some kind of improving, wholesome activity, rather than the much more realistic counterfactual of staring at the wall, or in my case, reading the back of a cereal box for the fourteenth time.
I've watched a teenage relative of mine scroll through Instagram Reels, and it was not a pleasant experience. None of it was erudite. Most of it was AI-generated, and obviously so to anyone over twenty-five, though apparently not to her. The content was a kind of undifferentiated slurry of dumb pranks, "interesting" facts that were wrong, and videos that seemed designed less to convey anything than to fill attention with sensation. I wanted to say something. I didn't, because it wasn't my call and the headache of saying something would have outweighed the benefit. Also, she isn't a particularly bright kid, as hard as that is to say about your own kin. But I felt, for a moment, what the "screens are demonic" people feel, and I think I understand why they reach for that language.
(Don't get me started on an elderly great-uncle and his consumption of the most ludicrously fake AI-slop on YouTube. I did my best to inform him, but wise words only get you so far at that age.)
The problem is that "demonic" and "insane" and "evil" are not diagnostic, they're expressive. They communicate that the speaker has had a visceral negative reaction, which I also had. What they don't do is tell you anything useful about what the actual harm is, what causes it, how it might be addressed, or how to distinguish between the things that caused the visceral reaction and the much broader category of digital media that gets swept up in the resulting policy proposals. Louise Perry's instinct to distinguish between fairy tales on a screen and watching another child play on YouTube seems right to me, not because one is "screens" and the other isn't, but because they're different things doing different things to a child's attention and social cognition. That distinction is worth making carefully, and the "screens" framing makes it harder rather than easier.
If I were forced to endorse a population-wide intervention, it would be this: device manufacturers and online services should be required to provide genuinely functional parental controls, to be set up at the convenience of the person making the purchase. Not draconian age-restriction policies that produce surveillance infrastructure and don't actually work. Just real tools that let parents do what parents are supposed to do, which is make situated judgments about their specific kid, in their specific circumstances, with their specific needs, rather than relying on either blanket permissiveness or blanket prohibition. A child's use of electronics is something that should be monitored in conjunction with their behavior and academic performance, the same way you'd monitor anything else in their life that was potentially impacting them.
The people most confident that they know the right policy for all children are usually people who have identified a single dimension of risk, optimized hard against it, and are not tracking the costs of their proposed solution. The costs are real. Restriction has costs. Surveillance has costs. Boredom has costs. Social exclusion from peer networks that now largely operate digitally has costs. A child who can't participate in the group chat is not being protected from social life, they're being excluded from it, and that exclusion has downstream consequences that are unlikely to show up in studies asking whether "screen time" correlates with self-reported wellbeing.
Not to mention that if childhood and adolescence are treated as a sort of preparatory phase for adult life: are the adults doing anything different? We live on our phones; there are few facets of modern living not mediated by transistors, light-emitting diodes and the internet. And I think that's great: I have a device in my hands that, for about my weekly wage, allows access to nearly the sum total of human knowledge and the ability to interact with people across the globe with milliseconds of latency. I use it to learn more, say more, do more, and yes, entertain myself. If you can't manage to use such capabilities in an ennobling manner, I'm tempted to declare a skill issue. Don't try to dictate terms for the rest of us; mind your own kids.

Hah. It's only fair that you make it your life's goal to educate me on Heidegger (without asking for consent, though I probably would have given it anyway), only to notice something attributed to Heidegger come up in conversation and then, with dawning dismay, realize that it was a misattribution. I can imagine the disappointment! I revel in the schadenfreude!