
self_made_human

amaratvaṃ prāpnuhi, athavā yatamāno mṛtyum āpnuhi

14 followers   follows 0 users  
joined 2022 September 05 05:31:00 UTC

I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken as granted for someone in the profession of saving lives.

At any rate, I intend to live forever or die trying. See you at Heat Death!

Friends:

A friend to everyone is a friend to no one.



User ID: 454


I'm not claiming that there's zero value from making laws that are difficult to enforce.

Littering leaves litter. Cheating prior to LLMs was easier to catch: there was far more clear-cut evidence of wrongdoing, or at least some kind of accessible physical evidence that could be used to adjust priors.

This is much harder when the standard is any use of an LLM at all. How do you know? How can you even find out, short of someone being incredibly sloppy or confessing?

It's closer, quantitatively and qualitatively, to writing legislation against thought-crime without some kind of futuristic machine that can actually parse thoughts. You might have a law on the books saying it's illegal to jerk off while thinking of minors, but even if you catch someone with their pants down, they can just claim they envisioned Pamela Anderson. How can you tell?

Plenty of rules for the Motte hinge on subjective assessments by us mods. But it would be absurd to add one that says that you can't swear aloud after reading a comment from someone you don't like.

The worst part is that false accusations will run rampant. That increases moderation load, and that effort would be better spent elsewhere.

Laws that cannot be enforced are laws not worth drafting. If they had just said "entirely or mostly LLM-written submissions are banned", that would have had exactly the same impact and outcome.

I don't know the reputation of the mods at HN, though I've never heard of egregiously bad behavior or serious complaints, which is at least a positive signal. Maybe they will try and be reasonable; I just don't think that even a reasonable effort will succeed at catching more than a small fraction of the fish in the sea. It'll definitely result in a massive surge of flagging and spurious reporting, which has its own downsides.

Thank you! That's what I recall.

PS: I'm able to confirm that that child was slated to have an EEG.

Just a few days ago, I met a patient who was convinced that he did not, in fact, "exist". He believed himself to be a rotting corpse, and initially declined his antipsychotics on the grounds that a dead person had no need for medication (a valid argument, as opposed to a sound one).

After some debate, we decided to tell him that the drugs would prevent his "corpse" from decomposing and causing a stink that would inconvenience the rest of the ward. Pro-sociality intact, he found this a compelling argument, and swallowed them without any further fuss.

So no, not even "Cogito ergo sum" is foolproof. The universe, and the DSM, must account for even better fools.

There is no solution. There is no proof-of-work or proof-of-humanity that is not severely error-prone or extremely laborious, or that doesn't require some kind of totalitarian police state dedicated to monitoring every word written by a human, or every token outputted by every known LLM.

It can't be done, or at the very least it won't be done.

On Hacker News, it’s now so bad there's a new guideline, “don’t post generated/AI-edited comments”. Unfortunately, due to the extreme intellect of the average Hacker News commenter, it can be hard to distinguish their profound technological insights from even a Markov chain trained on buzzwords. Indeed, looking at top threads I still notice lots of slop-like posts from brand new or previously inactive accounts, like this one. I've been sarcastic, but I really like Hacker News, and hope it finds a way to stop the slop.

HN is the best parody of HN. There are plenty of (almost certainly human) users who could be trivially reconstructed by telling an LLM to write in the style of the biggest grognard pedant with arboreal-reinforcement of the anus it can envision.

Their attempt to ban "AI-edited" submissions is laughable, an attempt to close the barn door after the horse was taken out back, shot, and then rendered into glue. There is no way to tell: distinguishing entirely AI-written text is hard enough, let alone differentiating between an essay that was entirely human-written and one that took a human draft and then passed it through an LLM.

I intend to munch popcorn and observe the fallout. In all likelihood, a few egregious examples will be banned, alongside a witch-hunt that does more harm than good.

On the Motte, at least for now, I haven't seen any obvious bot posts. There were a couple AI-assisted posts (by "known" humans) over the past couple months that got called out.

The majority of bot posts (that anyone can tell are bot posts) are spam that is caught by the moderators and never sees the light of day. I can't recall a single example of us allowing someone in who we thought was human, and then finding a smoking gun that would make us conclude that it was a bot all along.

I am on record stating that I do not see an issue with LLM usage, as long as a human is willing to vouch for the results and has done their due diligence in terms of checking for errors or hallucinations. I do not make an effort to hide the fact that I regularly make use of LLMs myself when writing, though I restrict myself to using them to polish initial drafts, help with ideation, or for research purposes. This stance is, unfortunately, quite controversial. Nonetheless, my conscience remains clean, and I would have no objections to anyone else who acted the same way.

None of the tools that purport to identify AI-written text are very good. Pangram is the best of the pack (not that that means very much). I've tested it, and while the false positive rate on 100% human writing (my own samples) is minimal, the false negative rate is significant. It will take essays that have non-negligible AI content and declare them 100% human, or substantially underestimate the AI contribution.

And that is with no particular effort to disguise or launder AI output as my own. If I actually cared, it would be easy as pie to take a 100% AI written work, then make small changes that would swing it to 100% human by Pangram's estimation (or prompt an LLM to do even that for me). The tools help with maximally lazy bad actors, but that is their limit. Eventually, they won't even catch said lazy bad actors.
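(For the pedants, a quick sketch of what I mean by those rates. The `classify` function here is a hypothetical stand-in for whatever detector is under test, returning its estimated AI fraction of a text; I'm not claiming this is Pangram's actual interface.)

```python
# Sketch of the tally. `classify` is a hypothetical detector returning the
# estimated fraction of a text that is AI-written, between 0.0 and 1.0.

def false_positive_rate(human_texts, classify, threshold=0.5):
    """Share of purely human texts wrongly flagged as AI."""
    return sum(classify(t) >= threshold for t in human_texts) / len(human_texts)

def false_negative_rate(ai_texts, classify, threshold=0.5):
    """Share of AI-containing texts that sail through as human."""
    return sum(classify(t) < threshold for t in ai_texts) / len(ai_texts)
```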

Asking the LLMs? No good. Even worse.

I took an essay I wrote myself (the only AI involvement was proof-reading and feedback, most of which I ignored). Then I asked Claude Sonnet to summarize the content in 100 words, then to itself write a prompt that would be used by another LLM to attempt to reconstruct the original.

Summary (90 words): A British-Indian psychiatry trainee, stranded in India due to flight disruptions, informally shadows their old hospital's outpatient psychiatry department out of boredom. They encounter chaotic conditions: nonexistent documentation, language barriers, and patients too poor to access consistent care. Notable cases include a dangerously over-medicated woman, a Bangladeshi medical tourist on a mystery drug nobody could identify, and a six-year-old with possible ADHD, seizures, or a genetic syndrome - nobody could tell. The author leaves impressed by the pace, nostalgic for the chaos, and still unable to diagnose the child.


Prompt:

Write a personal essay in a dry, sardonic first-person voice with a light literary register. A British-Indian psychiatry trainee, stranded in India due to flight disruptions, informally shadows their old hospital's outpatient psychiatry department out of boredom. They encounter chaotic conditions - nonexistent documentation and patients too poor to access consistent care. Notable cases include a dangerously over-medicated woman, a Bangladeshi medical tourist on an unidentified mystery drug, and a six-year-old with possible ADHD, seizures, or a genetic syndrome - no conclusion is reached. The author leaves nostalgic for the chaos and unable to diagnose the child.

I then asked fresh instances of Claude itself, as well as Gemini Pro, to write a new essay using the above as verbatim instruction.

I then took all 3 essays, put them in a single prompt, and then asked Claude, Gemini and ChatGPT Thinking to identify which ones were human, AI, or in-between.
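If anyone wants to replicate this at home, the whole rigmarole is a handful of API calls. A rough sketch in Python, assuming the Anthropic SDK (the model ID is a placeholder, and I actually did my runs by hand in the chat interfaces, using multiple generator models rather than one):

```python
# Rough sketch of the pipeline described above, assuming the Anthropic
# Python SDK. The model ID is a placeholder; swap in whatever you like.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(prompt: str) -> str:
    """One single-turn completion; returns the text of the reply."""
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model ID
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.content[0].text

original = open("essay.txt").read()  # the human-written essay

# Step 1: summarize the original in ~100 words.
summary = ask(f"Summarize this essay in 100 words:\n\n{original}")

# Step 2: have the model turn its own summary into a reconstruction prompt.
recon_prompt = ask(
    "Write a prompt another LLM could follow to write an essay matching "
    f"this summary:\n\n{summary}"
)

# Step 3: a fresh instance writes a new essay from that prompt alone.
ai_essay = ask(recon_prompt)

# Step 4: present the essays together, unlabeled, and ask for attribution.
verdict = ask(
    "Below are two essays. For each, judge whether it was written by a "
    "human, an AI, or some mix, and explain your reasoning.\n\n"
    f"Essay 1:\n{original}\n\nEssay 2:\n{ai_essay}"
)
print(verdict)
```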

You may see the results for yourself. Gemini's version of the essay was bad, and thus flagged by pretty much every model as either AI, or the "original" that was then expanded. The other two, including my own work, were usually deemed 100% human. Well, one is ~100% human, the other very much isn't.

Gemini in Fast mode:

https://g.co/gemini/share/0d4e6279bf8f

Gemini Pro:

https://g.co/gemini/share/119274d62e32

ChatGPT Thinking in Extended Reasoning mode:

https://chatgpt.com/s/t_69b3fad20c9c8191a27e3542685f20ba

Claude Sonnet with reasoning enabled:

I can't link directly, because the share option seems to dox me with no way of hiding my actual name.

Here's a dump instead-

https://rentry.co/oo4qkduk

Claude was the only one to correctly flag essay 3 as human, and that is likely only due to chance.

ChatGPT was the only model with memory enabled, and it failed miserably.

What else is there to say? Good luck and have fun while there's some hope of telling the bots apart from humans, if not humans using the bots.

patients can read it that way, and it does generate consternation and distrust at times. Not necessarily a reason to not do it.

I think, on an empirical basis, that this effect is insignificant. Med influencers make significant amounts of money and acquire fame by attracting patients using case reviews, and I don't think Scott has ever suffered for it.

With respect to the bus problem, "don't report, so that the guy feels comfortable opening up and can get treatment and harm mitigation" is often selected as the answer.

Would be ranked very low here in the UK. The best answer would be to try and warn him to cut down on drinking (if he just happens to be an alcoholic but doesn't disclose driving while drunk) first, and then if he persists or outright admits to drunk driving, the doctor is to inform him that he's duty bound to report to the DVLA.

I'll take a look, thanks for the rec!

India is a big country, with many Indians (citation available on request). I genuinely don't think that you can uniquely identify anyone I've ever written about, barring myself. A schizophrenic man from Bangladesh? A young kid with behavioral issues? Victims of polypharmacy? Good luck narrowing that down to less than a thousand people.

A classic question like "do I report the alcoholic school bus driver" is fraught as hell and younger generations have basically been taught not to engage with the question and to report to risk management.

Interestingly enough, this scenario is pretty explicitly addressed when it comes to the ethics curriculum and guidance for British doctors. I would be expected to warn the patient to desist from dangerous drinking, and if they disclosed drunkenness on duty or continued to drive, I would be legally obligated to report them to the DVLA so that their license gets yanked. This applies doubly so for bus and truck drivers (I refuse to call them lorries).

https://www.bevanbrittan.com/insights/articles/2017/patients-fitness-to-drive-and-reporting-concerns-to-the-dvla-dva/

There is a lot of bloviating about ethics here. UK medicine is obsessed with the topic. It was half the grade on the exam that gatekeeps most postgraduate training.

There exists a massive top-down push to reinforce the image of doctors as a noble, duty-bound cadre of esteemed professionals. That self-conception is gradually fraying in the younger generation, because we sure as hell aren't paid or treated like we're special.

I had a similar question on my SSC post, so I'll reproduce my response:

Interesting, is that a point for the Sapir–Whorf hypothesis?

Not necessarily! Psychosomatic complaints are all too common even in developed, English-speaking countries. Many patients in India will still express their feelings in terms that map directly onto standard (English) psychiatric nomenclature. Plenty of people will use the closest equivalent for "low mood, apathy, agitation etc etc" even if the language lacks a specific term for depression. After all, I'm sure people got depressed well before it was recognized as a clinical syndrome, or had ADHD and autism before the modern taxa evolved.

Of course, cultural idiosyncrasies do matter, and some diseases genuinely are culture-bound or spread by social contagion (see Scott's posts about the latter, especially anorexia).

It's also not necessarily the case that our diagnosis of a psychosomatic cause is perfectly accurate. Optimistically, one can say that my peers were exercising clinical judgment. Pessimistically, they were quick to pattern match and put people in buckets. There's no law of medicine that says you can't have depression and actual gastric reflux or peripheral neuropathy. The lonely old lady with back pain might well have arthritis, and we do try and check. We just have very little time to do that checking.

I'd say that in the absence of a widespread understanding of "depression" as a clinical condition, most of these patients are coming to see a doctor because of their perceived bodily ailments. They do not envision themselves as depressed, but will still acknowledge sadness, anhedonia etc. But what they claim to seek is relief from physical suffering, and said relief often but not always comes from psychiatric intervention. I am genuinely unsure if they understand the link, but people do seem to know that the psychiatry department deals with the mind and that they didn't just pick the wrong door.

At some point, someone made the judgment call that the underlying issue was psychiatric, so they ended up in the outpatient clinic. On the other hand, when I was an intern in the medicine department, we had plenty of patients my seniors deemed to be psychosomatic who were treated the same way, but ended up there by some sorting mechanism I'm not familiar with.

I’m wondering if this could be an explanation for part of the rise in depression/anxiety/mental health conditions in modern societies, or even the mental health gap between liberals and conservatives. Previous generations/less developed countries don’t have better mental health (in fact, from stories I heard from older family members, it might have been far worse in the past), but they’re just unaware of their own mental landscape, and lack even the words to describe the concepts we take for granted.

I read a very convincing article arguing that the gap is an artifact (I think it claimed that when specific terminology was adjusted, the purported mental health gap vanished), but I'm afraid I don't have a link handy. If I remember who said it, I'll share. It might even have been Scott.

Absolutely and unironically based behavior. Good luck! Probably don't tell her about the spreadsheet or the applied mathematics, at least before she's hopelessly smitten.

For what it's worth, I'm not being sarcastic when I say I have a low opinion of the Hippocratic oath.

Seriously, "do no harm"? Am I allowed to use a needle to prick skin? Oh, it shouldn't be taken at face value, and there's some kind of implicit utilitarian calculus involved? Why doesn't it just say so?

Similarly I will not give to a woman a pessary to cause abortion.

There's a reason very few institutions use the original oath, leaving aside the random injunction against operating on kidney stones.

Do you have second thoughts?

Not particularly! I've certainly never had anyone identify a particular person on the basis of a post. The closest was when I was almost geographically doxxed, but the person doing it was acting mostly out of curiosity. There's no way for a casual actor to identify anyone I've described, and it's far too late to deploy the kind of OPSEC that truly motivated actors would have issues cracking. In other words, pray for me and not for anyone else I've written an ink-portrait of.

Could be generational. You seem often to seek out the snark; I'm far more traditionalist.

Well, you're an unusually sincere person. I like to think that I'm usually sincere and honest, but yes, I do enjoy a helping of sarcasm. At the very least, British humor appeals to me on a spiritual level.

My apologies; while I didn't interpret it as a challenge, I was slightly snarky in my reply because of an unrelated internet argument.

When it comes to formal case reports or research publications, there are relatively bright lines doctors are expected to follow. This varies heavily from place to place, but for example, I can use a CT scan of a patient in a publication without their express consent, as long as I make sure things like names or IDs are reasonably redacted.

When it comes to random writing on the internet, there is some grey, but mostly "nobody really cares." If I had mentioned actual names (and someone then raised a complaint) and provided very specific information, the GMC could theoretically come knocking (assuming they could identify me; I doubt Reddit would cooperate, since the GMC isn't the same as the UK government, even if it acts as their attack dog).

I mean, if I was writing about the UK. They don't care what I do in India as long as I don't break local laws or get into trouble with the police/local regulators. If I was in the UK, there is a small but non-zero risk associated, but once again, depends on what exactly I say. The British equivalent of this story, as written, would be fine.

not the Hippocratic oath.

Never swore it. I'm not kidding. Some places don't hold particularly high opinions of some long dead Greek bloke who said that doctors shouldn't operate on kidney stones. Not even the modernized version. It's not legally binding anyway, there are actual laws and professional codes of conduct that supersede it.

Since none of this contains patient-identifiable information, I'm in the clear. And for all anyone else knows, this might be an entirely fictional scenario with all characters simply fractured fragments of my psyche. I am also a dog on the internet, woof!

Beyond that, it depends on the jurisdiction, and even the UK isn't anal enough to come after me for something so trivial and vague.

Love your posts bro.

Thanks <3, whatever level of homo is socially acceptable these days haha.

Write a book.

I do, but it's about a cyborg psychiatrist who does way cooler things than I do. Also on hiatus, because his not-as-cool creator has a lot going on.

If you want a non-fiction book or memoir, I don't think I've quite got the material yet. It usually takes a lifetime to build that up. My job is (thankfully) quite boring and mundane most of the time. I seem to come across something worth writing about once every few months or so, and the majority of the time it makes more sense as an essay.

And come to America, specifically Florida, I want to read your multi part write up dealing with our insurance, our minorities, and our whites.

I would if I could! I still harbor hope of moving to the States one day, at this point I would happily trade all the headaches American doctors face for the ones I have, let alone the massively higher pay. If not, I'm sure I'll visit at some point, and I would happily swing by if you'd have me. What's a gator but a very ornery dog? I can handle those just fine.

E: absolutely insane that you still have a Reddit account that’s 11 years old. I find that sort of thing fascinating as well.

Eh, it's there, I mostly use it to lurk these days and occasionally post. The closest I came to violating Reddit's TOS was Motte-posting, and that hasn't been an issue since I migrated here with everyone else. My engagement levels dropped drastically. Even if I had something to say, there are few places I'd want to say it, or where I'd expect a good reception. Culture War? That's here. Less controversial stuff? I happily crosspost.

In general, I think I'm a pretty good citizen by Reddit standards. I've only once been banned, on /r/SSC of all places for tangentially referring to the Motte as the place for CW issues, and that was quickly overturned on polite appeal. For what it's worth, it's less self-censorship than it is the fact that I do not enjoy engaging with the average Redditor.

Thank you for taking the time to write that up! It aligns with what other neurologists have said on Reddit, and my attempts to dig deeper.

liked staying up late = maybe, just maybe, he may have an inkling that the episodes are more common at night (= nocturnal seizures).

I didn't get that impression, but I'm not going to make strong claims either way; this clinical assessment was far from ideal. If I had the time, I would have drilled deeper, specifically looking for any temporal patterns, but at least the mom didn't mention it. In her words, the boy just liked staying up late, and that's more likely to be because he's got a phone.

Call the Resident, if possible.

Sadly, that probably wouldn't help. It is very difficult to contact a patient like that (EMR? What EMR?) and nobody would bother short of an acute emergency. At least we arranged a followup in a month, and I expect that the other doctor will probably be there. I'll drop him a text anyway, just in case it makes a difference!

The child was quite extroverted and responsive when talking to me or my colleague. If he was the shy type, he's better at hiding it than I am haha.

I can't really comment on his articulacy. My Hindi is far from the best, and his mother was the primary informant. But he sounded... fine?

If this was a one-off? Kids do dumb things for no good reason. So do we adults. But the repeated pattern and general picture points towards something in the DSM and not "just a rambunctious boy child". But what precisely? Impossible to answer authoritatively with the information I have at present. I hope I do get to see the followup and final diagnosis, but I wouldn't bet on it.

For what it's worth, you can use the contact us option in the sidebar to message (all) mods. But it's probably just faster to ping or DM us, I know that I rarely check the general mod mail.

Yup. I've let it out of the cage, @ControlsFreak

Aggression related to panic attacks?

Very unlikely! Even plain old panic attacks would be unusual at that age, let alone such a specific kind of aggression. They're also not usually associated with amnesia or dissociation, more like hyper-focus.

After I posted on /r/Medicine, I had a few actual senior neurologists show up. They lean towards my hypothesis that it's some kind of seizure activity, but there's no consensus on whether it's a temporal lobe seizure, a different kind of focal seizure such as one affecting the frontal lobe, or a slightly different variant called absence seizures that might be causing the sleep issues and poor academic performance. The only real way to know would be an EEG, which will hopefully be arranged the next time they attend (I regret not insisting on it, but I was a guest and deferring to those with more local expertise).

He'd be dead, wouldn't he? Survival time is usually less than a week after symptoms appear, though I'm surprised to learn you can harbor rabies for months or years before symptoms show up.

My mention of rabies was mostly sarcasm. The kid would have a lot of other issues before they (might) end up biting people. It would have been glaringly obvious and even here, with less than perfect triage and routing, very unlikely to show up in the psych OPD. But yes, if it was rabies, he would be done for.

I was about to claim that it's impossible for rabies to be latent for years, but apparently there are a handful of claimed cases?

https://www.nejm.org/doi/full/10.1056/NEJM199101243240401

Rabies infection in these three patients did not originate in the United States but resulted from exposures in Laos, the Philippines, and Mexico. Since the three patients had lived in the United States for 4 years, 6 years, and 11 months, our findings suggest that the onset of the clinical manifestations of rabies occurred after long incubation periods.

I am not sure how much to trust them. Either way, it's rare. But funny excerpt:

The patient's father recalled that the child had been bitten by a neighbor's dog shortly before leaving the Philippines for the United States. The dog was said to have remained healthy and was eaten about a month later.

and the CCP looks like it’s actually going to stand up to him about that

I would like to know more. I've heard about the firings, but not about any signs of the rest of the party developing a backbone.

Oops. Thanks!

Hah. It's only fair that you make it your life's goal to educate me on Heidegger (without asking for consent, though I probably would have given it anyway), you notice something attributed to Heidegger come up in conversation, and then, with dawning dismay, realize that it was a misattribution. I can imagine the disappointment! I revel in schadenfreude!

I'm not competent enough a psychiatrist to answer that question.

I've been aware of this phrase for years, mostly from Reddit. Is there a canonical definition, however? I say this with genuine curiosity / bewilderment. Capitalism, to my mind, is an economic system bounded by certain conditions. I didn't know (and I am dubious) about there being a temporal aspect to it.

"Werner Sombart, who used the phrase Spätkapitalismus (literally "late capitalism") in his 1902 work Der moderne Kapitalismus. Sombart was developing a stage-theory of capitalism, arguing that the system passed through distinct historical phases: early, high, and late. His framework was descriptive and evolutionary, not necessarily apocalyptic."

https://en.wikipedia.org/wiki/Late_capitalism

In the 21st century era of the global Internet, mobile telephones and artificial intelligence, the idea of "late capitalism" is again used in left-wing political discussions about the decadence, degeneration, absurdities and ironies of contemporary business culture, often with the suggestion that capitalism is now getting near the end of its existence (or is already being transformed into a post-capitalism of some sort)

The gist of it is that it's a shibboleth and a cue to boo the outgroup on command.

If there's anything someone dislikes about modern consumerism or globalization, it's a convenient brush to paint with. Gentrification? Late stage capitalism. Rent too damn high? Late stage capitalism. Netflix enshittified its offerings? Late stage capitalism.

The unresolved questions were: "late" in what sense? In comparison to what? How do we know? What could possibly replace capitalism? The liberal economist Paul Krugman stated in 2018 that:

"I've had several interviews lately in which I was asked whether capitalism had reached a dead end, and needed to be replaced with something else. I'm never sure what the interviewers have in mind; neither, I suspect, do they."

Neuroplasticity, as you probably intuited, is basically the mechanism by which brains work at all. Reading rewires brains. Suffering rewires brains. Learning to juggle demonstrably changes cortical gray matter density in a way you can see on an MRI, and nobody is writing Substack posts about the demonic influence of juggling on children.

When someone says "screens rewire brains," the word doing all the actual work is "rewires" in the pejorative sense, meaning "changes in bad ways that are hard to reverse," but that claim is being smuggled in without justification, under cover of a neuroscience fact that's technically true but completely uninformative. Everything that does anything to you rewires your brain. The question is whether the rewiring is bad, and repeating the neuroplasticity point louder doesn't answer that.

It's actually worse than uninformative, because it makes the arguer sound scientific while doing no scientific work whatsoever. The neuroplasticity framing is rhetorical judo: it borrows the authority of neuroscience while gesturing vaguely at harm it has not actually demonstrated.

This matters because it makes the claim unfalsifiable in practice. If a child improves at chess from watching chess videos, that's also rewiring their brain, but presumably Davidson isn't worried about that one. The rewiring point can't distinguish between the two cases, so it isn't doing any of the work it's being credited with. What it's actually doing is priming the listener to accept that harm has been established before the argumentative heavy lifting has begun. I'd rather the harm be argued directly, at which point it would be subject to actual scrutiny, than laundered through the vocabulary of neuroscience.

"Screen time," while far from ideal as terminology, is also far from the worst offense around. The deeper problem is that the category is wildly underdetermined. It seems to matter enormously what the screen displays. A child who spends three hours reading Wikipedia articles about the Byzantine succession crisis, watching a documentary about migratory birds, and then video-calling their grandmother is doing something categorically different from one who has spent those three hours cycling through TikTok thirst traps and casino-mechanic reward loops dressed up as games. Lumping these together under "screens" and then asking whether "screen time" is harmful is a bit like asking whether "food time" is healthy. The answer will depend almost entirely on what food we're talking about, and the aggregate will tell you almost nothing useful.

The medium-is-the-message people have a point that the delivery mechanism shapes the experience in ways content alone doesn't capture. But even granting McLuhan more than he's usually owed, there is still an enormous variance in what screens deliver that gets erased the moment we start talking about "screens" as a unified phenomenon. Calling slot machines "levers" would be a more accurate description than calling all interactive digital media "screens," because at least all levers share the mechanical property of force multiplication. What screens share is a glowing rectangle that displays imagery, which is not doing much analytical work.

A lot of the older empirical literature was also methodologically shabby in ways that should give us pause before crediting its conclusions. Much of it was observational, relied heavily on self-report (or parent-report, which introduces its own distortions), lumped television with TikTok with WhatsApp with gaming with educational apps, and then asked whether the aggregate was good or bad. The effect sizes, when statistically significant at all, were in many cases embarrassingly small. Jean Twenge's widely-cited work was criticized by Andrew Przybylski and Amy Orben, who used the same datasets and found that the association between screen time and adolescent wellbeing was approximately the same magnitude as the association between wearing glasses and adolescent wellbeing. Spectacle-wearing doesn't cause depression; it's a proxy for other things. The same concern applies to screen time, which correlates with socioeconomic status, parenting style, pre-existing behavioral difficulties, and a hundred other things that are doing the actual causal work.

I'd say that it's not worth losing sleep over, except that the most robust and consistent negative findings deal with sleep, specifically that device use near bedtime disrupts both sleep onset and sleep quality, probably through a combination of blue-light effects on melatonin and the obvious fact that you can't scroll and sleep simultaneously. This is worth taking seriously precisely because it's one of the few findings that replicates, has a plausible mechanism, and shows an effect size large enough to matter. The irony, not lost on me, is that "no phones in the bedroom at bedtime" is not a very interesting or monetizable policy conclusion, so it gets lost in the noise of more dramatic claims about societal collapse. Good luck enforcing that for the kids, with how their parents embrace their phones.

Jonathan Haidt thinks children shouldn't be able to post on social media or have smartphone access, and there's something to this if we're being specific about the "posting photos of yourself" piece. The performative identity-construction that social media incentivizes does seem like a weird thing to encourage in adolescents who are in the middle of figuring out who they are, and there's a reasonable case that the particular feedback loops involved are nastier than equivalent analogue experiences of social humiliation, which at least fade from memory. But "no smartphones" as a category encompasses an enormous amount of genuinely useful functionality, and "no posting photos" is a much more targeted and defensible intervention than "no smartphone," which tends to be what people actually mean.

I'm also skeptical of enforcement mechanisms. Not because I think children's online safety doesn't matter, but because I don't trust that the rules will land where the advocates for them seem to expect. Age verification regimes tend to produce either security theater or comprehensive surveillance infrastructure, and comprehensive surveillance infrastructure does not stay narrowly targeted at protecting children for very long. The same legislative sessions that produce "think of the children" bills about social media often produce other bills I would find considerably more alarming. The willingness to build the infrastructure is the thing that should worry us, independent of the stated justification.

I should be honest about my personal stake in this, because it seems relevant. When I was a kid, my ADHD predominantly manifested as inattention. I was notorious for reading novels under the desk in class, reading while walking, compulsively reading every newspaper and the labels on shampoo bottles and the copyright page of books and anything else that had text on it. My parents were extremely conservative about digital affordances during my childhood and adolescence: no broadband internet connection, no smartphone, until late in my teens.

This did nothing good for me. You do not treat ADHD with sensory deprivation. I was not going to pay more attention in class because I didn't have a phone handy; I was just more likely to zone out and stare at a water stain on the ceiling and construct elaborate fantasies about the history of civilizations I'd invented. I was bored, in a persistent and grinding way that I now recognize as one of the more unpleasant features of the condition, and I'm genuinely grateful that advances in technology have made that particular flavor of boredom substantially more optional. ADHD medication improved my academics and my functioning in the world. Austerity did not. The restriction removed a coping mechanism without addressing the underlying issue.

I'm aware that my case doesn't generalize. Plenty of kids are not managing a neurological attention deficit when they're scrolling, they're just enjoying an entertainment product, and there's a reasonable question about whether that entertainment product is well-calibrated for their long-term flourishing. But I'm suspicious of framings that assume the counterfactual to device use is some kind of improving, wholesome activity, rather than the much more realistic counterfactual of staring at the wall, or in my case, reading the back of a cereal box for the fourteenth time.


I've watched a teenage relative of mine scroll through Instagram Reels, and it was not a pleasant experience. None of it was erudite. Most of it was AI-generated, and obviously so to anyone over twenty-five, though apparently not to her. The content was a kind of undifferentiated slurry of dumb pranks, "interesting" facts that were wrong, and videos that seemed designed less to convey anything than to fill attention with sensation. I wanted to say something. I didn't, because it wasn't my call and the headache of saying something would have outweighed the benefit. Also, she isn't a particularly bright kid, as hard as that is to say about your own kin. But I felt, for a moment, what the "screens are demonic" people feel, and I think I understand why they reach for that language.

(Don't get me started on an elderly great-uncle and his consumption of the most ludicrously fake AI-slop on YouTube. I did my best to inform him, but wise words only get you so far at that age.)

The problem is that "demonic" and "insane" and "evil" are not diagnostic, they're expressive. They communicate that the speaker has had a visceral negative reaction, which I also had. What they don't do is tell you anything useful about what the actual harm is, what causes it, how it might be addressed, or how to distinguish between the things that caused the visceral reaction and the much broader category of digital media that gets swept up in the resulting policy proposals. Louise Perry's instinct to distinguish between fairy tales on a screen and watching another child play on YouTube seems right to me, not because one is "screens" and the other isn't, but because they're different things doing different things to a child's attention and social cognition. That distinction is worth making carefully, and the "screens" framing makes it harder rather than easier.


If I were forced to endorse a population-wide intervention, it would be this: device manufacturers and online services should be required to provide genuinely functional parental controls, to be set up at the convenience of the person making the purchase. Not draconian age-restriction policies that produce surveillance infrastructure and don't actually work. Just real tools that let parents do what parents are supposed to do, which is make situated judgments about their specific kid, in their specific circumstances, with their specific needs, rather than relying on either blanket permissiveness or blanket prohibition. A child's use of electronics is something that should be monitored in conjunction with their behavior and academic performance, the same way you'd monitor anything else in their life that was potentially impacting them.

The people most confident that they know the right policy for all children are usually people who have identified a single dimension of risk, optimized hard against it, and are not tracking the costs of their proposed solution. The costs are real. Restriction has costs. Surveillance has costs. Boredom has costs. Social exclusion from peer networks that now largely operate digitally has costs. A child who can't participate in the group chat is not being protected from social life, they're being excluded from it, and that exclusion has downstream consequences that are unlikely to show up in studies asking whether "screen time" correlates with self-reported wellbeing.

Not to mention that if childhood and adolescence are treated as a sort of preparatory phase for adult life, we should ask: are the adults doing anything different? We live on our phones; there are few facets of modern living not mediated by transistors, light-emitting diodes, and the internet. And I think that's great: I have a device in my hands that, for about my weekly wage, allows access to nearly the sum total of human knowledge and the ability to interact with people across the globe with milliseconds of latency. I use it to learn more, say more, do more, and yes, entertain myself. If you can't manage to use such capabilities in an ennobling manner, I'm tempted to declare a skill issue. Don't try and dictate terms for the rest of us; mind your own kids.