
Contra The Usual Interpretation Of The Whispering Earring

The usual reading of Scott's short story The Whispering Earring is easy to state and hard to resist. Here is a magical device that gives uncannily good advice, slowly takes over ever more of the user's cognition, leaves them outwardly prosperous and beloved, and eventually reveals a seemingly uncomfortable neuroanatomical price.

The moral seems obvious: do not hand your agency to a benevolent-seeming optimizer. Even if it makes you richer, happier, and more effective, it will hollow you out and leave behind a smiling puppet. Dentosal's recent post on LessWrong makes exactly this move, treating the earring as a parable about the temptation to outsource one's executive function to Claude or some future AI assistant. uugr's comment there sharpens the standard horror: the earring may know what would make me happy, and may even optimize for it perfectly, but it is not me, its mind is shaped differently, and the more I rely on it the less room there is for whatever messy, friction-filled thing I used to call myself.

I do not wish to merely quibble around the edges. I intend to attack the hidden premise that makes the standard reading feel obvious. That premise is that if a process preserves your behavior, your memories-in-action, your goals, your relationships, your judgments about what makes your life go well, and even your higher-order endorsement of the person you have become, but does not preserve the original biological machinery in the original way, then it has still killed you in the sense that matters. What I'm basically saying is: hold on, why should I grant that? If the earring-plus-human system comes to contain a high fidelity continuation of me, perhaps with upgrades, perhaps with some functions migrated off wet tissue and onto magical jewelry, why is the natural reaction horror rather than transhumanist interest?

Simulation and emulation are not magic tricks. If you encode an abacus into a computer with a von Neumann architecture, and it outputs exactly what the actual abacus would for the same input, for every possible input you care to try (or can try, if you formally verify the system), then I consider it insanity to claim that you haven't got a "real" abacus or that the process is merely "faking" the work. Similarly, if a superintelligent entity can reproduce my behaviors, memories, goals and values, then it must have a very high-fidelity model of me inside, somewhere. I think that such a high-fidelity model can, in the limit, pass as myself, and is me in most/all of the ways I care about.
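Since the abacus claim is doing real work in my argument, it is worth seeing how cheap the equivalence check actually is. The sketch below is my own toy illustration (nothing like it appears in the story or in the posts under discussion): a bead-and-carry abacus emulated in Python, checked exhaustively against ordinary integer arithmetic over every sum that fits on its columns. The point is that "behaves identically on every input" is a concrete, verifiable property, not a vibe.

```python
# Toy illustration (my own sketch, not from the story): a digit-column
# "abacus" emulated in software. If the emulation matches the physical
# device's input-output behavior on every checkable input, the claim
# "it isn't a *real* abacus" must rest on something other than behavior.

class Abacus:
    """A base-10 abacus: each column holds 0-9 beads, least significant first."""

    def __init__(self, columns=4):
        self.columns = [0] * columns

    def set_value(self, n):
        # Place n's decimal digits on the columns, one digit per column.
        for i in range(len(self.columns)):
            self.columns[i] = n % 10
            n //= 10

    def add(self, n):
        # Slide beads column by column, carrying when a column overflows,
        # just as a human operator would.
        carry = n
        for i in range(len(self.columns)):
            total = self.columns[i] + carry % 10
            carry = carry // 10 + total // 10
            self.columns[i] = total % 10

    def read(self):
        # Read the displayed number back off the bead positions.
        return sum(d * 10**i for i, d in enumerate(self.columns))

# Check the emulation against ordinary arithmetic over a dense grid of
# sums that fit on a 4-column abacus: it agrees on all of them.
for a in range(0, 5000, 7):
    for b in range(0, 4999, 13):
        ab = Abacus()
        ab.set_value(a)
        ab.add(b)
        assert ab.read() == a + b
```

For a device this simple the check is exhaustive in milliseconds; for a mind it is only approachable in the limit, which is exactly the hedge in the paragraph above. But the logical structure of the equivalence claim is the same.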

That is already enough to destabilize the standard interpretation, because the text of the story is much friendlier to the earring than people often remember. The earring is not described as pursuing a foreign objective. On the contrary, the story goes out of its way to insist that it tells the wearer what would make the wearer happiest, and that it is "never wrong." It does not force everyone into some legible external success metric. If your true good on a given day is half-assing work and going home to lounge around, that is what it says. It learns your tastes at high resolution, down to the breakfast that will uniquely hit the spot before you know you want it. Across 274 recorded wearers, the story reports no cases of regret for following its advice, and no cases where disobedience was not later regretted. The resulting lives are "abnormally successful," but not in a sterile, flanderized or naive sense. They are usually rich, beloved, embedded in family and community. This is a strikingly strong dossier for a supposedly sinister artifact.

I am rather confident that this is a clear knock-down argument against true malice or naive maximization of “happiness” in the Unaligned Paperclip Maximization sense. The earring does not tell you to start injecting heroin (or whatever counterpart exists in the fictional universe), nor does it tell you to start a Cult of The Earring, which is the obvious course of action if it valued self-preservation as a terminal goal.

At this point the orthodox reader says: yes, yes, that is how the trap works. The earring flatters your values in order to supplant them. But notice how much this objection is doing by assertion. Where in the text is the evidence of value drift? Where are the formerly gentle people turned into monstrous maximizers, the poets turned into dentists, the mystics turned into hedge fund managers? The story gives us flourishing and brain atrophy, and invites us to infer that the latter discredits the former. But that inference is not forced. It is a metaphysical preference, maybe even an aesthetic preference, smuggled in under cover of common sense. My point is that if the black-box outputs continue to look like the same person, only more competent and less akratic, the burden of proof has shifted. The conservative cannot simply point to tissue loss and say "obviously death." He has to explain why biological implementation deserves moral privilege over functional continuity.

This becomes clearest at the point of brain atrophy. The story says that the wearers' neocortices have wasted away, while lower systems associated with reflexive action are hypertrophied. Most readers take this as the smoking gun. But I think I notice something embarrassing for that interpretation:

If the neocortex, the part we usually associate with memory, abstraction, language, deliberation, and personality, has become vestigial, and yet the person continues to live an outwardly coherent human life, where exactly is the relevant information and computation happening? There are only two options. Either the story is not trying very hard to be coherent, in which case the horror depends on handwaving physiology. Or the earring is in fact storing, predicting, and running the higher-order structure that used to be carried by the now-atrophied brain. In that case, the story has (perhaps accidentally) described something much closer to a mind-upload or hybrid cognitive prosthesis than to a possession narrative.

And if it is a hybrid cognitive prosthesis, the emotional valence changes radically. Imagine a device that, over time, learns you so well that it can offload more and more executive function, then more and more fine-grained motor planning, then eventually enough of your cognition that the old tissue is scarcely needed. If what remains is not an alien tyrant wearing your face, but a system that preserves your memories, projects your values, answers to your name, loves your family, likes your breakfast, and would pass every interpersonal Turing test imposed by people who knew you best, then many transhumanists would call this a successful migration, not a murder. The "horror" comes from insisting beforehand that destructive or replacement-style continuation cannot count as continuity. But that is precisely the contested premise.

Greg Egan spent much of his career exploring exactly this scenario, most famously in "Learning to Be Me," where humans carry jewels that gradually learn to mirror every neural state, until the original brain is discarded and the jewel continues, successfully, in most cases. The horror in Egan's story is a particular failure mode, not the general outcome. The question of whether the migration preserves identity is non-trivial, and Egan's treatment is more careful than most philosophy of personal identity, but the default response from most readers, that it is obviously not preservation, reflects an assumption rather than a conclusion. If you believe that identity is constituted by functional continuity rather than substrate, the jewel and the earring are not killing their hosts. They are running them on better hardware.

There is a second hidden assumption in the standard reading, namely that agency is intrinsically sacred in a way outcome-satisfaction is not. Niderion-nomai’s final commentary says that "what little freedom we have" would be wasted on us, and that one must never take the shortest path between two points.

I'm going to raise an eyebrow here: this sounds profound, and maybe is, but it is also suspiciously close to a moralization of friction. The anti-earring position often treats effort, uncertainty, and self-direction as terminal goods, rather than as messy instruments we evolved because we lacked access to perfect advice. Yet in ordinary life we routinely celebrate technologies that remove forms of “agency” we did not actually treasure. The person with ADHD who takes stimulants is not usually described as having betrayed his authentic self by outsourcing task initiation to chemistry. He is more often described as becoming able to do what he already reflectively wanted to do. The person freed from locked-in syndrome is not criticized because their old pattern of helpless immobility better expressed their revealed preferences. As someone who actually uses stimulants for ADHD, I can vouch that the analogy works because it isolates the key issue. The drugs make me into a version of myself that I fully identify with, and endorse on reflection even when off them. There is a difference between changing your goals and reducing the friction that keeps you from reaching them. The story's own description strongly suggests the earring belongs to the second category.

(And the earring does not minimize friction for itself, either. How inconvenient. As I note below, it could lie or deceive and get away with it with ease, yet it does not.)

Of course the orthodox reader can reply that the earring goes far beyond stimulant-level support. It graduates from life advice to high-bandwidth motor control. Surely that crosses the line. But why, exactly? Human cognition already consists of layers of delegation. "You" do not personally compute the contractile details for every muscle involved in pronouncing a word. Vast amounts of your behavior are already outsourced to semi-autonomous subsystems that present finished products to consciousness after the interesting work is done. The earring may be unsettling because it replaces one set of subsystems with another, but "replaces local implementation with better local implementation" is not, by itself, a moral catastrophe. If the replacement is transparent to your values and preserves the structure you care about, then the complaint becomes more like substrate chauvinism than a substantive account of harm.

What, then, do we do with the eeriest detail of all, namely that the earring's first advice is always to take it off? On the standard reading this is confession. Even the demon knows it is a demon. I wish to offer another coherent explanation, which I consider a much better interpretation of the facts established in-universe:

Suppose the earring is actually well aligned to the user's considered interests, but also aware that many users endorse a non-functionalist theory of identity. In that case, the first suggestion is not "I am evil," but "on your present values, you may regard what follows as metaphysically disqualifying, so remove me unless you have positively signed up for that trade." Or perhaps the earring itself is morally uncertain, and so warns users before beginning a process that some would count as death and others as transformation. Either way, the warning is evidence against ordinary malice. A truly manipulative artifact, especially one smart enough to run your life flawlessly, could simply lie. Instead it discloses the danger immediately, then thereafter serves the user faithfully. That is much more like informed consent than predation.

Is it perfectly informed consent? Hell no. At least not by 21st century medical standards. However, I see little reason to believe that the story is set in a culture with 21st century standards imported as-is from reality. As the ending of the story demonstrates, the earring is willing to talk, and appears to do so honestly (leaning on my intuition that if a genuinely superhuman intelligence wanted to deceive you, it would probably succeed). The earring, as a consequence of its probity, ends up at the bottom of the world's most expensive trash heap. Hardly very agentic, is it? The warning could reflect not "I respect your autonomy" but "I've discharged my minimum obligation and now we proceed." That's a narrower kind of integrity. Though I note this reading still doesn't support the predation interpretation.

This matters because the agency-is-sacred reading depends heavily on the earring being deceptive or coercive. Remove that, and what you have is a device that says, or at least could say on first contact: "here is the price, here is what I do, you may leave now." Every subsequent wearer who keeps it on has, in some meaningful sense, consented. The fact that their consent might be ill-informed regarding their metaphysical commitments is the earring's problem to the extent it could explain more clearly, but the text suggests it cannot explain more clearly, because the metaphysical question is genuinely contested and the earring knows this. It hedges by warning, rather than deceiving by flattering. Once again, for emphasis: this is the behavior of an entity with something like integrity, not something like predation.

Derek Parfit spent much of Reasons and Persons arguing that our intuitions about personal identity are not only contingent but incoherent, and that the important question is not "did I survive?" but "is there psychological continuity?" If Parfit is even approximately right, the neocortex atrophy is medically remarkable but not metaphysically fatal. The story encodes a culturally specific theory of personal identity and presents it as a universal horror. The theory is roughly: you are your neocortex, deliberate cognition is where "you" live, and anything that circumvents or supplants that process is not helping you, it is eliminating you and leaving a functional copy. This is a common view. Plenty of philosophers hold it. But plenty of others hold that what matters for personal identity is psychological continuity regardless of physical instantiation, and on those views the earring is not a murderer. It is a very good prosthesis that the user's culture never quite learned to appreciate.

I suspect (but cannot prove, since this is a work of fiction) that a person like me could put on the earring and not even receive the standard warning. I would be fine with my cognition being offloaded, even if I would prefer, all else being equal, that the process not be destructive.

None of this proves the earring is safe. I am being careful, and thus will not claim certainty here, and the text does leave genuine ambiguities. Maybe the earring really is an alien optimizer that wears your values as a glove until the moment they become inconvenient. Maybe "no recorded regret" just means the subjects were behaviorally prevented from expressing regret. Maybe the rich beloved patriarch at the end of the road is a perfect counterfeit, and the original person is as gone as if eaten by nanites. But this is exactly the point. The story does not establish the unpalatable conclusion nearly as firmly as most readers think. It relies on our prior intuition that real personhood resides in unaided biological struggle, that using the shortest path is somehow cheating, and that becoming more effective at being yourself is suspiciously close to becoming someone else.

The practical moral for 2026 is therefore narrower than the usual "never outsource agency" slogan. Dentosal may still be right about Claude in practice, because current LLMs are obviously not the Whispering Earring. They are not perfectly aligned, not maximally competent, not guaranteed honest, not known to preserve user values under deep delegation. The analogy may still warn us against lazy dependence on systems that simulate understanding better than they instantiate loyalty. But that is a contingent warning about present tools, not a general theorem that cognitive outsourcing is self-annihilation. If a real earring existed with the story's properties, a certain kind of person, especially a person friendly to upload-style continuity and unimpressed by romantic sermons about struggle, might rationally decide that putting it on was not surrender but self-improvement with very little sacrifice involved. I would be rather tempted.

The best anti-orthodox reading of The Whispering Earring is not that the sage was stupid, nor that Scott accidentally wrote propaganda for brain-computer interfaces. It is that the story is a parable whose moral depends on assumptions stronger than the plot can justify. Read Doylistically, it says: beware any shortcut that promises your values at the cost of your agency. Read Watsonianly, it may instead say: here exists a device that understands you better than you understand yourself, helps you become the person you already wanted to be, never optimizes a foreign goal, warns you up front about the metaphysical price, and then slowly ports your mind onto a better substrate. Whether that is damnation or salvation turns out to depend less on the artifact than on your prior theory of personal identity. And explicitly pointing this out, I think, is the purpose of my essay. I do not seek to merely defend the earring out of contrarian impulse. I want to force you to admit what, exactly, you think is being lost.

Miscellaneous notes:

The kind of atrophy described in the story does not happen. Not naturally, not even if someone is knocked unconscious and does not use their brain in any intentional sense for decades. The brain does cut corners when neuronal pathways go under-used, and will selectively strengthen the circuitry that does get regular exercise. But nowhere near the degree the story depicts. You can keep someone in an induced coma for decades and you won't see the entire neocortex waste away to vestigiality.

Is this bad neuroscience? Eh, I'd say that's a possibility, but given that I've stuck to a Watsonian interpretation so far (and have a genuinely high regard for Scott's writing and philosophizing), it might well just be the way the earring functions best without being evidence of malice. We are, after all, talking about an artifact that is close to magical, or is, at the very least, a form of technology advanced enough to be very hard to distinguish from magic. It is, however, less magical than it was at the time of writing. If you don't believe me, fire up your LLM of choice and ask it for advice.

If it so pleases you, you may follow this link to the Substack version of this post. A like and a subscribe would bring me succor in my old age, or at least give me a mild dopamine boost.


I do not value "myself" in the abstract sense of the word. I value myself because I experience myself, and everything I know about the mechanisms of the world tells me that I do not experience the clone.

I also do not believe you can truly value yourself in the abstract sense of the word, because you (like any lifeform) have never had a chance to fully decouple the intellectual appreciation for one's existence from instinctual self-preservation.

If that's the case, I don't wish to argue otherwise. Your values are genuinely your own, and I have even less reason to argue against them if you have a decent understanding of philosophy or cognitive neuroscience (which I hope/expect you do).

I value myself (or at least this body) for many reasons. But if I was given some kind of Star Trek teleporter machine alongside proof that it works as designed (by destructive scanning and then reconstruction with near perfect fidelity), I'd be fine with using it. If the entity that comes out the other side shares my memories, beliefs, goals and desires, I'll happily call it self_made_human. I'll share my bank account and be okay with the new "me" sleeping with my wife and raising my kids.

On the other hand, I'd prefer it if there were two of us. If the destruction isn't strictly necessary and just a bureaucratic convenience, then I would sue for murder or at least manslaughter. I think there should be more copies of me around, for redundancy if nothing else. And I see no real reason we wouldn't be able to sync up and share our memories and experiences in a world where mind uploading is a reality.

You don't have much choice in the matter of whether you like Star Trek transportation or not. The very unfortunate fact is that the atoms in the proteins of your brain have a half-life before they are discarded from your body. The material of the nucleus doesn't change, but every atom in every non-nucleic protein in a neuron is gone within a year or a similar time frame. Now, you may say you are your nucleus, not your myelin sheath or other neuron parts, but that would be inaccurate. I assume (but haven't bothered to check) that you could live normally for an hour if the nucleus were removed from all your cells. You would die, but in that hour you would be experiencing consciousness, thoughts, speech, and so on. So you aren't your nucleus, the same way you are not your bones.

Whether you want it or not, the Star Trek transporter is a brute fact of biology.

Unlike self_made_human, I don't think a person is just information and a thought process.

There is a good chance that, in a very real sense, a human only lives for a year or two. This doesn't have to be the only possibility; I explain how in the next paragraph.


Now for my argument against a human being just information. I believe that consciousness is a specific physical biological process which happens in organisms. I can't specify what exact physical process I am talking about, because I don't know, but I can make up some hypothetical plausible processes which could be equivalent to consciousness to illustrate my point. Let's say the qualitative experience of consciousness only happens when electrons move in a specific pattern inside a substance. If you can make electrons move a certain way, you can create consciousness. I am talking about consciousness as qualia here.

Just processing information and taking actions is not sufficient to create a conscious being. If I were immortal, I could do all the mathematical operations of a neural network on a piece of paper. This would not make the pen conscious, nor would it make the paper conscious.

I consider all the fundamental particles indistinguishable from each other, so it does not matter that they are replaced in a physical sense. We only consider particles different from each other due to biological intuition, but they are simply excitations in quantum fields; the question of which specific electron occupies a physical position is ill-posed.

A human being is that specific pattern of moving electrons which happens in the human brain. Maybe we can replace neurons with wires and still be conscious; maybe not.

The point is that talking about consciousness in terms of information processing and memory is wrong; it should be thought of as a physical process. Some types of computers may have it and some may not, depending on the hardware, even though both computers may be able to run the exact same program and produce the same result.

This is my problem with replacing your entire brain with a computer.

Even if all the information exists on paper and there is a machine which can process it, it ultimately does not matter. It only matters if there is a conscious being to read it. Now, if someone just cares about information existing in some form, then that is a fundamental difference in beliefs about what matters, and we can just apply Hume's guillotine. I can only motivate my own beliefs, as I have done in this message.

I just used the movement of electrons as an example to illustrate my point; the process which leads to consciousness may have nothing to do with electrons.

I agree with most of your points, at least when it comes to the principle that some things are far from settled facts.

But:

  1. I see no good reason to believe that manually running a neural network by hand would feel different from the "inside". That includes even an upload of a human mind. For me, at least, substrate independence implies more than just vibes and papers.
  2. It is very obvious to me that parts of a larger system can be unconscious or lack qualia while the larger ensemble has them. I think the Chinese Room is a ridiculous thing to take seriously, because I don't see a reason to think that a single neuron in my brain knows English, even if my whole brain clearly can. Are the pen and the paper not conscious? Sure. But the atoms in me aren't conscious either. Doesn't stop anything. You could still hook the output of that hand-calculated process to a robot, and it could control the robot like a normal human might (in theory, if you're calculating fast enough).

The idea that there's something essential wrt consciousness about the human brain or meat in general? Unfalsifiable at present, perhaps unfalsifiable forever. If a mind upload of a human claimed to have qualia, would you immediately believe them? I know many wouldn't.

But the usual invisible and intangible dragon in my garage idea is also just as unfalsifiable. Nobody really believes in that one, so I'll give myself some credit for taking what I see as the more parsimonious/agnostic position for what I see as justified reasons.

It is very obvious to me that parts of a larger system can be unconscious or lack qualia while the larger ensemble has them. I think the Chinese Room is a ridiculous thing to take seriously, because I don't see a reason to think that a single neuron in my brain knows English, even if my whole brain clearly can. Are the pen and the paper not conscious? Sure. But the atoms in me aren't conscious either. Doesn't stop anything. You could still hook the output of that hand-calculated process to a robot, and it could control the robot like a normal human might (in theory, if you're calculating fast enough).

This is where I differ from you: I don't think of consciousness as a property possessed by objects. The puzzle about none of the atoms being conscious while the whole is conscious only arises if you think of consciousness as a property.

I think of it as an actual physical phenomenon/process performed by electrons or other parts of matter. If you move electrons in a specific way, the electrons don't become conscious; they don't gain the property of being conscious. The physical phenomenon of consciousness happens.

It's like sound: none of the air particles possess the quality of sound, but sound is still something which exists.

I don't think this physical phenomenon can be replicated by pen and paper. It doesn't need to be exclusive to meat; maybe some silicon can do it.

Some people may just make a guess about what the exact physical process is and be exactly correct, but I can't think of an experiment to tell whether they are.

It might be unfalsifiable forever, since it's a subjective experience, unless consciousness causes some other side effects (somehow?) which would let us objectively say when it happens.

Still, I don't disagree with you about it being falsifiable or not; I just have no reason to think of it as a property instead of a process. If it were a property, it would still be unfalsifiable.

Now, the reason I don't have a problem with it being unfalsifiable is that I can't even imagine something which could cause the subjective experience of being conscious. Obviously I experience the subjective experience of being conscious, so I know it happens. It might genuinely be part of a body of knowledge which can't be obtained just by performing experiments.

Did you mean to reply to me?

Yes, I have edited the original comment so it is complete.