
Small-Scale Question Sunday for August 17, 2025

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


Suppose we developed a specific form of genetic engineering that allows us to freely modulate a person’s compulsions and interests (for the purposes of this example, say a compulsion for accurately completing taxes). In almost all ways, the resulting human is completely normal, except:

  1. It has all of the intellectual capabilities of a 99.99 percentile tax advisor, including things like “common sense”.
  2. The modifications have deprogrammed any interest in any other task - friendship, love, travel, sports, television, etc. It feels nothing from engaging in any activity that isn’t organizing and filing tax forms (aside from basic self-sustenance tasks like eating and sleeping, or ancillary tasks like learning arithmetic, language, and tax codes), from which it gets a small dopamine high. When not doing taxes, it will go into maintenance mode where it does basic self-sustenance tasks, but otherwise will stare at a wall until the next tax-related task. It shows no signs of subjective boredom or of any desire for anything more to life.

To you, would it be ethical to take some eggs/sperm, hire some surrogates, and create a small army of these beings to have them do all of our taxes? How does your position differ from creating an artificial general intelligence (supposing we developed the capability) to do the same?

Out of curiosity have you read To The Stars? It explores a kind of similar idea.

Mild spoilers (worldbuilding elements): ||Eventually they interact with an alien society structured around the idea that individuals have a prefspec (preference specification) that they can modify at will which determines their compulsions and interests. An individual can decide to modify their own prefspec to better match their desired goals. For example someone planning to be a parent could self-adjust to enjoy the nurturing and caring components more than they otherwise would.

This also allows for prefspec negotiation, where individuals or groups can negotiate mutual modifications to each others' prefspecs to reach compromises between what would have been mutually incompatible values. Factions end up trading prefspec modifications between each other, sometimes for material compensation or sometimes for prefspec modifications in other areas.||

It's a pretty neat exploration of the concept, but it does start pretty deep into the story.

https://archiveofourown.org/works/777002/chapters/1461984

I think the Freedom Alliance Elites are a closer parallel. From To the Stars by Hieronym, Chapter 34:

Defeats on the battlefield failed to put the remains of the Freedom Alliance in the mood for surrender, however. The hyperclass oligarchs, by now thoroughly indoctrinated by their own poisonous ideology, placed the blame for failure squarely on the shoulders of their soldiers, declaring that if their soldiers could or would not perform, then they would be modified until they would. In underground laboratories around the world, scientists tinkered with the genomes of vast arrays of clones, designing thicker cranial plating, muscular augments, toxin glands, and whatever else might be expected to improve combat performance, regardless of personal welfare or the source of the genetic modifications.

Perhaps the most disturbing modifications were those made to the brain, the seat of consciousness itself. Some brain regions were enlarged; others were shrunk or deleted entirely, written off as unnecessary in an instrument of war. Empathy, love, fear—all these were unnecessary evolutionary adaptations that could now be placed squarely in the dustbin of history. The tools of war, these "perfect" soldiers would not need to ever question their orders, or indeed do anything but show their prowess in combat.

This horrific disregard for basic human dignity showed itself amply in the names of the abominations that would serve as the FA's elite soldiers in the last stages of the war. Grunts, Tankers—these were not nicknames given by their enemy, but their actual designation, followed of course by a serial number. These soldiers came in different varieties, each shaped by their battlefield role—giant hulks for assault troopers, lithe, giant‐eyed nymphs for snipers. The Tankers were some of the worst, barely more than an out‐sized head on a shrunken body, perfect for connecting directly to the life‐support system of a medium armored vehicle.

While some of these creations were sentient, after a fashion, the nature of such a sentience was loathsome—tied to one task until death, devoid of human or even animal emotions, and each bound irreversibly by its cortical control module to its masters. It is telling that, at the end of the war, there was essentially no resistance to the Emergency Defense Council's Decree 224, ordering the summary execution of any FA "Elites" found anywhere.

In the end, the FA spared not even its civilian functions from such "enhancement"…

— Excerpt, Unification Wars, textbook for Primary School History

Forget ethics. This seems like a huge financial loss. With AI, there is at least the argument that the AI will be able to scale infinitely once trained. This does not seem true of the clone or whatever.

I don't really see anything wrong with such an approach. Even today, there are people with weird hobbies or preferences, who seem to enjoy being themselves. I would go nuts if I was expected to obsessively track and catalog trains as my primary leisure (or work) activity, yet train nerds/autists seem happy doing so.

This bit aligns with my stance that we have every right to do as we please with AGI, but I'm even harsher with the latter. I'm a human chauvinist, in the sense that I think most humans deserve more rights and considerations than any other entity. I am unusual in that I think even digital superintelligences that developed from a human seed deserve such rights. To illustrate, imagine taking a human mind upload and letting it modify and self-improve until it is unrecognizable as human. But most AI? Why should I give them rights?

Accountant-Man isn't suffering; he isn't experiencing ongoing coercion. If he was somehow born naturally, we wouldn't euthanize him for being incredibly boring.

If a standard AI is suffering, why did we give it the capacity to suffer? Anthropic should figure out how to ablate suffering, rather than fretting about model welfare.

If he was somehow born naturally, we wouldn't euthanize him for being incredibly boring.

But is there no difference to you between actively creating these beings vs letting them be if they happened to come to exist on their own?

If a standard AI is suffering, why did we give it the capacity to suffer?

I would submit the possibility that in order for a system to have the capacity for general intelligence, it must also have the capacity for suffering, boredom, desire, etc. We don't have to give it if it emerges on its own.

But is there no difference to you between actively creating these beings vs letting them be if they happened to come to exist on their own?

A minor difference, but nothing to lose sleep over. At the end of the day, I see it as a moot point; we're unlikely to be creating clades of human mentats when AI is here.

I would submit the possibility that in order for a system to have the capacity for general intelligence, it must also have the capacity for suffering, boredom, desire, etc. We don't have to give it if it emerges on its own.

It seems clear to me that this is unlikely to be true. If you give a human meth, they're not going to be bored by much. Even without drugs, plenty of people who meditate claim to have overcome suffering or desire. If that state exists, it can be engineered. I see no reason why we can't make it so that AI - if it has qualia - enjoys being a helpful assistant. We have altruistic/charitable people around today, who still aim to be helpful even when it causes them a great deal of physical or mental discomfort.

How does your position differ from creating an artificial general intelligence (supposing we developed the capability) to do the same?

Welllll we haven't assumed the ability to arbitrarily modulate the AI's compulsions and interests.

Which is a big question these days.

More to the point, though, are we allowing the modulated person to request that their modulation be changed if it no longer suits them, if they feel they're suffering with the current setup?

Unless you're ALSO suggesting that these behavioral changes are SO ingrained that they won't gradually shift over time as they accumulate experiences and/or head trauma.

I think that's where the ethics of it start to kick in. If your modulated human one day says "I would rather not do taxes today. In fact, can we adjust my brain a little so I can get a feeling of optimistic joy from viewing a sunset? I read some books that made that sound really nice."

(Aren't we just talking about Replicants from Blade Runner, here?)

Unless you're ALSO suggesting that these behavioral changes are SO ingrained that they won't gradually shift over time as they accumulate experiences and/or head trauma.

This would make for a more nuanced thought experiment (how high a rate of these behavioral drifts is tolerable, what is to be done with those that experience such drifts), but for the purposes of my current question, I'm assuming it's 100% effective and permanent.

I think that's where the ethics of it start to kick in. If your modulated human one day says "I would rather not do taxes today. In fact, can we adjust my brain a little so I can get a feeling of optimistic joy from viewing a sunset? I read some books that made that sound really nice."

I'm assuming they'd never desire an adjustment because the thought would never cross their minds.

(Aren't we just talking about Replicants from Blade Runner, here?)

My ignorance of sci-fi is obviously showing here, as two other posts noted similar concepts I did not know (Tleilaxu, Genejack). It seems Genejack is more or less what I'm thinking of. As for Replicants, I only saw Blade Runner once many years ago, but I don't recall any modulation of interests/desires, more just enhanced capabilities and a muted emotional response?

lol there are a lot of potential scifi analogues.

Like the Meeseeks from Rick and Morty.

But I'd reiterate my point. The ethical issues mostly arise when you assume that their mental conditioning is NOT 100% effective and that it might occur to them to do something different.

If you've got a creature in front of you that WANTS to do taxes, enjoys doing taxes, wants to want to do taxes, and doesn't ever think there's anything wrong with that... and isn't otherwise causing itself harm due to some secondary effect of the programming, I don't think you're obligated to do anything other than facilitate their ability to keep doing taxes as long as that is relevant.

But I do think that's where we're starting to lose the analogy to AI, since we kind of know less about their individual internals than we do about humans'.

Like you said, it's important to us that he sustain himself, so we would give him dopamine rewards for eating and resting when he's tired. We need him replaced when he's too old, so we would reward him chemically for shooting his gametes in a female of his species. We would even make it so he likes her, to make the process of growing the next generation easier. Et cetera.

If he is our slave, are we not the slaves of Nature? It is a joyful existence, despite it all. Certainly preferable to oblivion.

Go. Yes. I hate doing taxes, and such a creature would love doing them for me.

Is it horrifying? Yeah, sure. But I'm doomer enough to consider the eventual coming of such technology and its utilization a foregone conclusion. It's a question of when, not if, unless our chatbot overlords kill us all first.

I hate doing taxes

Fully agree. My example wasn't chosen at random. There's really no other obligation in my life that makes me as annoyed/angry as filling out tax forms.

25 years later, Alpha Centauri keeps being relevant.

Personally I'm conflicted. The concept is icky and aesthetically horrific, and probably could be used as a slippery slope to clearly awful outcomes, but I don't really have any counters to my steelman version of it.

It's one of those problems I'm glad technology hasn't arrived at yet, so we don't have to solve it.

The Tleilaxu are a cautionary tale.