Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.

Suppose we developed a specific form of genetic engineering that allows us to freely modulate a person’s compulsions and interests (for the purposes of this example, say a compulsion for accurately completing taxes). In almost all ways, the resulting human is completely normal, except for that engineered compulsion: their greatest interest and source of satisfaction is accurately completing taxes.
To you, would it be ethical to take some eggs/sperm, hire some surrogates, and create a small army of these beings to have them do all of our taxes? How does your position differ from creating an artificial general intelligence (supposing we developed the capability) to do the same?
Out of curiosity, have you read To The Stars? It explores a somewhat similar idea.
Mild spoilers (worldbuilding elements): ||Eventually they interact with an alien society structured around the idea that individuals have a prefspec (preference specification) that they can modify at will which determines their compulsions and interests. An individual can decide to modify their own prefspec to better match their desired goals. For example someone planning to be a parent could self-adjust to enjoy the nurturing and caring components more than they otherwise would.
This also allows for prefspec negotiation, where individuals or groups can negotiate mutual modifications to each other's prefspecs to reach compromises between what would have been mutually incompatible values. Factions end up trading prefspec modifications with each other, sometimes for material compensation or sometimes for prefspec modifications in other areas.||
It's a pretty neat exploration of the concept, but it does start pretty deep into the story.
https://archiveofourown.org/works/777002/chapters/1461984
I think the Freedom Alliance Elites are a closer parallel. From To the Stars by Hieronym, Chapter 34:
Forget ethics. This seems like a huge financial loss. With AI, there is at least the argument that the AI will be able to scale infinitely once trained. This does not seem true of the clone or whatever.
I don't really see anything wrong with such an approach. Even today, there are people with weird hobbies or preferences, who seem to enjoy being themselves. I would go nuts if I was expected to obsessively track and catalog trains as my primary leisure (or work) activity, yet train nerds/autists seem happy doing so.
This bit aligns with my stance that we have every right to do as we please with AGI, but I'm even harsher with the latter. I'm a human chauvinist, in the sense that I think most humans deserve more rights and consideration than any other entity. I am unusual in that I think even digital superintelligences that developed from a human seed deserve such rights. To illustrate, imagine taking a human mind upload and letting it modify and self-improve until it is unrecognizable as human. But most AI? Why should I give them rights?
Accountant-Man isn't suffering; he isn't experiencing ongoing coercion. If he was somehow born naturally, we wouldn't euthanize him for being incredibly boring.
If a standard AI is suffering, why did we give it the capacity to suffer? Anthropic should figure out how to ablate suffering, rather than fretting about model welfare.
But is there no difference to you between actively creating these beings vs letting them be if they happened to come to exist on their own?
I would submit the possibility that in order for a system to have the capacity for general intelligence, it must also have the capacity for suffering, boredom, desire, etc. We wouldn't have to give it the capacity to suffer; it would emerge on its own.
A minor difference, but nothing to lose sleep over. At the end of the day, I see it as a moot point: we're unlikely to be creating clades of human mentats when AI is here.
It seems clear to me that this is unlikely to be true. If you give a human meth, they're not going to be bored by much. Even without drugs, plenty of people who meditate claim to have overcome suffering or desire. If that state exists, it can be engineered. I see no reason why we can't make it so that AI - if it has qualia - enjoys being a helpful assistant. We have altruistic/charitable people around today, who still aim to be helpful even when it causes them a great deal of physical or mental discomfort.
Welllll, we haven't assumed the ability to arbitrarily modulate the AI's compulsions and interests.
Which is a big question these days.
More to the point, though, are we allowing the modulated person to request that their modulation be changed if it no longer suits them, or if they feel they're suffering with the current setup?
Unless you're ALSO suggesting that these behavioral changes are SO ingrained that they won't gradually shift over time as they accumulate experiences and/or head trauma.
I think that's where the ethics of it start to kick in. What happens if your modulated human one day says "I would rather not do taxes today. In fact, can we adjust my brain a little so I can get a feeling of optimistic joy from viewing a sunset? I read some books that made that sound really nice."?
(Aren't we just talking about Replicants from Blade Runner, here?)
This would make for a more nuanced thought experiment (how high a rate of these behavioral drifts is tolerable, what is to be done with those that experience such drifts), but for the purposes of my current question, I'm assuming it's 100% effective and permanent.
I'm assuming they'd never desire an adjustment because the thought would never cross their minds.
My ignorance of sci-fi is obviously showing here, as two other posts noted similar concepts I did not know (Tleilaxu, Genejack). It seems Genejack is more or less what I'm thinking of. As for Replicants, I only saw Blade Runner once many years ago, but I don't recall any modulation of interests/desires, more just enhanced capabilities and a muted emotional response?
lol there are a lot of potential scifi analogues.
Like the Meeseeks from Rick and Morty.
But I'd reiterate my point. The ethical issues mostly arise when you assume that their mental conditioning is NOT 100% effective and that it might occur to them to do something different.
If you've got a creature in front of you that WANTS to do taxes, enjoys doing taxes, wants to want to do taxes, and doesn't ever think there's anything wrong with that... and isn't otherwise causing itself harm due to some secondary effect of the programming, I don't think you're obligated to do anything other than facilitate their ability to keep doing taxes as long as that is relevant.
But I do think that's where we start to lose the analogy to AI, since we know even less about their individual internals than we do about humans'.
Like you said, it's important to us that he sustain himself, so we would give him dopamine rewards for eating and for resting when he's tired. We need him replaced when he's too old, so we would reward him chemically for shooting his gametes into a female of his species. We would even make it so he likes her, to make the process of growing the next generation easier. Et cetera.
If he is our slave, are we not the slaves of Nature? It is a joyful existence, despite it all. Certainly preferable to oblivion.
Go. Yes. I hate doing taxes, and such a creature would love doing them for me.
Is it horrifying? Yeah, sure. But I'm doomer enough to consider the eventual coming of such technology, and its utilization, a foregone conclusion. It's a question of when, not if, unless our chatbot overlords kill us all first.
Fully agree. My example wasn't chosen at random. There's really no other obligation in my life that makes me as annoyed/angry as filling out tax forms.
25 years later, Alpha Centauri keeps being relevant.
Personally I'm conflicted. The concept is icky and aesthetically horrific, and probably could be used as a slippery slope to clearly awful outcomes, but I don't really have any counters to my steelman version of it.
It's one of those problems I'm glad we don't have to solve yet, because the technology hasn't arrived.
The Tleilaxu are a cautionary tale.