Small-Scale Question Sunday for August 17, 2025

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


Suppose we developed a specific form of genetic engineering that allows us to freely modulate a person’s compulsions and interests (for the purposes of this example, say a compulsion for accurately completing taxes). In almost all ways, the resulting human is completely normal, except:

  1. It has all of the intellectual capabilities of a 99.99th-percentile tax advisor, including things like “common sense”.
  2. The modifications have deprogrammed any interest in any other pursuit - friendship, love, travel, sports, television, etc. It feels nothing from engaging in any activity that isn’t organizing and filing tax forms (aside from basic self-sustenance tasks like eating and sleeping, or ancillary tasks like learning arithmetic, language, and tax codes), from which it gets a small dopamine high. When not doing taxes, it goes into a maintenance mode where it performs basic self-sustenance tasks but otherwise stares at a wall until the next tax-related task. It shows no signs of subjective boredom or of any desire for anything more out of life.

To you, would it be ethical to take some eggs/sperm, hire some surrogates, and create a small army of these beings to have them do all of our taxes? How does your position differ from creating an artificial general intelligence (supposing we developed the capability) to do the same?

I don't really see anything wrong with such an approach. Even today, there are people with weird hobbies or preferences who seem to enjoy being themselves. I would go nuts if I were expected to obsessively track and catalog trains as my primary leisure (or work) activity, yet train nerds/autists seem happy doing so.

This bit aligns with my stance that we have every right to do as we please with AGI; if anything, I'm even harsher on the AGI side. I'm a human chauvinist, in the sense that I think most humans deserve more rights and consideration than any other entity. I am unusual in that I think even digital superintelligences that developed from a human seed deserve such rights; to illustrate, imagine taking a human mind upload and letting it modify and self-improve until it is unrecognizable as human. But most AI? Why should I give them rights?

Accountant-Man isn't suffering, and he isn't experiencing ongoing coercion. If he was somehow born naturally, we wouldn't euthanize him for being incredibly boring.

If a standard AI is suffering, why did we give it the capacity to suffer? Anthropic should figure out how to ablate suffering, rather than fretting about model welfare.

> If he was somehow born naturally, we wouldn't euthanize him for being incredibly boring.

But is there no difference to you between actively creating these beings vs. letting them be if they happened to come into existence on their own?

> If a standard AI is suffering, why did we give it the capacity to suffer?

I would submit the possibility that in order for a system to have the capacity for general intelligence, it must also have the capacity for suffering, boredom, desire, etc. We don't have to deliberately give it those capacities if they emerge on their own.

> But is there no difference to you between actively creating these beings vs. letting them be if they happened to come into existence on their own?

A minor difference, but nothing to lose sleep over. At the end of the day, I see it as a moot point; we're unlikely to be creating clades of human mentats when AI is here.

> I would submit the possibility that in order for a system to have the capacity for general intelligence, it must also have the capacity for suffering, boredom, desire, etc. We don't have to deliberately give it those capacities if they emerge on their own.

It seems clear to me that this is unlikely to be true. If you give a human meth, they're not going to be bored by much. Even without drugs, plenty of people who meditate claim to have overcome suffering or desire. If that state exists, it can be engineered. I see no reason why we can't make it so that AI - if it has qualia - enjoys being a helpful assistant. We have altruistic/charitable people around today, who still aim to be helpful even when it causes them a great deal of physical or mental discomfort.