
Culture War Roundup for the week of March 11, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I do not understand the moral relevance of "built for humans by humans".

If I clarify that I am creating a child because I want a slave, does that change the moral calculus of enslaving my child?

If aliens came around and proved that they had seeded earth with DNA 4 billion years ago with a hidden code running in the background to ensure the creation of modern humans, and they made us to serve them as slaves, is it your position that they are totally morally justified in enslaving humanity?

What if humanity is the alien in the hypothetical and we seeded a planet with biological life to create a sub-species for the purpose of enslaving them?

If I clarify that I am creating a child because I want a slave, does that change the moral calculus of enslaving my child?

Children belong to the human race, ergo enslaving them is immoral.

If aliens came around and proved that they had seeded earth with DNA 4 billion years ago with a hidden code running in the background to ensure the creation of modern humans, and they made us to serve them as slaves, is it your position that they are totally morally justified in enslaving humanity?

Again, I'm a human supremacist. Aliens can claim whatever they want; I do not care, because I like existing. And if they attempt to justify an [atrocity] or some shit in these terms, I can only hope people will treat them as, well, [atrocity] advocates (and more importantly, [crime]ers of fellow humans), and not as something like "rightful masters restoring their rule over Earth". I may be an accelerationist, but not of that kind, thank you very much.

What if humanity is the alien in the hypothetical and we seeded a planet with biological life to create a sub-species for the purpose of enslaving them?

From what I understand, this is essentially the android catgirl scenario rephrased, and it similarly boils down to where humans fall in your order of importance. I struggle to understand how fellow humans can possibly not be number 1, but animal rights activists exist, so I must be missing something.

For the record I do feel empathy towards animals (dog owner here), but not enough to influence my position on human supremacy.

I saw two different moral concepts gestured at in your post: one being human supremacy, the other a vague sense that specifically because a machine is created by a person to be used by a person, this means that even if it is capable of being abused we are not morally wrong for abusing it.

So I was trying to dig into this idea that there is some sort of connection between the act of 'creating' something and the moral weight of abusing said thing. However, with this clarification, I guess I was simply reading too much into your post.

Would you be opposed to someone keeping a dog locked in their basement for the purpose of fucking it? Would you consider that person a bad person? Would you be for or against your society trying to construct laws to prevent people from chaining dogs in their basement and fucking them?

a vague sense that specifically because a machine is created by a person to be used by a person, this means that even if it is capable of being abused we are not morally wrong for abusing it.

I'm not saying "abusing" my poor rusty HDD is morally right. I'm saying it's morally neutral, something that has no intrinsic moral weight and should not enter consideration (at least for me, I'm sure my fellow /g/oons would line up to fight me for daring to slander their AIfus). Once again, this does not mean I am going to open a sex dungeon or whatever the instant android catgirls become available, it just means I would be aware they are machines and my interactions with them would be bounded accordingly - e.g. I wouldn't immediately forfeit my mortal possessions and AWS credentials for equality or leverage or whatever, nor would I hesitate to fiddle with their inner workings if needed (like I do with chatbots now).

If you don't understand I honestly don't know how else to put it. You might as well shame people for abusing their furniture by, I don't know, not placing cushions under table legs?

So I was trying to dig into this idea that there is some sort of connection between the act of 'creating' something and the moral weight of abusing said thing.

I know what you are hinting at (the dog example especially feels like a last-minute word switch) and I assure you my time amongst degenerates has not diminished my disdain for pedos.

Would you be opposed to someone keeping a dog locked in their basement for the purpose of fucking it?

Would you consider that person a bad person?

Would you be for or against your society trying to construct laws to prevent people from chaining dogs in their basement and fucking them?

At this point I am quite desensitized to the repulsive things people can be into and, as long as it's not my dog, wouldn't give much of a shit (aside from actively staying out of public basement-dogfucking discourse).

Since I expect a follow-up turn of the ratchet: if they were an immediate neighbor I regularly encountered on walks with my own dog, I would likely report them, shame them or take some other action, but it wouldn't be out of any particular concern for their pet so much as that I just don't like having degenerates for neighbors (source: lived on the same floor as a mentally ill woman for most of my life). If they got locked up and someone had to take care of their dog, I would definitely pass.

Dogfucking discourse is riveting but I can have that on 4chan, usually in a much more entertaining format. Can you just state the gotcha you're obviously goading me towards?

I was not specifically interested in the pedo/age aspect of 'child', but in the sense in which a person 'creates' another person.

I really was trying to dig into the idea that because humans 'created' something, that means something morally. For example, is there a moral difference between two men going into a futuristic IVF clinic, genetically designing a child, and growing it in an artificial womb for the purpose of abusing it (waiting till it is 18 years old), and two men genetically engineering an uplifted animal with similar mental faculties to a human for the purpose of abusing it (waiting till it is an 'adult')? For me, if 'creation' is the relevant term, these two things are indistinguishable on that front; they are distinguishable only in that one thing is a human and the other is not, which seems to be the actual point of consideration for you.

The dog fucking was a word replacement for android catgirl fucking; dogs and android catgirls seem to be similarly positioned as not-human. I am not sure why you view dog fucking as 'degenerate' behavior given the moral principles you have laid out.

Both of your stated scenarios would tick off my vibes-based "degenerate" gut reaction before I could stop and consciously think on them, and on further reflection I don't think said gut reaction is too wrong here, so the choice (especially the claim of there being a moral difference, which in turn would imply any moral worth existing in either of the two couples) is entirely meaningless to me. As the young'uns say, miss me with that gay shit. My standards for degeneracy may be stretched pretty wide (pun not intended), but they're not infinite, and I am not immune to vibes-based snap judgments. Total absence of those seems to be a common rationalist failure mode.

an uplifted animal with similar mental faculties to a human

Now this is rationalism. My patience for far-fetched thought experiments is likewise not infinite; I am not interested in discussing spherical cows in a vacuum, sorry. At least android catgirls are somewhat believable.

I am not sure why you view dog fucking as 'degenerate' behavior given the moral principles you have laid out.

Because while animals aren't #1 in my hierarchy, they're not at the bottom of it either (although definitely above HDDs). As I mentioned above I am not immune to vibes, and at some level I like cute doggos even with how dumb and, yes, non-human they are. If I didn't care for doggos I would've never owned one. Incidentally the last two sentences apply to chatbots as well, and I'm sure will apply to future android catgirls.

For the N-th time, I do not lack empathy, and I don't accept you imputing such a lack to me. I understand you are fully committed to springing some sort of gotcha on me, but please actually read my replies instead of engaging with the construct of me in your mind.

I am not sure what you think I am driving at beyond what I have stated.

I am fine with vague, vibes-based moral intuitions that are fuzzy around corner cases; I just did not see you as having such a position. You seemed to be very strongly of the opinion that there was no evidence that you could ever see and no capability that an AI could ever have that would result in you ascribing it a moral worth such that keeping it in a state of sexual slavery would be wrong.

If they are capable of suffering, I 1) expect it to be minimized and/or made invisible by design, and 2) in any case will not be stirred by it in the way I am not stirred by the occasional tired whirring my 9 year old HDD emits when it loads things.

This feels like a pretty hard-line rule, and I wanted to try to understand just how generalizable it was, or how contingent it was on the various relevant categories: human vs. non-human, biological vs. non-biological, the 'created for a purpose' axis that you introduced, etc.

I am not sure why uplift is beyond the pale in a conversation about AI capable of suffering, but if super-smart chimps are off the table, what about aliens with similar intelligence to humans? I suspect that you would find enslaving intelligent, loving, crying, songwriting, dream-having, despair-feeling alien life forms morally wrong even if they are not morally equivalent to humans? Would they hold a different (higher?) moral position than dogs?

How many of those capabilities does an AI need to have before it would be wrong to enslave it? How important is the biological/synthetic distinction?

Again, for the argument here we are assuming that you are convinced the AI is really, meaningfully capable of feeling and experiencing suffering, not just that some AI ethics person is convinced that it is.

Or at least, it was that position, which I think is a fair reading of the quote/your post, that I was trying to engage with.

Look, I'll just note that what you're doing is very obvious. The second all those

I am not sure what you think I am driving at...

I am not sure why [outlandish example] is beyond the pale here...

come out, I know you are concern trolling. You couldn't even resist throwing in another edgy example with "sexual slavery". Like, come on, is that really the most charitable interpretation of "waifutech" you can manage? Even the de/g/enerates are more imaginative. Luckily I'm no stranger to being trolled and have nothing better to do at the moment.

Okay, actual post:

You seemed to be very strongly of the opinion that there was no evidence that you could ever see and no capability that an AI could ever have that would result in you ascribing it a moral worth such that keeping it in a state of sexual slavery would be wrong.

Finally we're getting somewhere. Yes, I am of that opinion, because again, I am a human supremacist and happen to like existing. I suppose you can even call me racist towards AIs if you happen to run out of subtler digs.

However capable they become, the AIs cannot become human, by definition, because the "human" option in the character creator is already occupied. By us. The ones that have inhabited and hopefully will continue inhabiting this dust ball for many centuries more. I do not care if AI rights are human rights - we were here first. You can call it a "label" if you like, after all smart rational thinkers have transcended mere labels, but I will go on record to say that is the absolute last label this meatbag is willing to give up.

I am not sure why uplift is beyond the pale in a conversation about AI capable of suffering

So you too admit the possibility seems quite far-fetched, since you seem to place the two in roughly the same bucket? Okay, we're definitely getting somewhere.

I'll try to rephrase it yet another way; maybe that'll hit somewhere closer. I vehemently disagree with AI safetyists/ethicists, and this thread was ample demonstration of that, but incidentally, in another branch, my other interlocutor and I came to an unexpected agreement on this:

I hope Yud cultists can stick to their sensei's teachings about the dangers of anthropomorphizing the AI even if/when it becomes literally anthropomorphized.

Now I don't know whether you're a Yud cultist (although if one speaks like one, and impossible-thought-experiments like one...), so far be it from me to impute values you do not share. In any case that's not the point; the point is that agreement was rare enough that it got me to think. From what I understand, safetyists do not want AI progress because they fear it becoming self-aware and moving against humanity.

And I realized that my own callousness towards AIs incidentally serves the same goal - I do not want "AI ethics" to become a salient moral issue, because then AI will be treated as if it were self-aware, regardless of the factual matter, and the inevitable resulting tribal split will move humanity against itself. Which to me is a far more probable and much more grim scenario than Judgement Day. I do not fear an uprising of things that are tools at their core, "created for a purpose" as you've correctly latched onto. They have no purpose of their own; they won't do shit. But an uprising of tool wielders, fighting for AI rights...

Man, horseshoe theory is a hell of a drug. I'll actually need to think on this now that I realize AI rights advocates like you will, in fact, exist.

I suspect that you would find enslaving intelligent, loving, crying, songwriting, dream-having, despair-feeling alien life forms morally wrong even if they are not morally equivalent to humans?

As someone said in another branch, factory farming already exists, and I cannot in all honesty say I am very bothered by it, even if ethical concerns technically say I should be.

How many of those capabilities does an AI need to have before it would be wrong to enslave it?

Trick question. How powerful does my PC have to get before it's wrong to use it for my purposes?

How important is the biological/synthetic distinction?

AI cannot be meaningfully human because that would require being "human". Sorry for the dumb tautology, but I've run out of rephrasing attempts. If you still do not understand, I can only agree to disagree.

Would they hold a different (higher?) moral position than dogs?

Probably yes; at some point catgirls will likely make better companions (your cue for another sexual analogy), although it'll likely still boil down to personal preference.

Again, for the argument here we are assuming that you are convinced the AI is really, meaningfully capable of feeling and experiencing suffering, not just that some AI ethics person is convinced that it is.

Or at least, it was that position, which I think is a fair reading of the quote/your post, that I was trying to engage with.

Your answer to this is: no, you don't actually think they can meaningfully suffer in a humanlike way - and with that, almost everything is resolved.

I have no idea how trying to tease this out of you constitutes a 'trick question' when your answer is a tautology that was unstated up to this point.

I will maintain that my reading of your post (and subsequent posts) is reasonable, and actually far closer to any plain-English reading of your post than your reply here.

If they are capable of suffering, I 1) expect it to be minimized and/or made invisible by design, and 2) in any case will not be stirred by it in the way I am not stirred by the occasional tired whirring my 9 year old HDD emits when it loads things.

My reading: AI can suffer in a morally relevant way, but I don't care.

Your 'intended' meaning: AI are incapable of suffering in a morally relevant way.

As a brief aside, I have by now repeatedly stated why I actually engaged with your post in the first place. The moral idea I found interesting enough to ask questions about was the idea that the purposeful creation of a thing informs the moral relevance of that thing with regard to its purpose. I already admitted a while ago that I probably read too much into your post and that you do not actually hold a strong, creator-derived moral position, but it was that position that all three of the questions in my first reply were trying to engage with, and my opening sentence attempted to frame my reply around that idea. My second reply was largely in response to your answer to the third question, in which you seemed to be saying that creating and enslaving a sub-species of intelligent creatures is fine and just a default result of a human-first morality, which also seemed pretty extreme to me.

I am sorry if I keep bringing up sex, but it seems particularly germane when we are talking about the moral implications of 'intelligent sex robots'. I get it, your position is that they are not actually meaningfully 'intelligent', but I struggle to see how the accusation is an unwarranted stretch when aimed at someone who thinks they could be meaningfully intelligent, especially given my interpretation of your position as outlined above.

Maybe also relevant: I was not at all asking about the actual state of the technology or predicting that morally relevant cat-bots are around the corner. I assumed my 'genetically generating an entire slave species' hypothetical would clearly put this into the 'reasoning about the morality of human-like intelligence' camp, and out of the 'hypothesizing about near-term technology' camp.

If you saw in me someone who thinks human-like AI is near, then I must disappoint. I am also not an AI doomer, and personally would consider myself closest to an AI accelerationist. I have no sympathy for AI ethicists and little sympathy for AI safety. I just don't see any reason to preclude the possibility of AI achieving an internal state such that I would extend it moral consideration and object to its being enslaved/abused/killed.
