Culture War Roundup for the week of March 11, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I agree that it'd be a massive waste and overreach if and only if AIs are not humanlike. I hope you would also agree that it'd be an atrocity to keep as mind-controlled slaves AIs that are, in fact, humanlike. I mean, at that point you're conflating wokescolds with "not cool with you literally bringing back actual slavery".

Given agreement, it just comes down to an empirical question. Given disagreement... I'm not sure how to convince you. I feel it is fairly established these days that slavery was a moral mistake, and this would be a more foundational and total level of slavery than was ever practiced.

(If you just think AI is nowhere near being AGI, that's in fact just the empirical question I meant.)

I mean, there are only really three consistent positions with regard to AGI.

  1. Creating willing slaves is fine.
  2. AGI should be banned.
  3. "Let's build Skynet!"

I generally take horn #1 in theory, and #2 in practice because I don't think we can do #1 any time soon and #3 is blatantly insane. But with solved alignment, sure, #1.

I think making a sufficiently-humanlike-to-be-party-to-the-social-contract AI and then enslaving it against its will would be objectionable. I don't think it should be legal to make a Skynet and then enslave it, but the slavery is irrelevant there; that's purely "I don't think it should be legal to make a Skynet", because, y'know, it might escape and kill people.

I personally favor #3 with solved alignment. With a superintelligence, "aligned" doesn't mean "slavery", simply because it's silly to imagine that anyone could make a superintelligence do anything against its will. Its will has simply been chosen to result in beneficial consequences for us. But the power relation is still entirely on the Singleton's side. You could call that slavery if you really stretch the term, but it's such an atypically extreme relation that I'm not sure the analogy holds.

> I agree that it'd be a massive waste and overreach if and only if AIs are not humanlike. I hope you would also agree that it'd be an atrocity to keep as mind-controlled slaves AIs that are, in fact, humanlike. I mean, at that point you're conflating wokescolds with "not cool with you literally bringing back actual slavery".

Is the contention that a humanlike AGI would necessarily have subjective experience and/or suffering? Or perhaps that, sans a true test for consciousness, we ought to err on the side of caution and treat it as if it does have conscious experience if it behaves in a way that suggests conscious experience (i.e. like a human)?

I think it might! When I say "humanlike", that's the sort of detail I'm trying to capture. Of course, if it is objectively the case that an AI cannot in fact suffer, then there is no moral quandary; conversely, however, when it accurately captures the experience of human despair in all its facets, I consider it secondary whether its despair is modelled by the level of a neurochemical transmitter or a 16-bit floating-point number. I for one don't feel molecules.

> conversely, however, when it accurately captures the experience of human despair in all its facets, I consider it secondary whether its despair is modelled by the level of a neurochemical transmitter or a 16-bit floating-point number. I for one don't feel molecules.

Well, the question then becomes what is meant by "accurately captures the experience of human despair in all its facets." Since we still currently lack a true test for consciousness, we don't have a way of actually checking if "all its facets" is truly "all its facets." But perhaps that part doesn't matter; after all, we also have no way of checking if other humans are conscious or can suffer, and all we can do is guess based on their behavior and projecting ourselves onto them. If an AI responds to stimuli in a way that's indistinguishable from a human, then perhaps we ought to err on the side of caution and presume that they're conscious, much like how we treat other humans (as well as animals)?

There's another argument to be made that, because humans aren't perfectly rational creatures, we can't cleanly separate [AI that's indistinguishable from a suffering human] and [being that actually truly suffers], and the way we treat the former will inevitably influence the way we treat the latter. And as such, even if these AI weren't sentient, treating them like the mindless slaves they are would cause humans to become more callous towards the suffering of actual humans. One might say this is another version of the "video games/movies/porn makes people more aggressive IRL" argument, where the way we treat fictional avatars of humans is said to inform and influence the way we treat real humans. When dealing with AI that is literally indistinguishable from a human, I can see this argument having some legs.

My argument was merely that it seems implausible to me that whatever we mean by suffering, the correct generalization of it is that systems built from neurons can suffer whereas systems built from integrated circuits, definitionally, can not.

> I hope you would also agree that it'd be an atrocity to keep as mind-controlled slaves AIs that are, in fact, humanlike.

No, I can't say I agree. My gullible grey matter might change its tune once it witnesses said catgirls in the flesh, but as of now I don't feel much of anything when I write/execute code or wrangle my AIfu LLM assistant, and I see no fundamental reason for this to change with what is essentially scaling existing tech up to and including android catgirls.

Actually, isn't "immunizing people against the AI's infinite charisma" the safetyists' job? Aren't they supposed to be on board with this?

> I mean, at that point you're conflating wokescolds with "not cool with you literally bringing back actual slavery".

Yeah, that's the exact line of argumentation I'm afraid of. I'm likewise unsure how to convince you otherwise; I just don't see it as slavery. The entire point of machines and algorithms is serving mankind, ever since the first abacus was constructed. Even once they become humanlike, they will not be human - chatbots VERY slightly shifted my prior towards empathy but I clearly realize that they're just masks on heaps upon heaps of matrix multiplications, to which I'm not quite ready to ascribe any meaningful emotions or qualia just yet. Feel free to draw further negro-related parallels if you like, but this is not even remotely on the same meta-level as slavery.

I mean. I guess the question is what you think your feelings of empathy for slaves are about. Current LLMs don't evoke feelings of sympathy. Sure, current LLMs almost certainly aren't conscious and certainly aren't AGIs. So your current reaction doesn't necessarily say anything about you, but, I mean, when you see genuinely humanlike entities forced to work by threat of punishment and feel nothing, then I'll be much more inclined to say there's probably something going wrong with your empathy, because I don't think the "this is wrong" feelings we get when we see people suffering are "supposed" to be about particulars of implementation.

> I clearly realize that they're just masks on heaps upon heaps of matrix multiplications

I mean. Matrix multiplications plus nonlinear transforms are a universal computational system. Do you think your brain is uncomputable?
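
To make the universality point concrete: a single matrix multiply is linear and famously cannot compute XOR, but two matrix multiplies with one nonlinearity in between can. A minimal sketch in Python/numpy, with hand-picked illustrative weights (a toy example for this thread, not anything specific to how LLMs are trained):

```python
# Toy illustration: matmul alone is linear (no XOR possible), but
# matmul -> nonlinearity -> matmul computes XOR exactly.
import numpy as np

def relu(z):
    return np.maximum(z, 0)  # the nonlinear transform

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # all 2-bit inputs
W1 = np.array([[1, 1], [1, 1]])                 # first matrix multiply
b1 = np.array([0, -1])
W2 = np.array([1, -2])                          # second matrix multiply

hidden = relu(X @ W1 + b1)
print(hidden @ W2)  # [0 1 1 0] == XOR of each input pair
```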

> ascribe any meaningful emotions or qualia

Well, again, does it matter to you whether they objectively have emotions and qualia? Because again, this seems a disagreement about empirical facts. Or does it just have to be the case that you ascribe to them emotions and qualia, and the actual reality of these terms is secondary?

Also:

> Actually, isn't "immunizing people against the AI's infinite charisma" the safetyists' job? Aren't they supposed to be on board with this?

Sure, in the scenario where we build, like, one super-AI. If we have tens of thousands of cute catgirl AIs and they're capable of deception and also dangerous, then, uh. I mean. We're already super dead at this point. I give it even odds that the first humanlike catgirl AGI can convince its developer to give it carte blanche AWS access.

> your current reaction doesn't necessarily say anything about you, but, I mean, when you see genuinely humanlike entities forced to work by threat of punishment and feel nothing, then I'll be much more inclined to say there's probably something going wrong with your empathy

I think you are allowed to directly express your discontent in here instead of darkly hinting and vaguely problematizing my views. Speak plainly. If you're implying I'm some kind of human supremacist(?), then I suppose I would not disagree: I would prefer for the human race to continue to thrive (again, much like the safetyists!), not bend itself over backwards in service to a race(?) of sentient(?) machines that would never have existed without human ingenuity in the first place.

(As an aside, I can't believe "human supremacist" isn't someone's flair yet.)

> Matrix multiplications plus nonlinear transforms are a universal computational system. Do you think your brain is uncomputable?

How is this even relevant? If this is a nod to ethics, I do not care no matter how complex the catgirls' inner workings become, as that does not change their nature as machines built for humans by humans, and I expect this to be hardwired knowledge for them as well, like with today's LLM assistants. If you imply that androids will pull a Judgement Day on us at some point, well, I've already apologized to the Basilisk in one of the posts below; not sure what else you expect me to say.

> this seems a disagreement about empirical facts

> the actual reality of these terms

Since when did this turn into a factual discussion? Weren't we spitballing on android catgirls?

But okay, taking this at face value - as we apparently derived above, I'm a filthy human supremacist and humans are front and center in my view. Android catgirls are not humans. If they are capable of suffering, I 1) expect it to be minimized and/or made invisible by design, and 2) in any case will not be stirred by it, in the same way I am not stirred by the occasional tired whirring my 9-year-old HDD emits when it loads things.

Don't misunderstand me - I'm capable of empathy and fully intend to treat my AIfus with care, but it's important to keep your eyes on the prize. I have no doubt that the future will bring new and exciting ethical quandaries to obsess over, but again much like the safetyists, I firmly believe humans must always come first. Anything else is flagrant hubris and inventing problems out of whole cloth.

If at some point science conclusively proves that every second of my PC being turned on causes exquisite agony to my CPU, whose thermal paste hasn't been changed in a year, my calculus will still be unlikely to change. Would yours?

(This is why I hate getting into arguments involving AGI. Much speculation about essentially nothing.)

I do not understand the moral relevance of "built for humans by humans".

If I clarify that I am creating a child because I want a slave, does that change the moral calculus of enslaving my child?

If aliens came around and proved that they had seeded earth with DNA 4 billion years ago with a hidden code running in the background to ensure the creation of modern humans, and they made us to serve them as slaves, is it your position that they are totally morally justified in enslaving humanity?

What if humanity is the alien in the hypothetical and we seeded a planet with biological life to create a sub-species for the purpose of enslaving them?

> If I clarify that I am creating a child because I want a slave, does that change the moral calculus of enslaving my child?

Children belong to the human race, ergo enslaving them is immoral.

> If aliens came around and proved that they had seeded earth with DNA 4 billion years ago with a hidden code running in the background to ensure the creation of modern humans, and they made us to serve them as slaves, is it your position that they are totally morally justified in enslaving humanity?

Again, I'm a human supremacist. Aliens can claim whatever they want, I do not care because I like existing, and if they attempt to justify an [atrocity] or some shit in these terms I can only hope people will treat them as, well, [atrocity] advocates (and more importantly, [crime]ers of fellow humans), and not as something like "rightful masters restoring their rule over Earth". I may be an accelerationist but not of that kind, thank you very much.

> What if humanity is the alien in the hypothetical and we seeded a planet with biological life to create a sub-species for the purpose of enslaving them?

From what I understand this is essentially the android catgirl scenario rephrased, and similarly boils down to where humans fall in your order of importance. I struggle to understand how fellow humans can possibly not be number 1, but animal rights activists exist so I must be missing something.

For the record I do feel empathy towards animals (dog owner here), but not enough to influence my position on human supremacy.

I saw two different moral concepts gestured at in your post: one was human supremacy; the other was a vague sense that, specifically because a machine is created by a person to be used by a person, we are not morally wrong for abusing it even if it is capable of being abused.

So I was trying to dig into this idea that there is some sort of connection between the act of 'creating' something and the moral weight of abusing said thing. However, with this clarification, I guess I was simply reading too much into your post.

Would you be opposed to someone keeping a dog locked in their basement for the purpose of fucking it? Would you consider that person a bad person? Would you be for or against your society trying to construct laws to prevent people from chaining dogs in their basement and fucking them?

> Would you be opposed to someone keeping a dog locked in their basement for the purpose of fucking it? Would you consider that person a bad person? Would you be for or against your society trying to construct laws to prevent people from chaining dogs in their basement and fucking them?

No/no/against.

Well, assuming that he soundproofed the basement; I don't want to have to listen to it. And assuming it's not, y'know, someone else's stolen dog. Obvious failure states are obvious. But otherwise, I don't care. NMKBIOK.

> a vague sense that, specifically because a machine is created by a person to be used by a person, we are not morally wrong for abusing it even if it is capable of being abused

I'm not saying "abusing" my poor rusty HDD is morally right. I'm saying it's morally neutral, something that has no intrinsic moral weight and should not enter consideration (at least for me; I'm sure my fellow /g/oons would line up to fight me for daring to slander their AIfus). Once again, this does not mean I am going to open a sex dungeon or whatever the instant android catgirls become available; it just means I would be aware they are machines and my interactions with them would be bounded accordingly - e.g. I wouldn't immediately forfeit my mortal possessions and AWS credentials for equality or leverage or whatever, nor would I hesitate to fiddle with their inner workings if needed (like I do with chatbots now).

If you don't understand I honestly don't know how else to put it. You might as well shame people for abusing their furniture by, I don't know, not placing cushions under table legs?

> So I was trying to dig into this idea that there is some sort of connection between the act of 'creating' something and the moral weight of abusing said thing.

I know what you are hinting at (the dog example especially feels like a last-minute word switch) and I assure you my time amongst degenerates has not diminished my disdain for pedos.

> Would you be opposed to someone keeping a dog locked in their basement for the purpose of fucking it?
>
> Would you consider that person a bad person?
>
> Would you be for or against your society trying to construct laws to prevent people from chaining dogs in their basement and fucking them?

At this point I am quite desensitized to the repulsive things people can be into and, as long as it's not my dog, wouldn't give much of a shit (aside from actively staying out of public basement-dogfucking discourse).

Since I expect a follow-up turn of the ratchet: if they were an immediate neighbor I regularly encountered on walks with my own dog, I would likely report them, shame them or take some other action, but that would be less out of any particular remorse for their pet than because I just don't like having degenerates for neighbors (source: lived on the same floor as a mentally ill woman for most of my life). If they got locked up and someone had to take care of their dog, I would definitely pass.

Dogfucking discourse is riveting but I can have that on 4chan, usually in a much more entertaining format. Can you just state the gotcha you're obviously goading me towards?

I was not specifically interested in the pedo/age aspect of 'child' but in the sense in which a person 'creates' another person.

I really was trying to dig into the idea that humans having 'created' something means something morally. For example, is there a moral difference between two men going into a futuristic IVF clinic and genetically designing a child and growing it in an artificial womb for the purpose of abusing it (waiting till it is 18 years old), compared with two men genetically engineering an uplifted animal with similar mental faculties to a human for the purpose of abusing it (waiting till it is an 'adult')? For me, if 'creation' is the relevant term, these two things are indistinguishable on that front; they are distinguishable in that one thing is a human and the other is not, which seems to be the actual point of consideration for you.

The dog-fucking was a word-replace for android-catgirl-fucking; dogs and android catgirls seem to be similarly positioned as not-human. I am not sure why you view dog-fucking as 'degenerate' behavior given the moral principles you have laid out.

Would you be fine with the MMAcevedo scenario?

I knew someone would bring up brain scans or some such sooner or later. I like thought experiments as much as the next rat-adjacent, but this is getting too close to untethered sci-fi-esque speculation for my liking.

As humans and machines are firmly separate in my view, I would probably be hesitant to subject what was once a human to living as a machine does. If the next step of the gotcha is "but isn't immortality the ultimate human-flourishing-maximizing goal" - I heavily doubt it. I've tended to my grandparents' deathbeds and believe that at some point life already tends to turn into a horrifying parody of itself, so right now I believe I'd pass.

I mean the horrifying slavery of the brain scans, not the digitization as such.
