Culture War Roundup for the week of January 30, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Has this group had any discussion regarding AI use in pornography, specifically 'deepfakes'? It's come out recently that a major up-and-coming Twitch streamer, 'Atrioc' (who is recently married and ostensibly very pro-feminist, though while funny, these facts are technically irrelevant to the matter at hand), had been viewing porn 'deepfakes' of multiple major female Twitch streamers, possibly including his best friend's girlfriend (that last part is unconfirmed but highly possible). He's come out with an apology and it's become a whole thing, but I'm sure this community is more interested in the moral/ethical questions therein than in internet-celebrity drama, so I won't bore you with it.

The following are my perspectives on a few of the potential questions regarding 'deepfake' porn, and AI porn in general. I'd love to hear what other people think about these perspectives, because my thoughts are currently very incomplete on the issue.

First and foremost, I have a strong intuitive feeling that it is deeply wrong, perhaps tantamount to some form of sexual harassment/assault (though of course the non-violent sort), to make 'deepfake' pornography of non-consenting people. For example, taking a picture from a celebrity's Instagram and using AI to transform it into a high-fidelity (but technically fake) nude picture of them seems functionally the same as, e.g., spying on them in the shower or while they're changing, actions I think we can all agree would be some form of wrong or illegal sexual violation (or perhaps we can't? you tell me). The way I think about this is by considering that a strong enough AI program would theoretically be capable of using a clothed picture of someone to reconstruct the exact way they look naked, which would be quite literally equivalent to the aforementioned situation, or to looking at them with x-ray glasses, etc., which again (I think) we and most people would agree is wrong. And so, a less powerful AI capable of doing something similar seems to be at least on that gradient of wrong, if not exactly as bad.

Furthermore, AI that actually transplants people's faces onto video depictions of sexual intercourse (which is ostensibly what 'Atrioc' was caught viewing) seems worse, or maybe just bad in a different way. I don't have a similar thought experiment to justify why I feel that way, but the wrongness of it is my strong intuition nonetheless.

However, I can also sort of see the argument, at least abstractly, that it's a victimless crime. At the other extreme of the spectrum, fantasizing in one's own imagination about the way people look when they're naked, or how it might feel to have sex with them, is not only generally recognized as a very benign behavior but is also something almost everyone is known to do, men and women both. Sometimes people even do this completely unconsciously, e.g. in their dreams. And what's the difference between looking at a very (or fully) realistic recreation of the way someone might look with their clothes off, and using one's own imagination to do so? What if your imagination were very vivid, and you had seen many naked people before, making your 'training data' very good, so that you too could reasonably expect to make a relatively accurate recreation of the way someone looked naked, only in your own mind's eye?

The thing is, acknowledging these potential similarities between an action I find morally acceptable and one I find morally wrong still doesn't make my intuition about the wrongness of 'deepfakes' any weaker. I feel like there must be something I haven't considered about it yet, which is where I'm hoping you guys might have insight. The only distinction I've found somewhat convincing so far is maybe that the mass distribution via the internet is what makes it wrong? In other words, I find it less wrong (though still somewhat wrong) to make a highly or fully realistic nude of someone and keep it entirely on one's own computer than to make such an image and then distribute it online. This is especially weird because the former is an even apter analogue of, e.g., peeping on someone in the locker room, which is obviously (?) wrong. So why does it seem more okay to me? Help!

I have a few potential explanations that I'm considering as candidates for the source of my cognitive dissonance here:

  1. Perhaps in reality none of the aforementioned actions are wrong. It's not wrong to spy on someone in the locker room, and so it's not wrong to use 'x-ray glasses' to see through their clothes, or use an AI to edit a picture to do functionally the same thing.

  2. Perhaps instead, in reality it actually is wrong to imagine or fantasize about what other people look like naked. The reason this is so commonly accepted as benign is that it's impossible to enforce against. But if sexual mores are so arbitrary/constructed that something otherwise wrong can be agreed upon as acceptable merely because it's unenforceable, how wrong can any ('victimless') violation of sexual mores really be said to be? And thus how wrong, really, is the other situation, where one uses AI?

  3. This kind of segues from 2.: perhaps in reality the ultimate cause of this dissonance is that modern-day sexual mores are completely stupid, so deeply incoherent that accepting any one of them will necessarily produce cognitive dissonance when it's contrasted with some other. Is the solution to the 'deepfake' issue, then, to try to change our society's sexual morals/ethics into something more internally coherent?

None of these really addresses why I feel differently about 'turning a clothed photo into a nude' and 'transplanting, in a realistic way, a non-consenting individual's face onto an actor in a depiction of sexual intercourse.' I have no concrete idea why the latter feels worse overall, yet in some other (minor) ways not as bad. And the latter situation is what the whole controversy with the streamer is about, AFAIK. Very confused about all this.

What's right here, and why? What should even be done? Should 'deepfakes' be illegal because of these potential moral/ethical concerns? Should the act of making a deepfake be illegal, or just distributing it? (I think if we wanted to, we could make both of these things illegal. We might not be able to prevent anyone from making them, considering the AI cat is out of the bag, but it still might be worthwhile to have the illegality on the books if it really is wrong. In other circles I'm seeing claims that a ban would be unenforceable (motivated thinking?), but it seems trivially easy to functionally ban at least the distribution of 'deepfake' porn in a way that would almost certainly reduce the dissemination of such porn, if not completely eliminate it, just as with, e.g., child sexual abuse imagery or zoophilia porn.)

I also see a lot of people in other circles being prompted by this discussion to argue about the ethics of AI image generation in general. I think this is basically stupid. The arguments claiming that AI image generation is tantamount to plagiarism (of the dataset images, I suppose) are all basically worthless as far as I can tell. But people who have bought into this line of thinking are now going as far as to say that, e.g., photorealistic AI-generated porn (even that depicting completely synthetic likenesses) is a sexual violation (of all the nude or semi-nude women pictured in the dataset, I guess?). Either way, I am wholly unconvinced by these arguments and think they basically all stem from a bad understanding of how these AI models work, so I'm not very interested in discussing that axis of the debate. But I mention it because this community sometimes surprises me, so if anyone here has a really strong argument for why this might make sense that they think I haven't seen before, feel free to mention it.

The only violation here is one of licensing; the deepfakes use a person's likeness without their permission. Other than that, I don't see any problem.

Atrioc is only in trouble because, as he said through his tears in his apology, he's done all he can to make his chat and audience inclusive, especially to women. He cultivated an audience of offense-seeking hall monitors, and now that he's made a mistake, as everyone does, it's biting him in the ass. If he were a fucking degenerate, none of his audience would care. I can't imagine someone like Asmongold's audience particularly giving a shit, beyond mocking him viciously for paying for porn. He made this rod for his own back, to some degree.

As other people have said, this is no different in concept from the NSFW fanfics that litter sites like AO3, or NSFW "fan art", or celebrity photoshops, or anything else. It's just better, that's all. And yeah, I can understand people feeling uncomfortable seeing works like that about themselves. But they can always just not look.

But they can always just not look.

If it's there, though, they'll still know it's there, even when they're not looking at it. Thus they will suffer psychological harm they otherwise wouldn't have suffered had it not been published in the first place.

Is it moral for me to publish something if the very fact that it has been published will cause someone to suffer psychologically? I think that unless the value gained by publishing it is high (high in a relative sense, i.e., greater than zero), it is immoral to do so. And I think the value gained by something like porn is basically zero.

If it's there, though, they'll still know it's there, even when they're not looking at it. Thus they will suffer psychological harm they otherwise wouldn't have suffered had it not been published in the first place.

The same idea applies to using people's likenesses in memes. Take the woman in the "first world problems" meme: I am sure that being the literal poster child of getting upset over silly things isn't what she wanted out of life.

If those memes are distributed for free (and they are), does the woman have the right to ask websites to take them down?

So? We can't go our whole lives avoiding doing anything that causes people trivial amounts of "psychological harm". That kind of mental safetyism is abhorrent and would have us all walking on eggshells for our entire lives, and having our behaviour dictated by utility monsters. Or empathy monsters, I suppose.

The value gained from porn obviously isn't zero, as people pay for it, and people enjoy it.

So? We can't go our whole lives avoiding doing anything that causes people trivial amounts of "psychological harm". That kind of mental safetyism is abhorrent and would have us all walking on eggshells for our entire lives, and having our behaviour dictated by utility monsters. Or empathy monsters, I suppose.

This kind of distress is felt by the vast majority of the human race, at least the portion that even understands what the Internet and photos are. Unless you're claiming that most of humanity are utility monsters, I'd suggest that if you don't feel such distress, you are extremely weird and you should avoid typical-minding on this subject.

This kind of distress is contingent on societal values, though. It'd have been incredibly weird in $X_BC not to care about the virginity of your daughter or future wife; less so now. "There are deepfakes of me" seems like something people would adapt to and stop being distressed about if it became commonplace (and there's a good chance it will: the interest is there, and a Stable Diffusion-sized deepfake model you can torrent and run on your PC is technically doable). Is 'psychological distress' even bad, absent something worth being distressed over? Compare it to ... confusion and doubt in a tough intellectual problem: it isn't comfortable or pleasant, but it isn't bad.

Distress is over-egging it a lot. 'Mild discomfort' seems more appropriate, at best. Literally, what is there to be upset over?

Woe is me, someone finds me attractive enough to go to the effort of faking a video of me having sex so they can watch it and pretend it's real. And people are paying for it! Oh no how awful I must feel, being in such demand.

If there's one thing we know about humans, it's that they absolutely hate feeling sexually desirable.

You are extremely weird and you should avoid typical-minding here.

You are extremely weird

Unnecessary. You already made this point, less antagonistically.