Culture War Roundup for the week of January 30, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Has this group had any discussion regarding AI use in pornography, specifically 'deepfakes'? It's come out recently that a major up-and-coming Twitch streamer, 'Atrioc' (who is recently married and ostensibly very pro-feminist, though, funny as those facts are, they're technically irrelevant to the matter at hand), had been viewing 'deepfake' porn of multiple major female Twitch streamers, including possibly his best friend's girlfriend (that last part is unconfirmed but highly possible). He's come out with an apology and it's a whole thing, but I'm sure this community is more interested in the moral/ethical questions therein than in internet-celebrity drama, so I won't bore you with it.

The following are my perspectives on a few of the potential questions regarding 'deepfake' porn and AI porn in general. I'd love to hear what other people think about these perspectives, because my thoughts on the issue are currently very incomplete.

First and foremost, I have a strong intuitive feeling that it is deeply wrong, perhaps tantamount to some form of sexual harassment/assault (though of course the non-violent sort), to make 'deepfake' pornography of other, non-consenting people. For example, taking a picture from a celebrity's Instagram and using AI to transform it into a high-fidelity (but technically fake) nude picture of them seems functionally the same as, e.g., spying on them in the shower or while they're changing, actions I think we can all agree would be some form of wrong or illegal sexual violation (or perhaps we can't? you tell me). The way I think about this is by considering that a strong enough AI program would theoretically be capable of using a clothed picture of someone to reconstruct the exact way they look naked, which would be quite literally equivalent to the aforementioned situation, or to looking at them with x-ray glasses, etc., which again (I think) we and most people agree would be wrong. And so less powerful AI programs capable of doing something similar seem to be at least on that gradient of wrong, if not exactly as bad.

Furthermore, AI that actually transplants people's faces onto video depictions of sexual intercourse (which is ostensibly what 'Atrioc' was caught viewing) seems worse, or maybe just bad in a different way. I don't have a similar thought experiment to justify why I feel that way, but the wrongness of it is my strong intuition nonetheless.

However, I can also sort of see the argument, at least abstractly, that it's a victimless crime. At the other extreme of the spectrum, fantasizing in one's own imagination about how people look naked, or how it might feel to have sex with them, is not only generally recognized as a very benign behavior but is also known to be something almost everyone does, men and women both. Sometimes people do this completely unconsciously, i.e. in their dreams. And what's the difference between looking at a very (or fully) realistic recreation of the way someone might look with their clothes off, and using one's own imagination to do the same? What if your imagination were very vivid, and you had seen many naked people before, making your training data very good, so that you could reasonably expect to produce a relatively accurate recreation of the way someone looked naked, only in your own mind's eye?

The thing is, acknowledging these potential similarities between an action I find morally acceptable and one I find morally wrong still doesn't make my intuition about the wrongness of 'deepfakes' any weaker. I feel like there must be something I haven't considered yet, which is where I'm hoping you guys might have insight. The only distinction I've found somewhat convincing so far is that maybe the mass distribution via the internet is what makes it wrong? In other words, I find it less wrong (but still somewhat wrong) to make a highly or fully realistic nude of someone and keep it entirely on one's own computer than to make such an image and then distribute it online. This is especially weird because the former is an even more apt comparison to, e.g., peeping on someone in the locker room, which is obviously (?) wrong. So why does it seem more okay to me? Help!

I have a few potential explanations that I'm considering as candidates for the source of my cognitive dissonance here:

  1. Perhaps in reality none of the aforementioned actions are wrong. It's not wrong to spy on someone in the locker room, and so it's not wrong to use 'x-ray glasses' to see through their clothes, or use an AI to edit a picture to do functionally the same thing.

  2. Perhaps instead, in reality, it actually is wrong to imagine or fantasize about what other people look like naked. The reason this is so commonly accepted as benign is that it's impossible to enforce against. But if sexual mores are so arbitrary/constructed that something otherwise wrong can just be agreed upon as acceptable merely because it's unenforceable, how wrong can any ('victimless') violation of sexual mores really be said to be? And thus how wrong is the other situation, where one uses AI?

  3. This kind of segues from 2: perhaps the ultimate cause of this dissonance is that modern-day sexual mores are completely stupid, so deeply incoherent that accepting any one of them will necessarily produce cognitive dissonance when it's contrasted with some other. Is the solution to the 'deepfake' issue, then, to try to change our society's sexual morals/ethics into something more internally coherent?

None of these really addresses why I feel differently about 'turning a clothed photo into a nude' and 'transplanting, in a realistic way, a non-consenting individual's face onto an actor in a depiction of sexual intercourse.' I have no concrete ideas as to why the latter feels worse overall, yet also, in some other (minor) ways, not as bad. And the latter situation is what the whole controversy with the streamer is about, AFAIK. Very confused about all this.

What's right here, and why? What should even be done? Should 'deepfakes' be illegal because of these potential moral/ethical concerns? Should the act of making a deepfake be illegal, or just distributing it? I think if we wanted to, we could make both of these things illegal. We might not be able to actually prevent anyone from making them, considering the AI cat is out of the bag, but it still might be worthwhile to have the illegality on the books if it really is wrong. In other circles I'm seeing claims that a ban would be unenforceable (motivated thinking?), but it seems trivially easy to functionally ban at least the distribution of 'deepfake' porn in a way that would almost certainly reduce its dissemination, if not completely eliminate it, just as with, e.g., child sexual abuse imagery or zoophilia porn.

I also see a lot of people in other circles being prompted by this discussion to argue about the ethics of AI image generation in general. I think this is basically stupid. The arguments claiming that AI image generation is tantamount to plagiarism (of the dataset images, I suppose) are all basically worthless as far as I can tell. But people who have bought into this line of thinking are now going as far as to say that, e.g., photorealistic AI-generated porn (even porn depicting completely synthetic likenesses) is a sexual violation (of all the nude or semi-nude women pictured in the dataset, I guess?). Either way, I am wholly unconvinced by these arguments and think they basically all stem from a bad understanding of how the AI works, so I'm not super interested in discussing this axis of the debate. But I mention it because this community sometimes surprises me, so if anyone here has a really strong argument for why this might make sense that they think I haven't seen before, feel free to mention it.

The object-level issue for me is uncontroversial: there is zero damage from deepfakes, bans on technology to produce arbitrary content (without intent to do direct harm) constitute an unconscionable intrusion and the beginning of a civilization-ending slippery slope, people who support this censorious approach (or even have moral intuitions in favor of it) are barbarians, and I can only wish they be politically disenfranchised somehow.

Naturally, they think the same, or less, of me.

Maybe that's too harsh of me. I've skimmed replies to this tweet today and they left me a bit shell-shocked. Ion had the misfortune of his tweet showing up outside his bubble, somewhere in Normie Twitter. So... well. Most of the time there is no argument being made: they just gloat about him being «ratioed», or ask sarcastically «would it be okay with you if someone made gay porn with your likeness», or say «imagine this was done to your mother/daughter/sister», assuming a shared intuition of obvious harm – and, by implication, hypocrisy and sociopathy in Ion's support for AI fakes of other people but not of himself. (There's also the rhetoric of consent/boundaries/violation, peddled by women and feminists, which IMO speaks to a profound egocentric entitlement, a belief that women are owed desirable perceptions; this deserves more scrutiny, but I won't go into it.)

Some responses provide a semi-rational steelman, though:

Wow Its almost like people dont wanna have realistic looking fake pornography made of them and possibly spread around to where people In their personal life will see and presume real and treat them differently for

And here:

Here are some reasons:

1: some people may see it, assume it’s real, and stop following her

2: her family/friends might see it which is pretty rough even if it’s fake

3: brands she works with might see it and want to cut ties

4: it’s creepy to do this

And my response is: that would be totally okay, I do not need low-value people in my life. If someone (not dearly beloved but clinically demented) cannot or isn't willing to distinguish real from fake images based on context, or just has strong emotional reactions to highly-likely-fake images and can change their attitude toward me on that basis, that person is a long-term liability and should be discarded. This, of course, is the generic principle behind provocative behavior and – in the extreme case – the saying «if you can't take me at my worst, you don't deserve me at my best», popular with crazy bitches. It's strategically sensible for them, because their life strategies are incompatible with being sanctioned for their worst. And disengaging from people who can't take the fake of me at my worst makes perfect strategic sense for me.

And then I realized how deep into a high-IQ, high-decoupler, high-trust bubble I am. For these normal (if very online) people, relations with others are quantitative in nature. They do not filter aggressively. Even if they themselves are more or less undamaged by fake stimuli, they can be materially harmed by losing «followers». They love their friends and family, among whom many are unable to mentally separate contexts. So these people's violent emotional reactions are attuned to their social reality, to the semi-magical realm of the voodooist village or of public-school bullying, where a literal straw man can go a long way toward burying your reputation and, in the limit, you physically.

This just speaks to the need to separate society into more impregnable bubbles, I guess.


P.S. There's an additional, somewhat orthogonal aspect to this, explored e.g. by @SubstantialFrivolity downthread:

fantasizing about having a conversation is fine because having a conversation is fine. Fantasizing about punching someone in the face is bad because punching them in the face is bad. [...] isn't the record that would bother someone. It's the fact that you are doing it at all. "It's ok as long as I don't get caught" is literally the moral code of a child, but as an adult one should realize "no it's wrong even if nobody will ever know".

I've said it before and I'll say it again: Hajnal-type people, guilt-culture people, are fucking weird and freaky, and The Motte would do well to extend some charity to non-First Worlders who show little sympathy to them and readily join the anti-white brigade. They boast of individualism and freedom, but are chained at the very roots of their souls, and proud of those chains. They exoticize and mock Asian cultures for putting «face» and «shame» first, but it's much more healthy and, indeed, liberal and respectful to maturity of others (considering the inevitable consequence of moral blackmail) to care about your public reputation, than to agonize over concealed inner judgements – particularly by people who are not invested in being judicious, as is inevitable in modern societies that are just so much bigger than your dumb medieval Anglo village with a single parish. Their «Christian» morality is Harm OCD reified into a religion, a permanent backdoor inviting any psychopath to tinker and steer them (which inevitably happens in their collectives of substantial scale). Far from surpassing childish innocence, their moral code is more appropriate for ants than for adult humans.

But then again, they are the perfect substrate for building comfortable societies others will enjoy.

I admit that getting the population into the Goldilocks zone that combines dignity culture (which would allow people to fantasize, make, and publish images without absurd social opprobrium) with a reasonable face culture (which would allow them to do so without OCD-like pangs of ridiculous inner guilt, so long as they don't intentionally harm anyone), while avoiding utter atomization, degeneration, and extinction, is probably a task for 200-IQ social engineers. The ones we've got (I hypothesize) pursue other projects. If it were achieved, it'd probably look something like Eastern Europe, only more functional.


P.P.S. One consideration those outraged people are missing, or perhaps deliberately staying mum on, is that more deepfakes = less deception. With the proliferation of highly realistic fake porn, the prior for a given image being real decreases. Right now a microceleb e-girl's never-before-seen nudes may as well be authentic leaks, and in fact most of the time they are (or self-promotional «leaks», as it happens). When manufacturing their near-equivalent is a matter of two clicks, the prior is «duh, nice GPU bro» and the reputational and emotional damage, even in their ostensible paradigm, trends toward zero. But for those who profit off exclusivity of their images of indecent nature... well, their woe is a separate matter, and undeserving of being laundered through this moral rhetoric.
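
To make the prior-shift point concrete, here's a toy Bayes-style sketch (my own illustration with entirely made-up numbers, not anything from the tweet or the commenters): treat circulating «never-before-seen nudes» as a mix of authentic leaks and fakes, and watch what happens to the chance that any given one is real as generation gets cheap.

```python
# Toy model of "more deepfakes = less deception".
# All numbers are invented for illustration; only the direction of the effect matters.

def p_authentic(fakes_per_real: float) -> float:
    """If `fakes_per_real` fakes circulate for every authentic leak,
    return the chance that a randomly encountered image is authentic."""
    return 1.0 / (1.0 + fakes_per_real)

for k in (0, 1, 10, 100, 1000):
    print(f"{k:>5} fakes per real leak -> P(authentic) = {p_authentic(k):.4f}")
```

Under this crude mixture model, once fakes outnumber reals a hundred to one, P(authentic) falls below 1%, and the rational default reaction to any such image is exactly the «duh, nice GPU bro» prior described above.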

If someone (not dearly beloved but clinically demented) cannot/isn't willing to distinguish real and fake images based on context, or just has strong emotional reactions to highly-likely-fake images and can change attitude towards me on their basis, that person is a long-term liability and should be discarded.

I think the matter is more subtle than this.

I obviously think most people can, on an abstract level, distinguish between real and fake images. However, I'm not willing to jump from that fact to the conclusion that most people, including many people valuable enough to keep in one's life, wouldn't have strong emotional reactions to some types of even fake images depicting a person, especially images of a sexual nature. And however much they know on a conscious level that the images are fake, I feel like the reactions many people have to these images could still change their attitude toward the person ostensibly depicted, in a real and meaningful way.

I think most people think differently about a person after fantasizing about having sex with them than they did before such a fantasy crossed their mind. I certainly think most people would think differently about them, and would almost certainly in some unconscious way treat them differently, after fantasizing about such a thing 100 times. And I think that's even more true if they've had access to photorealistic depictions of this fantasy that are fake but produced by something other than their imagination; in other words, images that are much easier for their sexual lizard-brain to believe are real, even if on a higher, more abstract level they know the images are fake.

Other than that, you're right that legally these things shouldn't literally be banned. It was a mistake to include that set of questions in the body of my post. And you're right that I think most of the online discourse surrounding the subject misses the mark one way or another, which is nothing new when it comes to subtle moral/ethical issues.

Aside from all this, though, the disconnect I felt existed in my intuition was resolved by another commenter, who described the delineation as such: anything in my head or on my hard drive, and exclusively in my head or on my hard drive, is entirely my business. But as soon as I, e.g., start to publish online what is on my hard drive, the probability that the persons depicted in even fake pornography will find out someone has done as much starts to approach 1. And that's where I've started to cross a line. This is more or less what the quoted 'steelman' arguments you found on Twitter are getting at, even if still for somewhat wrong reasons: publishing that material, making it highly likely if not certain that the depicted persons will be made aware of it, is what is wrong, at least morally/ethically. By doing so I've made it their business, where previously it was only my own. Regardless of the particular way in which they're affected, which you might personally think shouldn't matter to them (loss of followers, family or friends potentially seeing it and not knowing it's fake, or, at minimum, just having to know about it), it wasn't necessarily my right to make them deal with any of these things, even if I think they shouldn't care about them. The commenter whose analogy I found so apt likened it to fantasizing about a person sexually, and then directly telling them that you have, in fact, fantasized about them sexually. Maybe you think they still shouldn't care. But as far as I'm concerned, by doing this you've made something that was formerly solely your business into their business, in a way I don't think you should.