Culture War Roundup for the week of September 12, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Finally something that explicitly ties AI into the culture war: Why I HATE A.I. Art - by Vaush

This AI art thing. Some people love it, some people hate it. I hate it.

I endorse pretty much all of the points he makes in this video. I do recommend watching the whole thing all the way through, if you have time.

I went into this curious to see exactly what types of arguments he would make, as I've been interested in the relationship between AI progress and the left/right divide. His arguments fall into roughly two groups.

First is the "material impact" arguments - that this will be bad for artists, that you're using their copyrighted work without their permission, that it's not fair to have a machine steal someone's personal style that they worked for years to develop, etc. I certainly feel the force of these arguments, but it's also easy for AI advocates to dismiss them with a simple "cry about it". Jobs getting displaced by technology is nothing new. We can't expect society to defend artists' jobs forever, if they are indeed capable of being easily automated. Critics of AI art need to provide more substantial arguments about why AI art is bad in itself, rather than simply pointing out that it's bad for artists' incomes. Which Vaush does make an attempt at.

The second group of arguments could perhaps be called "deontological arguments" as they go beyond the first-person experiential states of producers and consumers of AI art, and the direct material harm or benefit caused by AI. The main concern here is that we're headed for a future where all media and all human interaction is generated by AI simulations, which would be a hellish dystopia. We don't want things to just feel good - we want to know that there's another conscious entity on the other end of the line.

It's interesting to me how strongly attuned Vaush is to the "spiritual" dimension of this issue, which I would not have expected from an avowed leftist. It's clearly something that bothers him on an emotional level. He goes so far as to say:

If you don't see stuff like this [AI art] as a problem, I think you're a psychopath.

and, what was the real money shot for me:

It's deeply alienating, and if you disagree, you cannot call yourself a Marxist. I'm drawing a line.

Now, on the one hand, "leftism" and "Marxism" are absolutely massive intellectual traditions with a lot of nuance and disagreement, and I certainly don't expect all leftists to hold the same views on everything. On the other hand, I really do think that what we're seeing now with AI content generation is a natural consequence of the leftist impulse, which has always been focused on the ceaseless improvement and elevation of man in his ascent towards godhood. What do you think "fully automated luxury gay space communism" is supposed to mean? It really does mean fully automated. If everyone is to be a god unto themselves, untrammeled by external constraints, then that also means they have the right to shirk human relationships and form relationships with their AI buddies instead (and also flood the universe with petabytes of AI-generated art). At some point, there seems to be a tension between progress on the one hand and traditional authenticity on the other.

It was especially amusing when he said:

This must be how conservatives feel when they talk about "bugmen".

I guess everyone becomes a reactionary at some point - the only thing that differs is how far you have to push them.

that this will be bad for artists, that you're using their copyrighted work without their permission, that it's not fair to have a machine steal someone's personal style that they worked for years to develop, etc.

It is difficult to convey in human language how absurd this argument seems. Is there anyone willing to actually defend it? What do people think "an artist's style" is, and why do they believe that it is, in any meaningful sense, something that can be "owned", on which property rights in any form could be enforced? At the moment, my best guess is that people making such arguments are either so thoroughly confused that they have nothing of value to say, or so dishonest that good-faith conversation with them is flatly impossible.

It seems in line with how people treat trade dress. The underlying justification was to prevent customer confusion, where someone buys a product they did not intend to because the branding was so similar. In the digital age that has been expanded into the look-and-feel standard. It's not that far a logical leap to apply it to the case of training an AI on a specific artist. Take a test case of a programmer with no art skills who trains an AI on a particular artist they like, produces an app or a game, and profits off the similarity, and you've got a precedent for that sort of thing. I don't think that's the correct way to handle things, but I can see something like it happening.

If one artist learns another artist's style, should they be prevented from selling work in that style? Under current law or any previous laws I'm aware of, absolutely not. So where does this idea of protecting artistic style come from? Certainly "trade dress" or "look and feel" have never been applicable before, so why should they be applicable now? And if they are applied, why are they not applied for human artists as well?

The ease of copying is the difference. Every artist who wants to copy artist X's style has to learn it themselves, and that's "bodybuilding hard" - it doesn't really scale. With art AIs, anyone can train a model to draw in any style they have enough samples of; all it takes is some GPU time, and then the trained weights can be duplicated infinitely.

Romm Art Creations Ltd. v. Simcha Int'l., Inc. seems applicable to me but I'm just some rando on the internet. Typical plaintiff work compared to a defendant work that was enjoined in that case.

I think the decision in question was simply wrong, and do not think judges actually deliver such decisions on anything approaching a regular basis.

compare this and this.

That's two different artists, with the former intentionally getting as close to the latter's art style as possible, and then using it for his own profit. No one pretends that this is even slightly objectionable.

It is, but that decision is crazy sauce and apparently wasn't pursued after the preliminary injunction.

People are generally confused about these kinds of rights and have only a vague idea of "intellectual property rights" (a term Richard Stallman considers deliberately misleading propaganda in itself), based on FUD spread by music, movie, book, and software publishers.

The term "intellectual property" blurs the lines between, and masks the purposes behind, different kinds of laws: copyright law, trademark law, patent law, trade secrets, the banning of industrial espionage, and so on. People don't understand even the basics, such as the fact that ideas can't be copyrighted, only their concrete expression.

In this context, an artist's style seems like just another natural kind of "intellectual property."

It doesn't seem much more absurd than saying that Mickey Mouse is something that can be owned and be subject to property rights. Which is exactly what existing copyright law does.

Determining whether someone has copied an artist's style would be more difficult than determining whether someone copied the design of Mickey Mouse, but given that "in the style of X artist" prompts are extremely popular with SD users right now, and people can coherently discuss how accurate or not the AI was at reproducing the requested style, it doesn't seem like it's totally impossible.

It doesn't seem much more absurd than saying that Mickey Mouse is something that can be owned and be subject to property rights. Which is exactly what existing copyright law does.

"A representation of mickey mouse" and "the style of mickey mouse" might not sound too different; after all, most of the words are the same. In the same way, we might say that looking at the moon and travelling to the moon are similar, since most of the words are the same. The reality is somewhat different.

Mickey Mouse is a specific representation. "Style" is the raw material representations are made of. Metallica can copyright "Enter Sandman." They can't copyright musical notes, or even angry-sounding musical notes played on an electric guitar - but the latter is what you are advocating. You're saying that someone should be able to copyright "grim detective stories set in the 1920s", or "stories about D-Day."

You are claiming that people have a right to exclusive ownership of specific colors, line widths and angles, textures, curves and shapes, elements of composition and rhythm, not as a description of a specific drawing or subject, but in general, across all drawings. You aren't claiming that people can copyright Mickey Mouse; you're claiming they can copyright specific kinds of circles, the color orange, and vanishing-point perspective. That is what style is: heuristics for simplifying the infinite complexity of the real world into something more easily expressible. Artists can develop styles, or discover them. They cannot own them in any meaningful sense. No one can. Claiming otherwise is a naked assertion of ownership over someone else's brain.

As an aside, SD can generate pictures of Mickey Mouse doing novel things, same with any Marvel characters and so on. If I'm not allowed to release a new cartoon of Mickey Mouse (or Batman) acting out a new story, are the SD authors allowed to release this model?

(This is a distinct topic from style.)

The AI isn't itself a new cartoon of Mickey Mouse or Batman. It can be made to create a new cartoon of Mickey Mouse or Batman, in much the same way Photoshop or a pencil and paper can. Why should the model be regulated differently from paper or drawing software?

Photoshop does not contain anything specific to mickey mouse, I have to know what he looks like if I want to create a picture of him. Meanwhile, SD does know what mickey looks like, I don't have to know. Even a blind man who has no idea what the mouse looks like can create images of him because SD contains the info of what the character looks like.

I'd agree with you if I had to type in a full, detailed description of what mickey mouse looks like, color, shape etc, and SD knew how to draw him only afterwards.

SD pretty much contains a representation of Mickey Mouse in the model weights. I'm not allowed to release a textured 3D mesh of Mickey Mouse, even though the user first has to choose a viewing angle, a light source position, etc. in order to render a picture of Mickey from that 3D asset. With SD we don't have a 3D mesh, but we have something that can be controlled slightly differently and is still a representation. The format being neural weights instead of explicit 3D assets doesn't change the situation much. Otherwise, what do you say about neural encodings of distance fields from which a surface can be recovered? How about NeRFs?

Photoshop does not contain anything specific to mickey mouse, I have to know what he looks like if I want to create a picture of him. Meanwhile, SD does know what mickey looks like, I don't have to know. Even a blind man who has no idea what the mouse looks like can create images of him because SD contains the info of what the character looks like.

It contains the colors yellow, red, black and white. It contains curve tools that can represent lines of specific angles and specific thicknesses, and a raster grid on which they can be presented.

Neither photoshop nor the AI contain an actual image of Mickey Mouse. They both contain the tools necessary to depict mickey mouse. Photoshop lacks the idea of mickey mouse, and so needs a human who does have that idea. The AI simply contains the idea. Not a picture of mickey, the idea of mickey.

Even a blind man who has no idea what the mouse looks like can create images of him because SD contains the info of what the character looks like.

Even a blind man can create an image of Mickey in Photoshop; a custom UI would make it easier, but it is not actually necessary: square canvas, new layer, circle tool > center of canvas > 150-pixel radius, set line width to 3 pixels, circle tool > center of canvas minus 80px in x and y > 80-pixel radius, etc., etc.
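The coordinate recipe above can be sketched as a short script that rasterizes circle outlines from bare numbers (stdlib only; the 150px/80px figures follow the comment, and the result is illustrative geometry, not an actual likeness of any character):

```python
# Rasterize circle outlines from bare coordinates - no stored picture
# anywhere, just geometry, which is all a scripted (or blind) user needs.
SIZE = 400

def blank_canvas(size=SIZE):
    return [[0] * size for _ in range(size)]

def draw_circle(canvas, cx, cy, radius, width=3):
    """Set every pixel whose distance from (cx, cy) is within width/2 of radius."""
    half = width / 2
    for y in range(len(canvas)):
        for x in range(len(canvas[0])):
            d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
            if abs(d - radius) <= half:
                canvas[y][x] = 1

canvas = blank_canvas()
draw_circle(canvas, 200, 200, 150)  # 150px-radius circle at canvas center
draw_circle(canvas, 120, 120, 80)   # 80px-radius circle, offset -80px in x and y

print(sum(map(sum, canvas)) > 0)    # True: pixels were set purely from coordinates
```

The script never consults an image; the "idea" of the picture lives entirely in the person (or spec) supplying the coordinates, which is the point being made about Photoshop.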

SD pretty much contains a representation of mickey mouse in the model weights.

In much the same way that my brain contains a representation of mickey mouse, yes. In other senses, very much no. There is no picture, there is no mesh. There is no actualized output contained in the model. There is the idea, just as there is in my own mind. The AI is a rudimentary mind, not a collection of pictures. I'm pretty sure this can be proved mathematically, just based on the size of the final model versus the size of the training set, versus theoretical limits of data compression. The original pictures are not in there in any meaningful sense.
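A back-of-the-envelope version of that size argument, using rough, commonly cited figures for Stable Diffusion (roughly 2 billion training images and roughly 4 GB of weights - both assumptions, not exact numbers):

```python
# Back-of-the-envelope: could the weights be a compressed archive of the
# training set? Figures below are rough, commonly cited approximations.
training_images = 2_000_000_000   # assumed: ~2 billion LAION training images
model_bytes = 4 * 10**9           # assumed: ~4 GB of model weights

bytes_per_image = model_bytes / training_images
print(bytes_per_image)            # 2.0 bytes of weight "budget" per image

# Even a small JPEG thumbnail is on the order of kilobytes - thousands of
# times larger than the per-image budget, so the originals can't be in there.
thumbnail_bytes = 10_000
print(thumbnail_bytes / bytes_per_image)  # 5000.0
```

Two bytes per image is far below any plausible compression limit for a recognizable picture, which is the sense in which "the original pictures are not in there" can be made quantitative.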

Else what do you say about neural encodings of distance fields from which the surface can be recovered? How about NeRFs?

I have no idea what this means. Elaborate?

In much the same way that my brain contains a representation of mickey mouse, yes.

Yes, but you can't release your brain. It's not an artefact or a tool. Humans and their minds have a very different standing under the law than inanimate objects and information-carrying media.

I have no idea what this means. Elaborate?

There are new ways of representing 3D scenes or 3D geometry using neural networks. They encode the properties of the 3D scene in neural network weights, and they can be used to create new images. But the representation has no notion of images, pixels, vertices, textures etc, it's all a bunch of "opaque" neural weights.

Here's one variant described: https://youtube.com/watch?v=T29O-MhYALw

The point is, law usually cares about intended use and how one interacts with the thing, not the implementation details. And nobody really knows how courts will treat these new methods. Laws were not made with the knowledge of such things, so interpretations of the wider goals will have to guide the court's work.


I'm aware that existing copyright law doesn't cover style, and I'm not saying it should. I'm just saying it's not as absurd and incoherent as you made it out to be, that's all.

In what way is it not absurd or incoherent? The original quote claims that artists have a right to the styles they use. What would such a right actually look like, in detail? The above is an attempt to actually describe what it would look like, and why such an idea is crazy. If you think it's not crazy, I'd be interested in hearing an argument as to why.