Culture War Roundup for the week of December 19, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Begun, the Butlerian Jihad has:

/r/dune is not accepting AI-generated art.

This applies to images created using services such as DALL-E, Midjourney, StarryAI, WOMBO Dream, and others. Our team has been removing said content for a number of months on a post-by-post basis, but given its continued popularity across Reddit we felt that a public announcement was justified.

We acknowledge that many of these pieces are neat to look at, and the technology sure is fascinating, but it does technically qualify as low-effort content—especially when compared to original, "human-made" art, which we would like to prioritize going forward.

Ok, the Dune one's a little funny given the in-universe history, but a pretty wide breadth of art-focused hosts have banned AI-generated art (to the extent they can detect it) or have sometimes-onerous restrictions on where AI-genned art can be used. Some sites that still allow AI art, such as ArtStation or DeviantArt, have had no small amount of internal controversy as a result. Nor is this limited to art: StackOverflow's ban on ChatGPT-generated responses makes a lot of sense given ChatGPT's low interest in accuracy, but Google considers all AI-generated text spam as a category for downranking purposes, to whatever extent they care to detect it. And a lot of mainstream political positioning seems about what you'd expect.

Most of these are just funny, in no small part because alternatives remain (uh... maaaaaybe excepting Google?). This is a little more interesting:

We are writing in response to your correspondence of October 28, 2022 as counsel to Kristina Kashtanova. Kashtanova was recently granted copyright registration no. VAu001480196 for her work “Zarya of the Dawn” (the “Work”).

Subsequent to Kashtanova’s successful registration of the Work, the Office initiated cancellation of her registration on the basis that “the information in [her] application was incorrect or, at a minimum, substantively incomplete” due to Kashtanova’s use of an artificial intelligence generative tool (“the Midjourney service”) as part of her creative process. The concern of the Office appears to be that the Work does not have human authorship, or alternatively that Kashtanova’s claim of authorship was not limited to exclude elements with potential non-human authorship. We are writing to affirm Kashtanova’s authorship of the entirety of the Work, despite her use of Midjourney’s image generation service as part of her creative process.

Zarya of the Dawn isn't actually a good piece -- and not just for the gender Culture War reasons; its Midjourney use isn't exactly masterful and probably just an attempt to cash in on Being First -- but most art isn't good. Quality isn't the standard used by the Copyright Office or copyright law more broadly.

The standard is complicated, not least of all because copyright itself is complicated. Sometimes that's in goofy ways, like in Naruto v. David Slater et al. (better known as the Monkey Selfie case), which asked whether an animal had the ability to bring a copyright suit for a picture taken by that animal. While Naruto fell on statutory standing questions in an unregistered copyright suit, the Copyright Office issues a regularly-updated compendium of practices for those seeking registration that seems to reference it or a similar case, among other pieces:

As discussed in Section 306, the Copyright Act protects “original works of authorship.” 17 U.S.C. § 102(a) (emphasis added). To qualify as a work of “authorship” a work must be created by a human being. See Burrow-Giles Lithographic Co., 111 U.S. at 58. Works that do not satisfy this requirement are not copyrightable.

The U.S. Copyright Office will not register works produced by nature, animals, or plants. Likewise, the Office cannot register a work purportedly created by divine or supernatural beings, although the Office may register a work where the application or the deposit copy(ies) state that the work was inspired by a divine spirit. Examples:

  • A photograph taken by a monkey

But while animal pictures or naturally-formed rocks are one example left outside of the scope of "authorship", it's not the only one:

Similarly, the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author. The crucial question is “whether the ‘work’ is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.”

Most of these examples are trivial: size changes, manufacturing requirements, simple changes to a song's key, or direct output of diagnostic equipment. The most complex currently listed example is "A claim based on a mechanical weaving process that randomly produces irregular shapes in the fabric without any discernible pattern", which is the sort of highly specific thing that makes you sure someone's tried it.

It'll be interesting to see if the next update has text on AI-generation, and if so, if the Office tries to separate different levels of human interaction (or, worse, the models themselves).

The US Copyright Office's determinations do not control court interpretation of the Copyright Act, so it's possible that prohibitions on registering AI-generated or AI-assisted art or text would still leave some ownership rights. But it's unlikely, and registration is required before someone can get statutory damages. Now, most people aren't going to care much about the legal exactitudes of copyright for their Original Character Donut Steel 8-Fingers to start with. Because all copyright claims are federal or international law, and there is no federal small claims court (and no meaningful international court), these protections are fairly minimal for hobbyists or end users even when present and when the user cares, anyway.

But it isn't too hard to think of problems that could come about, anyway. There's already a small industry of pirates that scrape public spheres for artwork and creations to repeat (cw: badly drawn cartoon butts). To the limited extent these have been kept in check, it's because traditional retailers are at least worried about the outlier case where someone's willing and obnoxious enough to prove a point, or at least unsure they're at far enough distance for tort and PR purposes. And this is a signal, if a weak signal, for other matters like whether the business would care about liability if their USB cable burns down your house, or honor a return for the clothing that fell apart seconds after you put it on, or a thousand other minor things.

It's... not clear how long that lasts, if AI-gen is outside of copyright, categorically, but also hard for humans to detect (and filtered for AI-art humans find hard to detect). I was cautiously hopeful that tools like StableDiffusion could end up a helpful tool for artists, but a lot of artists are concerned enough about the concept to be willing to burn down the field and join hands with Disney to do it. I don't think people are going to like what happens when the groups optimized for a copyright-free existence become hard to distinguish from their own sphere, and able to happily intervene within it.

Zarya of the Dawn

Off-topic, but looking it up, I found this, apparently a real part of the comic. I found it hilarious, it looks like a parody of the recent obsession with "mental health".

Depending on how strongly you believe in the "Clown World" meme (and subsequently, depending on how strongly you associate clownishness with The Joker from DC), that does seem like a plausible prediction. Plausible, anyhow.

I wish the /r/dune mods had done that post in character.

Anyway, regarding Google's downranking: I assume that's pragmatism in the fight against SEO spam. Google operates in a bizarre hellscape dominated by Goodhart's Law. Blanket-downranking AI text might well be efficient at separating the wheat from the chaff. Sure, that's conditional on being able to detect it at all. But SEO isn't about sneaking AI content into human repositories. It's about exploring the attack surface of Google Search and, upon finding something promising, guzzling as much algorithmic attention as possible. Athena, unfolding fully clothed from the mind of Zeus. I'm...not terribly bothered by the prospect of filtering more of that out.

I can't wait for the deep philosophizing about precisely how much 'human' authorship is required for copyright to attach.

Because whatever it is, the current state of AI art lets you sprint right up to that line, stop on a dime before crossing it, and then stick your pinky toe over it if you want.

Seriously. If the human does the basic sketch outline of a given concept and feeds the sketch into the AI with a detailed prompt of what they want that sketch to eventually look like, how much of the authorship is theirs?

Or the reverse. Have the AI produce the basic sketch of the concept and then the artist develops it from there. Or the artist develops a sketch, has the AI turn it into a paint-by-numbers picture, and the artist, following the AI's instructions, paints in the final details.

Or the AI produces the concept from scratch, but the artist goes in and modifies every part of it in some way such that every pixel of the image has been 'manually' changed, even if the base image is recognizable.

Or do the classic 'cheater' move of having the AI produce something originally, place some paper over the image with a backlight, and trace over it manually and claim the work as your own. Tracing over photographs is generally frowned upon, at least if you're a professional artist, but at least that's actual human hands producing the end result.

The AI can aid the artist to almost any degree in any stage of the process. Its parameters can be adjusted with fine-grained particularity to have as much or as little influence as the law claims is required.

It's even crazier for written works. If an AI produces an essay that effectively conveys the ideas the 'writer' has in their head such that they are satisfied with publishing it with minimal edits, does that somehow invalidate the ideas as written? How much editing does it take to make the AI's words 'your' words, especially if the AI's words already convey your thoughts in a perfectly cromulent manner? What if the writer just uses one of those 'predictive text' programs that lets them write faster by filling in the words it thinks the person wants, but the writer manually approves each one?

(Note, as a matter of pride I would still want to physically type out most of my correspondence, including comments like this. It seems like that is a fair expectation when you communicate with other humans whilst representing yourself as a human presenting their own ideas that you not filter it through a middleman. But again, what does 'filter through a middleman' mean in practice?)

And I'll even agree that it is 'wrong' to represent yourself as having artistic skills if you rely on the AI to actually produce the work, and that you shouldn't say an AI work is 'yours,' certainly not without disclosing the fact that AI was involved.

But this particular angle of attack, making AI-produced works exempt from copyright, is not going to stem the tide that's coming and will only produce a lot of serious-sounding but absurd-in-practice rules and enforcement mechanisms that will waste a lot of people's time.

Artists are definitely NOT winning people to their side by being so whiny about the issue, rather than attempting to suggest reasonable policy prescriptions that might at least be politically viable.

A big part of it for me is that you generally want to believe you're communicating with another human who is capable of integrating information and learning and taking some of that information 'to heart' as it were.

NOT an automated bot that is incapable of changing its mind. Time spent discussing stuff with an AI is genuinely wasted time, in this sense. So if I want other people to do me the courtesy of actually reading what I write and responding to it with 'genuine' interest, I can do them the courtesy of typing the words out myself so they are actually the product of my brain.

The question is, what is even a reasonable policy prescription here?

Requiring that people hold valid licenses for their entire training sets isn't even enforceable enough to do anything. And letting these things go unhampered means that you have machines that can launder any sort of intellectual property.

I'm not even sure where the law as it stands lands us. The result of that Github Copilot lawsuit is a total mystery to me for instance.

Requiring that people hold valid licenses for their entire training sets isn't even enforceable enough to do anything

Why do we think this is unenforceable? Unappetizing because it hits AI hard, sure, but surely not unenforceable.

NAI, who suffered a high-profile hack & went on to watch their model be used as the basis for a thousand merges and derivatives, recently posted an infographic about how you can use their (unique, supposedly) style tokens to identify if a model is derived from theirs, and even the rough degree to which it was mixed in.

Tokens (even unused tokens!) are identifiable components of the model file structure, and (probably?) have to be. Combinations of tokens (which almost all of the NAI style tokens are) would be harder if you didn't know them, but they're still testable in seconds on a consumer GPU or minutes on a CPU. But very few people are interested in controlling tokens, and while NAI isn't the only group, there's not a ton of artists there.
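A rough sketch of how that derivation check might work, assuming you've already pulled the embedding rows for the suspect style tokens out of each checkpoint: if a candidate model carries the reference model's rows through near-verbatim, it's probably a merge or derivative. The token names, vectors, and threshold below are all made up for illustration; real embeddings are hundreds of dimensions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def style_token_overlap(reference_embeddings, candidate_embeddings, threshold=0.99):
    """Fraction of the reference's style-token rows that appear
    nearly unchanged in the candidate's embedding table.

    Merged/derived models tend to preserve these rows almost exactly;
    independently trained models should not.
    """
    hits = 0
    for token, ref_vec in reference_embeddings.items():
        cand_vec = candidate_embeddings.get(token)
        if cand_vec is not None and cosine(ref_vec, cand_vec) >= threshold:
            hits += 1
    return hits / len(reference_embeddings)

# Toy 3-dimensional "embeddings" for two hypothetical style tokens:
reference = {"styleA": [0.1, 0.9, 0.2], "styleB": [0.7, 0.1, 0.5]}
derived   = {"styleA": [0.1, 0.9, 0.2], "styleB": [0.7, 0.1, 0.5]}  # rows carried over
unrelated = {"styleA": [0.9, 0.1, 0.1], "styleB": [0.2, 0.2, 0.9]}

print(style_token_overlap(reference, derived))    # 1.0
print(style_token_overlap(reference, unrelated))  # 0.0 at this threshold
```

This only detects ancestry through the text encoder's embedding table, which is why it works for NAI's unique tokens but does nothing for artists whose concern is the image output itself.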

Artists want to enforce on the media-level output, which is... more complicated, at best.

Well how would you enforce it?

Any model without an associated public data set is presumed guilty.

ban proprietary ML

I mean I guess that works, and I don't even have objections to that. But I don't think presumed guilt is legally tractable.

Models don't have rights.

The people who make them do. The civil forfeiture argument was never convincing to anybody. I don't see how this is any different.

My hypothetical model is speech which is free by default and you need to prove it's illegal in court before you can censor me or you're denying me due process.

Otherwise, enjoy the crypto wars again, I'm sure we can find a way to make weights fit on a t shirt.

A completed model doesn't really have stored information about the datasets used to generate it in any accessible manner (not least of all because it could easily outsize the model by several orders of magnitude). Someone could easily say a model was generated on dataset X and instead use dataset X + dataset Y, and proving otherwise would be very hard given our current understanding of how models work.

And there's a variety of complications downstream from that -- if the original model X was trained on purely legal data, and someone brings a tuned model that they say was only trained on a subselection from rights-compliant source, for example.

(And then there's the downstream economic forces: if half of the image hosts require you to give up whatever AI/ML rights for submission, then this goes wonky places even if most hardcore artists don't use them.)

Right the only way to make this work would be nothing up your sleeve training-wise. You'd provide your training set and if your model can't replicate, bam, you're busted.

You'd need to disclose not only your training set and model, but also the training environment and initial configuration. And then someone would need to spend hundreds to hundreds of thousands of dollars to do the actual training.
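The disclosure half of that could be sketched as a simple commitment scheme: hash the dataset manifest, training configuration, and seed into one digest published next to the weights, so an auditor replaying the run at least knows they started from the same declared inputs. Everything here is hypothetical naming, and real training runs are rarely bit-reproducible across hardware anyway, which is the expensive part this doesn't solve.

```python
import hashlib
import json

def training_commitment(dataset_manifest, config, seed):
    """Hash everything an auditor would need to replay a training run.

    dataset_manifest: {filename: sha256-of-file} for every training item
    config: training hyperparameters (architecture, steps, lr, ...)
    seed: the RNG seed used for initialization and data shuffling
    """
    payload = json.dumps(
        {"data": dataset_manifest, "config": config, "seed": seed},
        sort_keys=True,  # canonical key ordering so the hash is stable
    ).encode()
    return hashlib.sha256(payload).hexdigest()

# The trainer publishes this alongside the model weights...
manifest = {"img_0001.png": "ab12...", "img_0002.png": "cd34..."}
config = {"arch": "unet-base", "steps": 100_000, "lr": 1e-4}
published = training_commitment(manifest, config, seed=42)

# ...an auditor re-deriving the same inputs gets the same commitment,
# while any undisclosed extra data changes it.
assert training_commitment(manifest, config, seed=42) == published
tampered = dict(manifest, **{"scraped_extra.png": "ef56..."})
assert training_commitment(tampered, config, seed=42) != published
```

Note this only commits the trainer to a story about their inputs; catching a lie still requires someone to actually redo the training and compare.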

That's an interestingly roundabout way of mass-banning, but it runs into the same problem as just trying to ban the tech, in that a lot of people are just going to smuggle AI-gen outputs as 'naturally'-generated.

You'd need to disclose not only your training set and model, but also the training environment and initial configuration

Done and done.

That's an interestingly roundabout way of mass-banning

I'm ok with this.

a lot of people are just going to smuggle AI-gen outputs as 'naturally'-generated.

Which is going to happen anyway, in fact it already is happening.

The question is, what is even a reasonable policy prescription here?

If I were suggesting them, on behalf of artists, I might go with laws that specifically outlaw representing work that involved AI models above a certain size at any stage as 'human created.' I would suggest that limitations could be placed on the commercial use of AI art, particularly that used in marketing and production of mass media. A simple way to restrict this could be to render such creations as public domain works instantly.

Maybe require any time a model above a certain size is 'trained' the dataset being used must be registered with [government agency] and artists might have some process by which they can opt out of inclusion in the data.

Would these rules be hard to enforce? Wildly. Would they be relegated to irrelevance as improved models appear? Probably.

But these suggestions are at least isomorphic to existing regulatory regimes and could utilize processes that currently exist.

Would these rules be hard to enforce? Wildly. Would they be relegated to irrelevance as improved models appear? Probably.

Perhaps human brains - even many of them grouped together and collaborating - are unable to come up with reasonably enforceable regulations on AI-generated images that placate the naysayers, and we'll have to rely on some sort of advanced AI in the future to come up with such regulations. Because, yeah, it already looks like the cat's out of the bag at this point. The models that are out there are already very powerful and can run very quickly on old consumer-level hardware, so even if all development in this were to stop right now, the trouble will remain.

I would suggest that limitations could be placed on the commercial use of AI art, particularly that used in marketing and production of mass media. A simple way to restrict this could be to render such creations as public domain works instantly.

I think this would help to an extent, but just because some piece of an image is in the public domain wouldn't prevent a business from using it and from having copyright over the final product. Neither Die Hard nor its soundtrack is in the public domain just because it used clips copied straight from Beethoven's 9th symphony. I expect that assets that really define the product's brand, like, say, the design of the main character, might be forced to be completely human-designed, though, which could help. But then again, perhaps trademark law could come into play to provide legal protections even if copyright were lost.

The models that are out there are already very powerful and can run very quickly on old consumer-level hardware, so even if all development in this were to stop right now, the trouble will remain.

A close analogy is to cryptography, which the government at first tried to regulate as literal weapons, but not only was this a ridiculous position, it was impossible in practice due to the nature of cryptography itself.

And now crypto is ubiquitous and, of course, runs on consumer grade hardware.

Hard to see AI taking a different path, honestly.

just because some piece of image is in the public domain wouldn't prevent a business from using it and from having copyright over the final product.

Well that's what I'm saying, if you use AI work in your commercial product, the law states now that the whole work is public domain.

Thus there is an incentive on the creator's part to be very, VERY careful about what they include in their product and fastidious about keeping records to prove each step is non-AI-generated. And it creates an incentive for others to try to catch them in the act.

Well that's what I'm saying, if you use AI work in your commercial product, the law states now that the whole work is public domain.

Thus there is an incentive on the creator's part to be very, VERY careful about what they include in their product and fastidious about keeping records to prove each step is non-AI-generated. And it creates an incentive for others to try to catch them in the act.

I see, so you mean essentially creating another type of copyright status that's a sort of "infectious public domain," where its mere use in another work "infects" the entirety of the work with "public domain" status. If this were implemented and enforced, it seems like it could work, but my guess is that it would also effectively decimate the current entertainment industry. Having to fastidiously document every single brush stroke that went into creating every single background prop in a film or every single floor texture in a video game would increase the costs of production of these things massively, to the extent that I think the business case just wouldn't be there anymore for most companies. And I think such a level of documentation would be required so as to prevent companies from trivially getting around the regulation with a "don't ask, don't tell" approach.

I suppose there could be multiple tiers of "infectious public domain" for AI generated images where businesses could use AI for some low level things but not others and still retain copyright over the final product so as to not to be so onerous to the production process, but I admit that's getting too deep into the details of things that I'm ignorant of for me to form a meaningful opinion on.

but my guess is it that it would also effectively decimate the current entertainment industry.

Not seeing the downside, personally.

Having to fastidiously document every single brush stroke that went into creating every single background prop in a film or every single floor texture in a video game would increase the costs of production of these things massively, to the extent that I think the business case just wouldn't be there anymore for most companies.

Yeah, but we'll have to do that anyway if our goal is to limit copyright to only human-created works.

I don't know how you solve this issue any other way.

And I think such level of documentation would be required so as to prevent companies from trivially getting around the regulation with a "don't ask, don't tell" approach.

Oh yes, there would be people who consider it worth the risk. Especially as it becomes way, way harder to tell AI art from human.

A simple one would be to just pay a given artist to sign off on AI art as if he was the creator, and who is willing to lie under oath and investigation that he personally created all of those works.

I suppose there could be multiple tiers of "infectious public domain" for AI generated images where businesses could use AI for some low level things but not others and still retain copyright over the final product so as to not to be so onerous to the production process,

Yeah concept art, storyboarding, rough drafts, all things that don't make it into the final, saleable product could probably escape scrutiny.

The way I'm conceiving this is that "if you publish a work for purpose of sale to the public, it must not contain AI generated content."

but my guess is it that it would also effectively decimate the current entertainment industry.

Not seeing the downside, personally.

Well, the issue is that the entire purpose of such regulations is to appease the people who depend on these industries for their income. If the regulations just destroy their income in a different way - instead of replacing 10 artists with 1 artist who uses AI to be 10x as productive, it's replacing 10 companies that hire artists with 1 company that hires artists - it seems like it wouldn't appease those people. The ultimate point of any law surrounding copyright or intellectual property in general is to protect incomes, after all.

Yeah concept art, storyboarding, rough drafts, all things that don't make it into the final, saleable product could probably escape scrutiny.

The way I'm conceiving this is that "if you publish a work for purpose of sale to the public, it must not contain AI generated content."

This seems workable, but also kind of a nightmare scenario for the people who would want these regulations. In this scenario, all the rote work that goes into producing the textures you actually see on the screen must be painstakingly hand-painted, but all the creative work that went into creating the concepts behind them could have AI aid, thus reducing the number of the less rote, more creative artistic jobs. Perhaps better than AI being used in every step of the process at least, I suppose, and perhaps only a little worse than how it is now, since from what I understand, most art jobs in the industry tend to be rote work anyway.

Well that's what I'm saying, if you use AI work in your commercial product, the law states now that the whole work is public domain.

Do you have a citation for this? That sounds extremely unlikely to me and would completely upend my understanding of copyright law. Which isn't to say you're wrong as I'm not well versed in copyright law, but a lot of people I work with have been including AI-generated material in their commercial products for a while now and I'd be shocked if their legal teams okayed that if it made the whole work public domain.

EDIT: Nevermind, I missed that this was a hypothetical.

Yeah, I'm proposing a policy solution that might be politically viable, not one that currently exists.

And, on a deeper level, it's hard to miss some writing on the wall for the broader concept. StableDiffusion 2.0 was released a few weeks ago, closely followed by 2.1, with some nice new features and also a couple somewhat noticeable subtractions: the new tokenizer has removed tags related to celebrities and almost all living or recently-living artists, along with anything that triggered the NSFW filter. The upcoming StableDiffusion 3.0 plans to allow a manual opt-out for artists from the training side.

The stated reasons for these changes are condensed here, but the less overt reason is probably public controversy and things like this. I reallllllly don't want to get into the legal questions of the state actor doctrine, but I would like to suggest that there are legal spaces around this discussion that might be weighing heavily on his mind.

Now, in theory, there's some technical advantages to this approach, not just the bizarre legal ones. Furry Diffusion trainers have already found many problems in tuning 1.x-variants due to the often-overloaded nature of common terms, and the broad concept of encouraging models more specifically focused for the interests and desires of specific people makes a good deal more sense than the limits and flaws of post-generation filtering.

But in turn, that personally-optimized tuning is hard and energy-intensive, and currently not available for a lot of people, even as GPU prices have dropped a bit.

Or it might be reasonable to argue that these excluded spaces are not that important, were we not also fighting thousands of culture war battles over everything else remotely related to sex, or spending tremendous amounts of (tbf, unuseful) attention on celebrities, or near-worshipping some of the excluded artists. And it's hard to see why all three are uniquely reasonable to set aside.

And accepting these limitations at the Stability AI initial training level risks anyone trying to uncollar a locally-tuned version being stigmatized and viewed as interested solely in the very bad acts that Stability fears being tarred with themselves. That Eshoo letter, after all, is just as pissed about locally-generated 'bad' art as that made on a server.

Perhaps coincidentally, attempts to fund a porn-friendly version just got kicked off Kickstarter, on the tail end of this rather vague post.

I'm somewhat skeptical that the only pressures being applied are the public ones.

Almost everyone already has access to an image generator that can produce images of whatever they want. The only difference is that they can't share them with other people because they're inside their heads. Nobody worries much about this because it's normalized and because you can't show them to other people and pretend they're real. If illicit images become common enough, people will think of them the same way and they'll lose their power.

Really enjoyed these posts, two comments I'd add.

On the copyright side, I think it makes sense that the output of AI art generators can't be copyrighted. At least, as long as the use of copyrighted art to train an AI model isn't copyright infringement (I think it would pretty clearly be fair use currently). Otherwise you could do something like:

1. Find an artist whose style you like

2. Train an AI art generator on that artist's works

3. Produce new works in that same style, whose copyright you own but the original author doesn't

That seems problematic to me. Especially since if you had spent time learning to produce art in that same artist's style without the AI it could be a copyright violation. Laundering copyright violations through an AI seems like a problem to me.

On the legal front, I can't believe anyone is surprised by this arising as an issue. I remember when AI Dungeon was new and it got used so often for the production of NSFW content that the model started to produce it in response to ordinary queries, eventually leading the developers to make some changes to the model. The legal questions also seem complex. If I generate photo-realistic CP of a child who does not actually exist, is that a crime? Does it generate liability? Just for the individual actually producing or possessing the image, or for the model developers as well? What if I create nudes of a celebrity? Would that be a tort? Maybe related to the use of likeness or image? What about various states' revenge porn laws?

These questions do not have obvious answers to me and I understand why no one wants to be the first to find out!

  1. Find an artist whose style you like
  2. Train an AI art generator on that artist's works
  3. Produce new works in that same style, whose copyright you own but the original author doesn't

That seems problematic to me. Especially since if you had spent time learning to produce art in that same artist's style without the AI it could be a copyright violation.

I don't think it could, since styles can't be copyrighted. At best, it could be a trademark violation, if the artist in question has their style trademarked in some way. But in that case, the AI can't help you launder that, because it doesn't matter how you copied someone else's trademark, just that you did it. If you put a certain shade of red on the bottom of your shoes and sell them, that's a trademark violation whether you did it intentionally with malicious intent or you did it by randomly placing shoe parts and paint into a duffel bag and shaking them up and out popped, by pure chance, shoes with that particular shade of red on the bottom.

IANAL though, so someone please correct me if I'm wrong.

Especially since if you had spent time learning to produce art in that same artist's style without the AI it could be a copyright violation.

I don't understand this; it is my understanding that this absolutely would NOT be a copyright violation. A style cannot be copyrighted, AFAIK, and the styles of influential artists have been copied forever (indeed, that is what it means to be an "influential artist"). Can you elaborate?

If I generate photo-realistic CP of a child who does not actually exist, is that a crime?

Not in the US but possibly elsewhere. Unless of course the image is obscene or somehow is unprotected and illegal speech in another way, which is unlikely.

If I generate photo-realistic CP of a child who does not actually exist, is that a crime?

I'm going to bite that bullet and say 'yes'. The defence there is "but it's not a real child". However, the impetus is "I want to fuck a real child, but since I can't do that without being thrown in jail, this is the next-best thing". Or else "I don't want to fuck kids, but I'm happy to produce art for the sickos who do and take their money".

Since the consumer of child porn most likely would fuck a child if they could manage it, that is indicative of a desire to commit a crime (as for all the MAPs who are "but I don't want to do anything to a real child, I'm just romantically/sexually attracted without that being my will": if you're consuming child porn, yeah, that argument doesn't hold too much water). Getting child porn of real children being raped and abused is not a victimless crime. Moving it one step up, 'this is photo-realistic so it looks like a real child but is computer-generated' is only a fig leaf. Since you can't fuck a kid without getting into trouble, and you can't have porn of real kids being really fucked without getting in trouble, you're settling for the next best thing.

Since you can't fuck a kid without getting into trouble, and you can't have porn of real kids being really fucked without getting in trouble, you're settling for the next best thing.

You can't tie up a non-consenting woman and have sex with her, so therefore doing the same to a consenting woman who is pretending to be non-consenting is settling for the next best thing (and must be illegal or wrong). Spot the flaw?

I'm going to bite that bullet and say 'yes'. The defence there is "but it's not a real child". However, the impetus is "I want to fuck a real child, but since I can't do that without being thrown in jail, this is the next-best thing". Or else "I don't want to fuck kids, but I'm happy to produce art for the sickos who do and take their money".

Since the consumer of child porn most likely would fuck a child if they could manage it, that is indicative of a desire to commit a crime (as for all the MAPs who are "but I don't want to do anything to a real child, I'm just romantically/sexually attracted without that being my will": if you're consuming child porn, yeah, that argument doesn't hold too much water).

Presuming all this, does it also follow that photorealistic first person shooter games, if they eventually become possible, ought to be illegal? Or rather, a video playthrough of a photorealistic first person shooter game where the player murders innocent bystanders. Of course, there are plenty of reasons to want to watch a video of a photorealistic first person shooter game other than wanting to live out the fantasy of what's depicted in the video game but lacking the legal ability to do so, but those reasons can apply to fictional photorealistic CP as well.

In the US, it is not illegal to merely desire to commit a crime.

Punishment for a status is particularly obnoxious, and in many instances can reasonably be called cruel and unusual, because it involves punishment for a mere propensity, a desire to commit an offense; the mental element is not simply one part of the crime but may constitute all of it. This is a situation universally sought to be avoided in our criminal law; the fundamental requirement that some action be proved is solidly established even for offenses most heavily based on propensity, such as attempt, conspiracy, and recidivist crimes.[4] In fact, one eminent authority has found only one isolated instance, in all of Anglo-American jurisprudence, in which criminal responsibility was imposed in the absence of any act at all.

Powell v. Texas, 392 US 514, 543 (Black, J, concurring).

Besides, the only reason that non-obscene child porn is not protected by the First Amendment is that its production harms the child involved. New York v. Ferber, 458 U.S. 747 (1982). Hence, non-obscene depictions of children having sex that do not involve a real child are protected speech.

What is that one isolated instance?

That argument would also apply to someone writing the following erotic fanfiction: "two people have sex very erotically but without meeting legally relevant definitions of obscenity. One of them was only 17 years, 364 days, 23 hours, and 59 minutes old, you sick fuck" (check out my AO3 account and fanbox for more sexy action featuring minors)

Can that be made illegal too, since it betrays a desire to bang minors? In fact, that parenthetical comment could be prosecuted under current caselaw about "pandering" if we use your definition!

Can that be made illegal too, since it betrays a desire to bang minors?

For a start, having actual sex with a 17-year-old is likely to be legal in many places. See https://en.wikipedia.org/wiki/Age_of_consent#/media/File:Age_of_Consent_-_Global.svg

But filming it, drawing it, or talking about it can still be illegal in many of the same places!

(Edit: or getting married before having sex, in those blue states that recently raised the marriage age to 18, but left the age of consent at 16 or lower!)

Unless of course the image is obscene

How could such an image possibly not be obscene?

Because in the United States, a work is obscene only if 1) the average person applying contemporary community standards would find the work, taken as a whole, appeals to the prurient interest; AND 2) the work depicts or describes, in a patently offensive way, sexual conduct specifically defined by the applicable state law; AND 3) the work, taken as a whole, lacks serious literary, artistic, political or scientific value.

In contrast, a work can be child pornography even if it is not obscene. So, a work which has substantial literary value, when taken as a whole, is not obscene, but might be child porn. Similarly, a work which does not depict sexual conduct (or excretion, as some courts have said) cannot be obscene, but it can nevertheless be child pornography, because "the legal definition of sexually explicit conduct [in the federal child porn statute] does not require that an image depict a child engaging in sexual activity. A picture of a naked child may constitute illegal child pornography if it is sufficiently sexually suggestive." See here. And see US v. Knox, 977 F. 2d 815 (3rd Cir 1992)[Child porn conviction upheld where "[t]he tapes contained numerous vignettes of teenage and preteen females, between the ages of ten and seventeen, striking provocative poses for the camera. The children were obviously being directed by someone off-camera. All of the children wore bikini bathing suits, leotards, underwear or other abbreviated attire while they were being filmed. The government conceded that no child in the films was nude, and that the genitalia and pubic areas of the young girls were always concealed by an abbreviated article of clothing. The photographer would zoom in on the children's pubic and genital area and display a close-up view for an extended period of time . . . with the obvious intent to produce an image sexually arousing to pedophiles. "].

Hence, many works can be child porn, yet not obscene.

On the copyright side, I think it makes sense that the output of AI art generators can't be copyrighted. At least, as long as the use of copyrighted art to train an AI model isn't copyright infringement (I think it would pretty clearly be fair use currently)... 3. Produce new works in that same style, whose copyright you own but the original author doesn't

To an extent, (and even more so where steps 1/2 are replaced by 'use img2img'), though the state of copyright for 'traditional' cloning of media kinda makes this a weird or awkward question. The United States doesn't have any case quite as close to the line as the UK's infamous Red Bus case, but the Korean War stamp is pretty close: there certainly are ways in which even traditionally-created 'art' can be so derivative that it is infringement, even if the processes used to make the piece would otherwise allow copyright.

But these standards are incredibly tight. I like to use Rafman and similar 'found/outsider' art in furry contexts, simply because their 'transformative' nature is often limited to filing off signatures, but for a mainstream one, this Warhol v. Goldsmith case may fall one way or the other... and you'd have to overtrain the everliving hell out of a diffuser to get something that narrowly replicative. Indeed, the same complaints (in addition to the juvenile nature of the 'joke') would apply to any diffuser that produced the same result as the art in Leibovitz v. Paramount (cw: artistic nudity, mpreg), which is a clearly settled case. Or for a more boring example, see Cariou v. Prince. There's a pretty wide variety of contexts where lifting and directly copying an original work, even without commenting on that original work, is still considered transformative use, and while AI art can fall short of that, it's not a unique tool in doing so (compare: literally any color filter), and even the most moderately useful models will not favor doing so normally.

((And this entire thing is statutory interpretation: Congress could theoretically change the whole approach overnight for better or wo- ha, sorry, can't keep a straight face; any changes would be a clusterfuck even if best-intended, and more likely it'd get written by Disney.))

I do think there are novel technical and social problems, though: img2img or overtuned models can launder art theft in ways that current perceptual-hash-and-search methods do not detect but that clearly would not meet even the low standards of Cariou; we might want to consider AI-gen stuff more inherently economic than traditional 'inspiration'; and ML spam is a novel danger to artist communication and coordination.

These questions do not have obvious answers to me and I understand why no one wants to be the first to find out!

Yeah, it's perfectly reasonable that a company wanting to make AI art tools doesn't want to stick their foot into that bear trap; it'll be a huge resource drain away from their core mission, even in the optimistic case that they'd win every matter.

The trouble's that they don't really have a choice of avoiding the question; they've just decided to let someone answer it for them.

  1. Find an artist whose style you like

  2. Train an ~~AI art generator~~ low paid third world immigrant on that artist's works

  3. Produce new works in that same style, whose copyright you own but the original author doesn't

It's just labor saving. The fundamental question, to me, is pretty much the old piracy question. It's not like we figured out how to reliably monetize infinitely copyable art in the past and suddenly AI interrupted a stable equilibrium. We're on the eve of the collapse of the streaming model, which was reforged from the cable bundling model, which was reforged from the... We're not going to find an answer to this question that doesn't have dozens of problems at least as bad as this one, and I'm still going to go on some torrent site and download whatever I want anyway.

If I generate photo-realistic CP of a child who does not actually exist, is that a crime?

In Canada, yes, and they don't even have to be photo-realistic. They don't even have to be photos. Text would also be illegal.

Wait, so if someone loaded this page in Canada they may be in legal trouble?

An underaged human copulates with an old adult human

163.1 (1) In this section, child pornography means

...

(c) any written material whose dominant characteristic is the description, for a sexual purpose, of sexual activity with a person under the age of eighteen years that would be an offence under this Act;

...

Making child pornography

(2) Every person who makes, prints, publishes or possesses for the purpose of publication any child pornography is guilty of an indictable offence and liable to imprisonment for a term of not more than 14 years and to a minimum punishment of imprisonment for a term of one year.

Distribution, etc. of child pornography

(3) Every person who transmits, makes available, distributes, sells, advertises, imports, exports or possesses for the purpose of transmission, making available, distribution, sale, advertising or exportation any child pornography is guilty of an indictable offence and liable to imprisonment for a term of not more than 14 years and to a minimum punishment of imprisonment for a term of one year.

Possession of child pornography

(4) Every person who possesses any child pornography is guilty of

(a) an indictable offence and is liable to imprisonment for a term of not more than 10 years and to a minimum punishment of imprisonment for a term of one year; or

(b) an offence punishable on summary conviction and is liable to imprisonment for a term of not more than two years less a day and to a minimum punishment of imprisonment for a term of six months.

Accessing child pornography

(4.1) Every person who accesses any child pornography is guilty of

(a) an indictable offence and is liable to imprisonment for a term of not more than 10 years and to a minimum punishment of imprisonment for a term of one year; or

(b) an offence punishable on summary conviction and is liable to imprisonment for a term of not more than two years less a day and to a minimum punishment of imprisonment for a term of six months.

Interpretation

(4.2) For the purposes of subsection (4.1), a person accesses child pornography who knowingly causes child pornography to be viewed by, or transmitted to, himself or herself.

https://laws-lois.justice.gc.ca/eng/acts/c-46/section-163.1.html

It looks like it's possible. I'm not a lawyer though.

The offence would be under subsection (4.1), but I think subsection (4.2) means that someone who didn't know that line was there would not be guilty of an offence under subsection (4.1). Though maybe he would be upon loading the page a second time. You could also argue that it isn't the page's dominant characteristic. It would seem absurd for someone to be so convicted, so I would be surprised if there isn't some reason this wouldn't be considered an offence.

I'm somewhat skeptical that the only pressures being applied are the public ones.

Who do you think is applying non-public pressure, and how?

By definition, I don't know, may never know, and may never even be able to confirm that what I suspect didn't happen. For some patterns I've seen elsewhere...

At the most likely and least objectionable side, I'd be very surprised if a variety of internet safety guardians have not been sending parades of horribles and warnings about how machine learning could undermine all of their good work and result in horrible abuse and probably make puppies cry, of varying levels of accuracy or honesty. Above that, slightly, I'd be only slightly surprised if congressional discussions weren't also happening in the background, starting at the 'my staff would like to hear how this works and you better have a good answer' to the 'I would invite your staff before X event occurs, and thankfully no subpoenas will be issued'.

At the intermediate, most of these ML training groups are dependent on datacenter resources and other materials which they don't actually own. This could range from 'pay us the full rates that no one pays in bulk' to 'do you want us looking at your data buckets' to completely being booted. And the datacenter resources in turn could be getting calls or letters. So on for banks, and the whole 'build your own' stack.

At the less-likely and more-objectionable end, you start to have someone in or adjacent to law enforcement (uh, including those safety guardians) sending the equivalent of 'I don't want a messy court case, and you don't want a messy court case, and your business and everyone you employed don't want a messy court case, so how about we have a meeting of minds?' Or the 'we've got a bill in planning with your businesses name on it'.

EDIT: and there's weirder stuff. SoFurry changed its policy on ageplay and adjacent written material recently, and one of the motivations involved threats related to the site owner's international business travel. Or gelbooru and google pressure.

Again, I don't know that any of this is happening, nor would there be a way to prove it isn't. So I don't really want to poke too much at it. But Defense Distributed is (and remains) instructive.

Hang on, I'm confused, haven't you officially said you're in favor of censoring problematic art that people draw? Like, several times you've brought up Problematic Furry Artists needing to be forced to stop drawing problematic things. Surely this pressure is no different than furaffinity banning things.

You've talked about several artists by name. I believe zaush(?) was one, but if you really want to make me dig I can find it for you. Just thought you'd be willing to come out and say it because you were so forthright about it before.

For Adam Wan/Zaush specifically, my complaint about his content was that he'd posted stuff on a (few) sites that prohibited that content, while not tagging it with any of the many widely-recognized terms used by people who really strongly objected to seeing that content; that he denied it was anywhere close, while being even less believable than the typical 'she's really a thousand-year-old dragon'; and that he mostly got away with it because of his social connections and popularity. Which, to be fair, he's since gotten a lot better about, albeit as much because no one was buying it after some tweet oopsies.

I've generally been vague on the specifics for this matter outside of PM because the specifics aren't as interesting as the more general problem of how rules squish, but I do think it a useful case because it's one where most people would expect social and legal rules to be much stricter.

I recognize that tagging has some coercive nature to it, but I don't think it's on the same scale as... almost anything else, and it is a pretty important social norm for the fandom. I'll admit I'm tempted to make an unprincipled exception for the specific content for that case because I dislike it to an extent I do few if any other kinks, but it remains useful even for matters like m/m, m/f, f/f or kinks that I do like.

I'm not sure what other artists. The only other person I can remember mentioning in that sort of context is (the author) KyellGold, but then only to contrast with the largely positive coverage that 'mainstream' indie works like Blue Is The Warmest Colour receive (and his is a far more marginal case than that work). And... uh, I found that off-putting enough to skip over Aquifiers, but I've recommended a number of KyellGold's other works.

But I may be forgetting other stuff.

At the broader object level, there probably was (and is) some underlying pressure campaign behind FurAffinity banning the stuff to start with, especially given SoFurry's recorded legal pressures and the Google pressures applied against Gelbooru (and probably e621?), and I've not posted on it the way I have for AI art restrictions here (albeit those involve much more than one site) or even for smaller examples like the short VioletBlue delisting. Some of the reason's the above unprincipled exception, I'll admit, but some of it's that there was not (to my knowledge) anything as glaring and public as the Eshoo letter, and some's just that FurAffinity in specific made the change before the Culture War Roundup existed, and either predated or came very early in SSC-reddit's life.

I don't think places focused on the stuff (or just widely permissive for it) should be banned, could be banned, or should suffer severe coercive or economic pressures; art isn't life. I've openly praised ArchiveOfOurOwn for resisting censorship, for example. And at a pragmatic level, as much as I dislike this class of content, it does seem better that outlets exist and are well-demarcated, both for the trivial benefit of letting people not see it, and for the more serious and important goal of keeping people focused on it in a sphere that can work to protect minors from adults rather than 'protect' people from content (contrast eg Discord, where official bans also unintentionally make it hard to block predators or their potential victims, or... everything going on with Twitter's old safety policies).