Culture War Roundup for the week of October 24, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

@CeePlusPlusCanFightMe

Shutterstock will start selling AI-generated stock imagery with help from OpenAI

Today, stock image giant Shutterstock has announced an extended partnership with OpenAI, which will see the AI lab’s text-to-image model DALL-E 2 directly integrated into Shutterstock “in the coming months.” In addition, Shutterstock is launching a “Contributor Fund” that will reimburse creators when the company sells work to train text-to-image AI models. This follows widespread criticism from artists whose output has been scraped from the web without their consent to create these systems. Notably, Shutterstock is also banning the sale of AI-generated art on its site that is not made using its DALL-E integration.

This strikes me as fantastically stupid. Why would I buy AI-generated imagery from Shutterstock when I could just make it myself? In the near future, people who don't have high-end PCs won't even need to pay Stability or Midjourney for a subscription. Getting the open source version of SD to run smoothly on your phone is a mere engineering problem that will eventually be solved.
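For a sense of how low the bar already is on a desktop GPU, here's a minimal sketch using the open-source Hugging Face diffusers library with the publicly released Stable Diffusion weights; the model ID, prompt, and hardware assumptions are my own illustrative choices, not anything from the article:

```python
# Minimal local text-to-image sketch with open-source Stable Diffusion via
# Hugging Face diffusers. Illustrative only: assumes a CUDA GPU with enough
# VRAM and that the model weights can be downloaded (or are already cached).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("overhead photo of a laptop and coffee cup on a wooden desk").images[0]
image.save("diy_stock_photo.png")
```

Once the weights are cached locally, that runs entirely offline, which is the sense in which the phone version is "just" an engineering problem.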

Maybe they just understand this market better than me? Never underestimate just how little work people are willing to put into things. Even playing around with prompts and inpainting for a few hours may be too much for most people, when they could just hand over $10 for a pretty picture on Shutterstock instead.

The "Contributor Fund" also makes me slightly more bearish on the prospect of there being any serious legal challenges to AI art. If there was any sector of the art market that I thought would have been most eager to launch a legal challenge, it would have been the stock photo industry. They seem like they're in the most obvious danger of being replaced. Undoubtedly, copyrighted Disney and Nintendo art was used to train the models, and those companies are notoriously protective of their IP, but they would also like to use the technology themselves and replace workers with automation if they can, so, they have conflicting incentives.

According to the article though, Shutterstock was already working with OpenAI last year to help train DALL-E, so apparently they made the calculation a while back to embrace AI rather than fight it. The "Contributor Fund" is pretty much a white flag. But maybe Getty will feel differently.

Edit to clarify a bit: What this seems to come down to is that they're adding a "DALL-E plugin" to their website. Why I would use Shutterstock as a middleman for DALL-E instead of just using DALL-E myself, I'm not sure. Their announcement makes it clear that they're not accepting AI submissions from sources besides their own plugin, due to outstanding legal concerns:

In this spirit, we will not accept content generated by AI to be directly uploaded and sold by contributors in our marketplace because its authorship cannot be attributed to an individual person consistent with the original copyright ownership required to license rights. Please see our latest guidelines here. When the work of many contributed to the creation of a single piece of AI-generated content, we want to ensure that the many are protected and compensated, not just the individual that generated the content.
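For comparison, the "just use DALL-E myself" route I mentioned above looks roughly like this with OpenAI's Python client; this assumes you already have API access to image generation (which was still being rolled out around this time), and the key placeholder and prompt are purely illustrative:

```python
# Rough sketch of calling DALL-E directly via the openai Python package
# (v0.x-era Image endpoint). Assumes an account with image-generation
# access; the key placeholder and prompt are illustrative.
import openai

openai.api_key = "sk-..."  # your API key

response = openai.Image.create(
    prompt="overhead photo of a laptop and coffee cup on a wooden desk",
    n=1,
    size="1024x1024",
)
print(response["data"][0]["url"])  # hosted URL of the generated image
```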

There's been some talk here about corporations using AI art and then simply lying about its origin in order to retain copyright. If I use Megacorp X's art without their permission, they claim a copyright violation, and I respond that they made it with AI so I can do whatever I want with it, where would the burden of proof lie?

Why would I buy AI-generated imagery from Shutterstock when I could just make it myself?

Why does McDonald's exist when you can make a hamburger yourself? People pay for convenience.

Yeah, but it's worth considering the inconvenience of having to track which rights you've purchased to which media, especially if you're a small business using a bunch of them. AI art lacks this issue, since you know nobody has the rights to the image because it's unique.

And the people using stock images are, for the most part, running small businesses; they're not consumers whom we might expect to be lazy.

AI art lacks this issue, since you know nobody has the rights to the image because it's unique.

Not necessarily. AI art often involves seed images, which may themselves be copyrighted. Also, the person who produced the AI art may still try to claim copyright.
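To make the seed-image point concrete: img2img-style generation starts from an existing picture and steers the output toward it, so whatever rights attach to that starting picture are arguably in play. A minimal sketch with the open-source diffusers img2img pipeline; the file names, model ID, and strength value are my own illustrative choices:

```python
# Sketch of img2img generation: the output is conditioned on an existing
# "seed" image, which may itself be copyrighted. Illustrative only; assumes
# the Hugging Face diffusers library and a CUDA GPU.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

seed_image = Image.open("someone_elses_photo.jpg").convert("RGB").resize((512, 512))

result = pipe(
    prompt="the same scene as an oil painting",
    image=seed_image,  # called init_image in older diffusers releases
    strength=0.6,      # lower values stay closer to the seed image
).images[0]
result.save("derived_work.png")
```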

Ah, an unstated but crucial assumption in the post was that you are personally the one who created the image. It's true that AI images grabbed off a stock website are basically similar to regular stock images in all relevant respects.

There is something of a FUD campaign going on around AI art property rights, based on the idea that the model and the works it produces are derivative of the works it was trained on. You've probably seen some comments reasoning along those lines in earlier threads here. Of course, with IP rights, buying a right from someone who may not themselves hold that right does not fully protect the purchaser, but that aspect is less well advertised, so it may still seem worthwhile to purchase a license from a known entity.

There is something of a FUD campaign going on around AI art property rights, based on the idea that the model and the works it produces are derivative of the works it was trained on.

IANAL, but I don't see that as FUD, I see it as an open legal question.

Of course, judges can decide whatever they like. But there's no way they're going to side with the artists and destroy AI tech; that would just be yielding to China.

Ignoring practicalities, it just doesn't make any sense. Why should you be barred from training an AI on copyrighted works while you're still allowed to train your own biological neural network on them?

Have you tried to reproduce a copyrighted photo using only the latent representations stored in your biological network?

While biological networks and computer models have some similarities in the abstract, in practice there are crucial differences.

It does seem fair not to want to be the legal test case for AI art.

Seems fair. I do think there's also the problem that it's not yet clear how suing someone over AI art would work. For regular art now, if you use an image you don't have rights to, the way it shakes out is that the person who originally made the image sues you for damages (and they can prove they made it because they presumably have some timestamped evidence of it).

But who would be responsible for noticing and suing somebody who made an image with an AI trained on copyrighted images? How would they know it was AI generated? Sure, subpoena a chain of custody for the image, fine. How are you going to get a judge to agree with you that this piece of art looks like it was generated on copyrighted images if the image itself does not contain those images? Gotta get the judge onboard to get the subpoena.