Culture War Roundup for the week of January 16, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Some more heating up in the AI image generation culture wars, with stock image company Getty Images suing Stability AI over alleged copyright violations. Here's Getty Images' full press release:

This week Getty Images commenced legal proceedings in the High Court of Justice in London against Stability AI claiming Stability AI infringed intellectual property rights including copyright in content owned or represented by Getty Images. It is Getty Images’ position that Stability AI unlawfully copied and processed millions of images protected by copyright and the associated metadata owned or represented by Getty Images absent a license to benefit Stability AI’s commercial interests and to the detriment of the content creators.

Getty Images believes artificial intelligence has the potential to stimulate creative endeavors. Accordingly, Getty Images provided licenses to leading technology innovators for purposes related to training artificial intelligence systems in a manner that respects personal and intellectual property rights. Stability AI did not seek any such license from Getty Images and instead, we believe, chose to ignore viable licensing options and long‑standing legal protections in pursuit of their stand‑alone commercial interests.

This follows a separate class action lawsuit filed in California by three artists against multiple image generation AI companies, including Stability AI, Midjourney, and DeviantArt (an art-sharing site, but one that seems to be building its own image creation model). According to Artnews, "The plaintiffs claim that these companies have infringed on 17 U.S. Code § 106, exclusive rights in copyrighted works, the Digital Millennium Copyright Act, and are in violation of the Unfair Competition law." It seems to me that these two lawsuits are complaining about basically the same thing.

IANAL, and I have little idea of how the courts are likely to rule on this, especially English courts versus American ones. I know there are precedents for data scraping being legal, but they are highly context-dependent; e.g. the Google Books ruling was contingent on the product not being a meaningful competitor to the books being scanned, which is a harder argument to make for an AI image generator competing with a stock image service. In my subjective opinion, anything published on the public internet is fair game for AI training, since others learning from viewing your work is one of the things you necessarily accept when you publish your work for public view on the internet. This includes watermarked sample versions of proprietary images that one could buy. However, there's a strong argument for the other side: that a human using an AI to offload the process of learning from viewing images is qualitatively different from a human learning directly, such that the existing social contract of publishing for public consumption doesn't account for it and must be amended with an exception for AI training.

Over the past half year or so, I'm guessing AI image generation is second only to ChatGPT in mainstream attention directed toward AI-related stuff - maybe third, after self-driving cars - so it's unsurprising to me that a culture war has formed around it. But having paid attention to some AI image generation-related subreddits, I've noticed that the lines still don't really match existing culture war lines.

There are signs of the left coalescing against AI image generation, with much of the pushback coming from illustrators on the left, such as the comic artist Sarah C. Andersen, one of the three plaintiffs in that class action lawsuit, along with a broadly leftist desire to protect the jobs of lowly paid illustrators by preventing competition. But that's muddled by the fact that, on Reddit, most people are on the left to begin with; the folks who are fine with AI image generation tools (by which I mean the current models trained on publicly available but sometimes copyrighted images) are also heavily on the left, and there are leftist arguments in favor of the tech for opening up high-quality image generation to people with disabilities like aphantasia. Gun to my head, I would guess this trend will continue until, within two years, it's basically considered Nazism to use "unethically trained AI" to create images - but my confidence in that guess is close to nil.

From a practical perspective, no legislation can stop people from continuing to use the models that are already built, but depending on the results of these lawsuits, we could see further development in this field slow down quite a bit. I imagine restrictions on training data can and will be worked around, delaying the technology by only a few years. That would leave what I see as the true complaint from stock image websites and illustrators - unfair competition - unaddressed, so I expect this culture war to remain fairly heated for the foreseeable future.

Show me where, in the Stability AI software, the Getty photos are saved. Show me how to get one of the Getty images out of Stability AI.

You can't.
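To make that concrete, here's a rough back-of-the-envelope calculation. The figures are approximate public numbers, not anything from the lawsuit: Stable Diffusion v1 has on the order of a billion parameters (roughly 2 GB at 16-bit precision) and was trained on a LAION subset of roughly two billion image-text pairs.

```python
# Rough sketch: could the model weights even store the training images?
# Assumed approximate figures (not from the lawsuit):
weights_bytes = 2 * 10**9      # ~2 GB of model weights
training_images = 2 * 10**9    # ~2 billion training images

# Model capacity available per training image, if the weights were
# treated as a storage archive for the dataset
bytes_per_image = weights_bytes / training_images
print(bytes_per_image)  # ~1 byte per image
```

Even a heavily compressed thumbnail is tens of kilobytes, so at roughly one byte of capacity per training image, the weights cannot be a compressed archive of the dataset in any ordinary sense - whatever the model retains, it isn't the photos themselves.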

Not out of place when discussing court cases, mind you, but when I see these strictly legalistic arguments used in the context of unprecedented, transformative tools, they seem lacking. Surely the circumstances call for more than pointing to such-and-such paragraph? Then again, from my shallow understanding of how society handled previous such changes, that's a vain hope; the losers will simply have to cope.

This is where the legislator is supposed to actually be of use to people.

He's supposed to get all the interested parties in a room, hash out some compromise that neither completely destroys norms nor completely stifles innovation, and lay down new norms so the people being made redundant can merrily fade into irrelevance without destitution.

But that only works if you have competent elites, so I guess legal fights it will be.