
Culture War Roundup for the week of June 5, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Brief roundup on AI image generation news.

First up: Is AI Killing the Stock Industry? A Data Perspective.

Today, a new perceived threat is on the rise: AI-generated imagery. And again, stock producers are asking themselves: Is the stock industry going to die? Should I quit photography or stock altogether? Or should I go all-in on producing AI images? Any such business decision should always be driven by data. Stock producers are not in the art industry; their goal is to satisfy market demands for visual media. In this blog post, we will attempt to answer some of these questions from a data perspective.

Since we're approaching the one-year anniversary of high-quality AI image generation tools being made available to the public, I don't think it's too early to start looking at the market impact of AI art, even though adoption takes time and it will probably be at least a few years before we get a clear picture of the long-term effects. Intuitively, many have expected stock photography to be one of the sectors of the art industry most vulnerable to replacement by AI, so if anyone is feeling the effects, it should be the stock image companies and the photographers and illustrators who sell their content through them.

The linked post proceeds by looking at a breakdown of contributor revenue generated by the main stock photo marketplaces. Overall, it shows minimal impact on stock photo sales over the last nine months: revenue per download is steady at iStockPhoto, somewhat down at Shutterstock, and sharply up at Adobe Stock. Adobe's windfall could plausibly be explained by its pro-AI stance, although I don't think the data in this post directly supports that conclusion. Only 13% of users surveyed for the writeup have attempted to sell AI-generated imagery, and AI images are performing slightly worse than non-AI images on the market:

One of the metrics we show for collections in Stock Performer is “STR”, short for “sell-through rate”. This is the percentage of files that have had at least one sale. Looking at all Adobe Stock files since 2022, we get a sell-through rate of 13%. Or, put another way, 87% of all Adobe Stock files created since 2022 have not made a sale yet. (The STR for all non-AI images is the same, 13%.) How does this compare to AI-generated images?

The sell-through rate for AI-generated images is somewhat lower, at 9%. So at this point, a larger share of AI-generated images goes unsold on average. To many contributors, this may not be a problem, because producing large volumes of AI images is much more cost-effective than doing so with traditional photography.
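To make the metric concrete: STR is just the fraction of files that have sold at least once. A minimal sketch with made-up per-file sale counts (none of these numbers come from the post):

```python
# Toy illustration of the "sell-through rate" metric quoted above:
# the fraction of files that have had at least one sale.
sales_per_file = [0, 3, 0, 0, 1, 0, 0, 0]  # hypothetical sale counts per file

str_rate = sum(1 for sales in sales_per_file if sales >= 1) / len(sales_per_file)
print(f"Sell-through rate: {str_rate:.0%}")  # -> "Sell-through rate: 25%"
```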

The main takeaway is that there has not yet been a mass exodus away from stock photo companies and towards in-house AI image generation. For those who are bearish on AI's ability to upend the stock photo industry, how much time do you think it will take? What further developments need to occur?

In other news, Forbes did this hit piece on Emad because of... reasons?

In reality, Mostaque has a bachelor’s degree, not a master’s degree from Oxford. The hedge fund’s banner year was followed by one so poor that it shut down months later. The U.N. hasn’t worked with him for years. And while Stable Diffusion was the main reason for his own startup Stability AI’s ascent to prominence, its source code was written by a different group of researchers. “Stability, as far as I know, did not even know about this thing when we created it,” Björn Ommer, the professor who led the research, told Forbes. “They jumped on this wagon only later on.”

I don't really know what the angle is here or why they felt the need to publish this now, so you're free to speculate.

It does mention the lawsuits that Stability is currently facing over its use of training data:

Stability is also facing a pair of lawsuits which accuse it of violating copyright law to train its technology. It filed a motion to dismiss one from a class action of artists on grounds that the artists failed to identify any specific instances of infringement. In response to the other, from Getty Images, it said Delaware — where the suit was filed — lacked jurisdiction and has moved to change the location to Northern California or dismiss the case outright. Both motions are pending court review. Bishara declined to comment on both suits.

These will be important to watch going forward, as they could have implications for other types of AI, including LLMs.

In other news, Forbes did this hit piece on Emad because of... reasons?

Paranoid conspiracy time: there's a large push in the ML world to limit the access that the public has to powerful models. Much of this is couched in the language of "AI safety", but the term tends to be used less in the Yudkowskian "you've been transmogrified into a paperclip!" sense and more in the "it is unsafe if your model says anything that would make your modal San Franciscan feel icky" sense [1]. Because we don't want people using AI to output wrongthink or insufficiently diverse generations [2], we must have strong gatekeepers preventing tous pollous from using these models to engage in harmful and/or toxic behavior. Naturally, the journalists at Forbes are cut from the same political cloth as our AI safety guardians. They too recognize the danger that AI-powered hate speech and hate images can pose.

Enter Emad "Prometheus" Mostaque. He gives the plebs access to an image-generation model that lets them spit out all the non-diverse, objectified pin-up bimbos they want. This is the exact fear, finally come to pass! Is it any wonder, then, that the journalists would seek to discredit Mostaque? Failing to do so could mean that his next project, whatever it may be, succeeds and allows an even greater torrent of unsafe content to be spewed onto the net. Given these beliefs, it's only rational to attack the man.


[1] For an example of what "safety" means in practice, check out the old LaMDA paper from Google, in which the model fine-tuned for "safety" no longer says that it is understandable why people would be opposed to same-sex marriage; it instead vocally supports it. Anthropic's RLHF paper likewise has its further-lobotomized model issue a strong denunciation of plastic straws. These might not seem like much, but it's clear which side these models are playing for. Additionally, note that "safety" is used as an explicit rationale for not releasing models; OpenAI says as much in its GPT-4 paper.

[2] Recall that DALL-E 2 was found to be modifying users' prompts in order to add characters of specific races to the image outputs. The original post in that reddit thread was, of course, deleted, but I remember the evidence being pretty compelling (Gwern ended up weighing in at one point).
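For anyone unfamiliar with the mechanism, the allegation was that the service silently appended demographic descriptors to prompts mentioning people before the prompt reached the model. A hypothetical sketch of that kind of rewriting; the trigger words and descriptors here are invented for illustration and are not DALL-E's actual code or word lists:

```python
import random

# Hypothetical illustration of server-side prompt augmentation of the kind
# alleged in the DALL-E 2 reddit thread. The trigger words and descriptors
# below are invented for demonstration; they are not from any real system.
PERSON_WORDS = {"person", "doctor", "ceo", "firefighter", "teacher"}
DESCRIPTORS = ["black", "asian", "hispanic", "female"]

def augment_prompt(prompt: str) -> str:
    """Append a random demographic descriptor if the prompt mentions a person."""
    tokens = {token.strip(".,!?").lower() for token in prompt.split()}
    if tokens & PERSON_WORDS:
        return f"{prompt}, {random.choice(DESCRIPTORS)}"
    return prompt

print(augment_prompt("a photo of a doctor smiling"))
# e.g. "a photo of a doctor smiling, female"
```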