This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
One of my favorite bands just caught a bunch of AI accusations, I guess, and the lead singer wrote a somewhat-pissed Substack post about it. He doesn't often step into culture war stuff, but this was close enough, I think:
He goes on to say that fighting AI art in this way is fruitless:
I regret that the culture war has found a new way to poke random people these last couple of years, and I can't help but laugh cynically at it. Not to mention how short-sighted the reaction is. In that post, the lead singer details how much of a pain it is to do graphic design for music, videos, and other art, and how much he hates it. Imagine if you could get a machine to do it. It also lifts up people who don't have money and lets them make art the way people with money do. Look at this VEO 3 shitpost. Genuinely funny, and the production value would be insane if it were real, for a joke that probably wouldn't have been worth it. But now, someone with some Gemini credits can make it. This increases the number of people making things.
I'm not sure I have any real thesis for this post, but I haven't been very good at directing discussion for my own posts, so, reply to this anecdote in any way you see fit. I thought it was interesting, and a little sad.
And here's the spot that has bugged me for a while now: how much AI/digital assistance really crosses the arbitrary line you've drawn?
Can you use AI to generate the original concept and then spend a couple hours touching up from there, so the final result is just as much your effort as anything?
Can you sketch out the basic details and then feed it to the AI and basically have it 'paint by numbers' to complete the project?
Can you have the AI spit out 50 separate images, and YOU spend the time cropping, superimposing, rotating, adjusting and compositing them all together for the end result?
Make a rule on what counts as 'unacceptable' AI art and the tech can run RIGHT up to that line, precise to the pixel... then stick a single tiny digital toe over it, daring you to complain.
That is what makes the tech amazing/dangerous: whatever rules you make for it, the AI itself can be used to circumvent said rules.
Most furry spaces have adopted pretty strict rules about aigen, sometimes aligned with the points you've highlighted and sometimes not, and the end result is pretty goofy.
E621, for example, prohibits AI except for "backgrounds (treated like using a photo as a background, quality rules apply); for artwork that references, but does not directly use, AI generated content; and for audio in video posts such as WebM." The moderators will explain, when pinged, exactly how a particular piece falls, and from my understanding they're pretty clear and direct about it. I don't know of any sfw examples, but leeto's 4930019 (cw: M/M and M/F) is an example where AI had been used to create pose references, but the final file had never been touched by any AIgen program; moderators said it was at the very border but still acceptable (though this scared the artist off enough to move to conventional digital pose generators). The rules are workable!
But they end up in a situation where half of the pixels in a particular art piece are AIgen and it's okay, including a lot of stuff that's setting the stage, and then another piece where AIgen was only used to add some shadow or shading and that's unacceptable. More critically, serious enforcement is dependent on self-reporting. Rick Griffin got a piece banned from FurAffinity (and, presumably, would not be allowed on e621) for some tree renders and shading that I don't think anyone would have noticed had he not spelled it out. Obvious errors in logic or consistency can sometimes point to AIgen when an artist doesn't disclose it (or show other faults that trigger other quality rules), and there's a certain look to some of the most common AIgen, but you can (and I have) put out hundreds of pieces in an hour with wildly different styles and pretty good image-quality consistency.
((And, on the flip side, a bad actor can actively use AI for what even AI proponents would still consider stealing. Img2img with someone else's art can involve far less actual effort than direct reference or even hand-tracing on a lightbox, but can be different enough to bypass a lot of conventional perceptual-hash (phash) checks, or even eyeball tests for 'novelty' and 'uniqueness'. If a small-name account starts doing it, it's hard to catch and harder still to persuasively demonstrate.))
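For the curious, a conventional phash check is only a few lines. Here's a minimal sketch using Pillow and the third-party imagehash library; the filenames and the distance cutoff are illustrative, not any site's actual policy, and the point is just why a modest img2img pass can slip past this kind of check:

```python
# A minimal sketch of a conventional perceptual-hash (phash) duplicate check,
# using Pillow and the third-party `imagehash` library. Filenames and the
# cutoff are illustrative, not any site's actual policy.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("original_art.png"))
candidate = imagehash.phash(Image.open("suspect_upload.png"))

# Subtracting two hashes gives their Hamming distance. Crops, re-encodes,
# and light edits stay close to 0; an img2img pass can change enough of the
# image to push the distance past the cutoff while the composition is still
# recognizably lifted from someone else's work.
distance = original - candidate
CUTOFF = 10  # illustrative
print("likely duplicate" if distance <= CUTOFF else "passes the check",
      f"(distance {distance})")
```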
But you can also just kinda get counterintuitive and inconsistent results, and that's just how things are. I'm not even sure some arbitrary rules would be bad -- an art gallery that allowed each account some limited number of AIgen uploads per day (and restricted alts) could avoid a lot of the spam and quality-control problems that places which haven't banned the stuff often run into. The rules being made up and the points not mattering is pretty common.
Yep. And this will increasingly be the case.
Generate a few dozen plausibly human-drawn images, release them on a plausible timeline that a human artist could achieve, and there's little anyone could do but speculate.
Maybe there's some solution that involves uploading the raw files from the WIP to a blockchain or something.
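For what it's worth, the minimal version of that idea doesn't strictly need a blockchain. A sketch, assuming only Python's standard library (the filenames are hypothetical): hash each WIP file into a chain so the sequence can't be quietly back-filled after the fact, then publish the final digest somewhere timestamped, a blockchain being one option:

```python
# A minimal sketch of WIP provenance via a hash chain (standard library only).
# The filenames are hypothetical; the point is that each link commits to the
# previous digest, so the sequence of work-in-progress files can't be quietly
# reordered or back-filled after the fact.
import hashlib

def chain_digest(filepaths):
    digest = b""
    for path in filepaths:
        with open(path, "rb") as f:
            # Each step hashes the previous digest together with the next file.
            digest = hashlib.sha256(digest + f.read()).digest()
    return digest.hex()

wip_files = ["sketch_v1.png", "lines_v2.png", "colors_v3.png", "final.png"]
print(chain_digest(wip_files))  # publish this somewhere timestamped
```

Of course, this only proves the files existed in that order by the publication date; it doesn't prove a human made them, which is the harder half of the problem.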
You could go with something akin to copyright law: if substantial creative aspects were done by a human, then the work is copyrightable.
Small retouching of an AI image would not count, but if the "final result is just as much your effort as anything", then your work is likely transformative.
A side note: people have no rights over their AI art, and you can kang it without limit, fully legally.
Don't have time to write a long post on it now, but interestingly enough there's a lot of legal scholarly analysis on the topic suggesting that generative AI by itself probably would be considered transformative use as per the current state of copyright law.
I would be interested to see a ruling on whether or not trained AI models are copyrightable. IMO neither "we threw all the text we could find at this" nor "and then we did a huge best-fit gradient descent" implies much creative input.
I think finding them to not be (like phone books, or typefaces) has some interesting implications.
I thiiink current legal thought is that they are not. That's maybe why we never really saw serious attempts to take down leaked models, such as the leaked Mistral.
My personal takes aside, how is this an arbitrary or ambiguous line? Whether or not an LLM was employed is pretty black and white.
The argument I keep seeing ends up taking this shape over and over:
AntiAI: LLM different from other tool. LLM bad.
ProAI: LLM not bad. LLM no different from other tools!
Whether or not 'LLM bad', it seems obvious that LLM is qualitatively different from other technology (except perhaps slave labor, but that's a tangent not worth exploring). But what I see most from the ProAI side is not a rebuttal of the leap from different to bad, but a rebuttal of bad via a denial of different. Which I think is the weakest argument for a positive AI position.
How would you compare something like content aware fill to inpainting or other AI image techniques?
Content aware fill is part of the Adobe Creative Suite, and therefore "not-LLM-like" to the public. Inpainting is part of Stable Diffusion and other AI models, and is therefore "LLM-like" to the public (diffusion models are practically LLMs, as long as you ignore what those initials stand for).
As far as I can tell, those two are technically identical, but the AntiAI side will treat them differently because one costs 1000x as much as the other.
For another complication, how about the Samsung fake moon tool, which is entirely constrained to the camera?
For the argument over AI, I would not compare based on outcome or level of effort, because I agree those are somewhat gradients. It is a question of the technology used, which has clear and unambiguous answers.
As far as I understand it, content aware fill uses ML, but not Generative AI.
So if one is against AI as a general category, then they can make an argument against CAF. Or if they are specifically against Generative AI, they can make an argument for CAF.
My main point is that Unaltered, Digitally altered, CGI, ML, and GenAI are all scrutable categories, not gradients or judgement calls. The valence you assign to those categories can be a gradient or a judgement call.
But I disagree with the argument that the categories don't discretely exist, or that we can't assign valence to them due to equivalency of outcome.
You’re thinking of generative fill.
Content aware fill is a much simpler neural network that runs locally (even without a GPU) and doesn’t understand scenes as such, just local shapes and patterns. Ironically it (or the similar remove tool) is also often more useful, because it won’t try to generate new objects and isn’t behind Adobe’s ridiculously oversensitive censorship filter.
I mostly agree with the broad point, but on a pedantic note - I think you probably mean "LLM or diffusion model"
Which would make even such trivial things as using Lightroom/Photoshop generative erase to remove wires or small objects "illegal", as they use a diffusion model to inpaint the selected area.
It's a spectrum rather than a binary, of course. Beating the game on hard mode is harder than normal mode which is harder than easy mode. It's a sliding scale, rather than a single defined cutoff point. And artists have been dealing with these questions long before Dall-E/ChatGPT.
Speaking purely about the opinions of visual artists who work with pop culture:
Generally no one had a problem with simply using digital art as a medium, as long as you actually drew it yourself. There were some ultra purists who thought all digital art was suspect and only trad art showed "real skill", but that was very much a minority opinion.
Then when you started to talk about photobashing, things got a little murkier. Photobashing is the technique, very common in commercial art, of taking a set of existing images and mashing them up in Photoshop using a variety of filters/layers/other tools to create something new. The artist may do some amount of "drawing" as is traditionally conceived, or they may do little to none. Very useful for e.g. concept art where you need to churn out a lot of throwaway images quickly. Although everyone recognized that photobashing did require technical skill, it was generally thought to be kind of lame and "soulless", and it clearly showed LESS skill than actually drawing the entire thing yourself from scratch. The term itself was often used derisively, to distinguish inferior mass-produced studio art from the work of skillful independent artisans.
And then of course once you get to actual AI prompting the reaction from artists was just apoplectic, for the many reasons that have been discussed here previously. To be a proompter is even lamer than being a photobasher. I don't think anyone would actually dispute that out of all the methods discussed so far, it requires the least artistic skill, by design. If you want you can just type in a prompt and use the resulting image as-is. Even without any actual traditional drawing, you can still exercise some control over the process through inpainting, through selecting among multiple results from the same prompt, but, yeah. We're basically in "you just asked someone else to draw it instead" territory.
People have been doing this for thousands of years when it comes to e.g. legal matters. They rarely look as clever as they think they do, because people actually are capable of holding nuanced opinions and evaluating things on a case by case basis.
However, this is a cover for an artist's album, not someone claiming to be a graphic artist, and given that artists often downright steal shit for their album covers - this one painting is the cover to more than 60 different metal bands' albums - it's not the perceived lack of effort involved here that has generated the apoplectic reaction. Furthermore, in music circles where obvious sampling is de facto considered par for the course and a valid form of expression (even when it comes close to outright plagiarism in a way that almost all AI art does not), the usage of AI is still hugely frowned upon.
The idea that generative models might be able to Chinese Room their way into producing artistic output seems to existentially disturb and enrage people, and it's quite clear that people are not evaluating this in a nuanced or remotely objective way by making evaluations that the output has been arrived at through low-effort means. People are run by vibes and this is no exception.
Sure but album cover art is already a Lindy anachronism, and this makes sense to be a place of resistance. Neither albums nor covers really exist anymore. It’s more obligatory ritual than anything else and I think someone faking a ritual is more taboo than someone participating lazily.
It’s like the difference between sending a thoughtful thank-you note, signing a card, and having someone else sign the card for you.
Everyone can agree that the first is superior, but the autist mistakes the second and third for being equivalent.
I don't think that's an adequate comparison - in a context where people often straight-up use preexisting art for album covers, it's more like the difference between copying a stock message from the internet for a thank-you card and having someone else compose the message for you. I don't think it requires autism to believe they're both pretty much equivalent.
I can admit that it's not an adequate comparison, but the distinction I'm making is between repurposing existing art (signing a premade card) and outsourcing it to a computer (someone else signing the card for you). I don't think these are directly analogous. I'm not saying they belong in the same category, but the analogy is on the gradient down from personal touch to outsourced sentiment.
I'm not trying to make a generalized defense of lazy album covers. And I fully accept there's an argument-as-soldier going on here, masking a more utilitarian concern rather than an ontological one. Gun to their head, I'm sure a lot of the people criticizing the AI album cover would prefer an interesting AI cover to a lazily repurposed image in a given instance, especially for a two-bit band. But they are arguing for a moat around actually creative album art in general. With the repurposed picture, it can be lazy or unique, but not both.
This is analogous (but not categorically equivalent) to the moat of 'you at least have to sign the Hallmark card yourself'. OBVIOUSLY that's less meaningful than something unique, and closer in practicality to nothing at all. But the idiosyncratic moat of the 'signed card' has social significance that defends against a drift into nothing at all.
I would agree signing a premade card and someone else signing the card for you is not the same, and that the former is preferable. I don't believe this analogy, however, is appropriate for the situation of repurposing existing art vs outsourcing it to a computer.
In the former case, there is more effort involved in signing the card than there is in getting somebody else to sign it for you, and in addition signing a card yourself is indeed more personalised and you have more control over the output. In the latter case involving AI, it's not clear there is more effort invested when one takes preexisting art as opposed to prompt engineering so a generative model can spit out the correct output, and it's also not clear that the person taking preexisting art has exercised more personal control over the output than the AI-user. If anything, it's the opposite since the AI artist has a more fine-tuned set of controls over the output.
Again, the analogy might not be a very good one, but we’re getting hung up on technical comparisons. My analogy was supposed to focus on the social ritual nature of where dividing lines are that focus on discrete moats around the methodology, rather than comparisons of outcome quality.