This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes -
The lead singer of one of my favorite bands recently caught a wave of AI accusations, and he responded with a somewhat-pissed Substack post. He doesn't often step into culture war stuff, but this was close enough, I think:
and goes on to say that fighting AI art in this way is fruitless:
I regret that the culture war has started poking random people in new ways over the last couple of years, and I can't help but cynically laugh at it. Not to mention how short-sighted the backlash is. In that post, the lead singer details how much of a pain it is to do graphic design for music, videos, and other art, and how much he hates it. Imagine if you could get a machine to do it. It also lifts up people who don't have money and lets them make art the way people with money already can. Look at this VEO 3 shitpost. Genuinely funny, and the production value would be insane if it were real, for a joke that probably wouldn't have been worth it. But now, someone with some Gemini credits can make it. This increases the number of people making things.
I'm not sure I have any real thesis for this post, but I haven't been very good at directing discussion for my own posts, so, reply to this anecdote in any way you see fit. I thought it was interesting, and a little sad.
Yes, but artists are a holy protected class, and anything that takes their jobs away is evil. Never mind that it has been known for centuries that art is an extremely bad way to make a living, and that cameras already caused a crisis in the art world that every sophomore art student has a postmodern fit about.
My view is that opposing AI art is anti-humanist. For every artist who can produce something anyone wants to look at, there are perhaps 1000x as many people who see something in their mind's eye but lack the skill to render it. That thing, maybe even that stunningly beautiful thing, never sees the light of day and dies with them.
Rest assured, most people have nothing beautiful to render or interesting to write in the first place, so it's not like we have some insane well of cognitive surplus waiting to be tapped into. Even with amazing AI tools most people will never put out anything interesting. But the true intellects and creatives only have time to specialize in so few things right now and I look forward to any leverage AI tools give them.
EDIT: lol, I posted that VEO3 video to my Facebook timeline saying something about how even kings could not commission shitposts like this and two different libtards unfriended me over it because of how wrong-side-of-history it is to support this technology that puts artists out of business. Of all of the gray tribe stuff I post that gets me a bunch of unhinged leftist reactions, praising AI stuff was The Line.
This seems like a fundamental misunderstanding of how creation works, though. Good ideas arise from craft skill: innate talent plus long hours of practice honing your perceptive faculties and your understanding of the medium.
Feel-good movies love ego-boosting scenes about the regular ol' Joe Schmoe whose genius idea puts all those snooty artists to shame. But in reality, there are no people who've only ever bothered to cook instant ramen, who also have genius ideas for a creative dish, and there are no one-finger piano plinkers who also have great ideas for an amazing symphony. Tyros will have either painfully conventional ideas that they don't realize are copies, or completely random ideas that add nothing. At most, in some rare cases, they might have natural inclination plus the germs of some concept that needs to be worked out through long years of development; so having that natural process short-circuited through easy access to AI slop will result in fewer good ideas ever seeing the light of day.
I guess the one exception might be niche porn as mentioned downthread, where each man knows best the precise configuration of tentacles, chains and peanut butter that will get him off. But that's less creativity than it is targeted stimulation.
I don't think this is a fundamental law of the universe, though. It's a result of the fact that a good idea is only good if it can be implemented in reality, and as such, the people familiar with and talented at the craft of implementing ideas in reality - i.e., in the case of images, skilled illustrators with lots of experience manually illustrating them - are the ones able to come up with good ideas.
But as long as it results in a good image, the idea behind it is a "good idea," regardless of who came up with the idea or how. Now, people can translate ideas into images without that deep understanding of the medium*, with that translation process bypassing all/most of the skills and techniques that were traditionally required. And because of that bypassing, what constitutes a "good idea" no longer has the same limitations and requirements of being based on one's understanding of those traditional skills and techniques.
* Some may argue that diffusion models are a medium unto themselves with their own set of skills to develop and practice, akin to how photography and painting both generate 2D images but are considered different mediums. I'm ignoring this point for now.
But this account leaves out the equally critical perceptive and analytic skills that are normally built side-by-side with physical skills as an artist practices their craft. The bare act of clicking a shutter is the same for me and for a pro photographer, but the pro will take an immeasurably better picture because they have a trained eye to compose it. I suspect they'll also take a better picture because they understand, from long experience, the strengths and weaknesses of that type of image versus a painting or architecture, and can choose their subjects accordingly.
I think part of the problem is using the same word, "idea," to describe both what goes through my casual-consumer mind and what goes through the mind of a trained artist when we think of a new image. The two are strictly different in informational content, but also in structure, as anyone can see for themselves if they scoot out from their Dunning-Kruger zone to consider an area of craft or creation where they are experts. Coding or software engineering are probably the most familiar arts for the Motte; when we're talking really elegant and well-built programs, is your uncle's "y'know I always thought we should have like an app for identifying hot dogs" the same as a technical concept that occurs to a high-level professional with years of practice? Is there anything shared between the two "ideas", beyond the inchoate consumer instinct "I want a thing to make me feel _____"?
I think a lot of speculation about the value of AI art relies on the stickiness of cultural premises from the pre-AI age, so when Joe says to ChatGPT "paint me, uh, a pretty elephant with an orange hat in the style of Monet" and gets some random pixels farted out using patterns from 10,000 human-painted images, we instinctively respond to the patterns with the delight we've learned to afford skilled human work. It may seem that we get that delight from Joe's "idea," but what we are actually enjoying is those other artists' artfully-constructed patterns. I don't think we can fairly expect that 40 years hence; I suspect people will just paw indifferently past most images the way we walk past tree leaves today, with the exception of any pics that happen to raise a boner.
Artistic skill-building requires a medium where you can exercise agency, though, because the agency or artfulness is fundamentally the part we admire. For example, nobody looks at a Jackson Pollock painting and feels delight over how this black droplet aligns with that other black droplet, even though subtle visual details at that level are matter for praise in other painters. Things we know to be random or unintentional are generally not interesting, so instead fans enjoy Pollock's expressive choice of color or line or concept, areas where he clearly did exercise artful choice.
With AI image generation, there are so many levels of randomness and frustrated choice that it's hard to imagine how a user could work for years toward progressively greater mastery. Don't most commercial models actively disrupt direct user control, e.g. by adding a system prompt you can't see and running even the words of your prompt through hidden intermediate LLM revisions before they reach the image generator?