Culture War Roundup for the week of June 9, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


One of my favorite bands just took a bunch of AI accusations, I guess, and the lead singer wrote a somewhat-pissed Substack post. He doesn't often step into culture war stuff, but this was close enough, I think:

Unfortunately, as soon as we released the other day, people started accusing us of using an AI image. Now, I want to be clear, this is not an AI generated image, and I have the layered design files to prove it, but I get that it has certain features which can easily make someone think it is, particularly the similar-ish smiling faces. And everyone is talking about AI nowadays, and so they’re all primed to think it is AI. Seriously: Fair enough, I’m not blaming anyone. But I’ve seen the design templates, it really isn’t.

He goes on to say that fighting AI art in this way is fruitless:

And so, there is no “solution” to the problem of AI imagery other than the one the Luddites came up with over two centuries ago: smash the machines. Until we can actually smash the machines (literally or semi-literally), the AI will just get better and better until no-one can tell. This day is fast coming. So, I think we should either start figuring out how to smash the machines or accept our fate. There is no middle way. And so, with all due respect for those honorable people who just hate AI and want real art to prevail, calling out artists because you think you can “tell” is just another one of those doomed middle ways.

I regret that the culture war has started poking random people in new ways over the last couple of years, and I can't help but laugh cynically at it. Not to mention how short-sighted the backlash is. In that post, the lead singer details how much of a pain it is to do graphic design for music, and videos, and other art, and how he hates it. Imagine if you could get a machine to do it. Also, it actually lifts up people who do not have money and allows them to make art like the people who have money do. Look at this VEO 3 shitpost. Genuinely funny, and the production value would be insane if it were real, for a joke that probably wouldn't be worth it. But now, someone with some Gemini credits can make it. This increases the number of people making things.

I'm not sure I have any real thesis for this post, but I haven't been very good at directing discussion for my own posts, so, reply to this anecdote in any way you see fit. I thought it was interesting, and a little sad.

I'll caveat that tumblr has picked a 'third way' -- if you can't depend on finding the flaws in the machines or smashing the machines, you can start looking at and promoting artists along with their art. Yes, an AIgenner could theoretically 'put their steps in', having a long history of progressing art skills and process work for a given piece, maybe not even fraudulent at that, but it's not really something almost any of them will do.

((With the advances we're seeing, I'd expect this to go the way of Amish furniture -- great technical skill, often unusual approaches to a work, and usually better when available, but not always available or able to do everything the machines can.))

... though I'm not sure that will matter. People want to make principled stands over copyright or intellectual property, even if they're sometimes a little Janus-faced. But the Luddites cared about their work, and their pay, and not without reason; modern AIgen concerns revolve much more heavily around these matters than around tracing II: trace harder. A thousand galleries and retweets and reblogs do not cash make; as an artist, Attention Is <Not> All You Need. A lot of mainstream artists historically depended, both for cash and for the opportunity to develop their skills, on workaday commercial jobs that are completely separate from the reputation and reliability dynamics that point toward direct sales to their audiences. You can't break the machines for this, you aren't involved in deciding to buy or not, and you can't judge the artist because they might never be named.

Tumblr and a lot of fandom spaces have moved to merch or patreon funding, and that's kinda worked on the edges for the most successful or the most second-job strivery. But I don't think it scales.

Also, it actually lifts up people who do not have money and allows them to make art like the people who have money do. Look at this VEO 3 shitpost. Genuinely funny, and the production value would be insane if it were real, for a joke that probably wouldn't be worth it. But now, someone with some Gemini credits can make it. This increases the number of people making things.

Yes, but artists are a holy protected class and anything that takes their jobs away is evil. Never mind that it has been known for centuries that art is an extremely bad way to make a living, and that cameras already caused a crisis in the art world that every sophomore art student has a postmodern fit about.

My view is that opposing AI art is anti-humanist. For every artist that can produce something anyone wants to look at, you have perhaps 1000x as many people who see something in their mind's eye but don't have the skill to render it. That thing, maybe even that stunningly beautiful thing, never sees the light of day and dies with them.

Rest assured, most people have nothing beautiful to render or interesting to write in the first place, so it's not like we have some insane well of cognitive surplus waiting to be tapped into. Even with amazing AI tools most people will never put out anything interesting. But the true intellects and creatives only have time to specialize in so few things right now and I look forward to any leverage AI tools give them.

EDIT: lol, I posted that VEO3 video to my Facebook timeline saying something about how even kings could not commission shitposts like this and two different libtards unfriended me over it because of how wrong-side-of-history it is to support this technology that puts artists out of business. Of all of the gray tribe stuff I post that gets me a bunch of unhinged leftist reactions, praising AI stuff was The Line.

For every artist that can produce something anyone wants to look at, you have perhaps 1000x as many people who see something in their mind's eye but don't have the skill to render it. That thing, maybe even that stunningly beautiful thing, never sees the light of day and dies with them.

This seems like a fundamental misunderstanding of how creation works, though. Good ideas arise from craft skill: innate talent plus long hours of practice honing your perceptive faculties and understanding of the medium.

Feel-good movies love ego-boosting scenes about the regular ol' Joe Schmoe whose genius idea puts all those snooty artists to shame. But in reality, there are no people who've only ever bothered to cook instant ramen, who also have genius ideas for a creative dish, and there are no one-finger piano plinkers who also have great ideas for an amazing symphony. Tyros will have either painfully conventional ideas that they don't realize are copies, or completely random ideas that add nothing. At most, in some rare cases, they might have natural inclination plus the germs of some concept that needs to be worked out through long years of development; so having that natural process short-circuited through easy access to AI slop will result in fewer good ideas ever seeing the light of day.

I guess the one exception might be niche porn as mentioned downthread, where each man knows best the precise configuration of tentacles, chains and peanut butter that will get him off. But that's less creativity than it is targeted stimulation.

Good ideas arise from craft skill: innate talent plus long hours of practice honing your perceptive faculties and understanding of the medium.

I don't think this is a fundamental law of the universe, though. It's a result of the fact that a good idea is only good if it can be implemented in reality, and as such, the people familiar with and talented at the craft of implementing ideas in reality - i.e., in the case of images, skilled illustrators with lots of experience in manually illustrating images - are the ones able to come up with good ideas.

But as long as it results in a good image, the idea behind it is a "good idea," regardless of who came up with the idea or how. Now, people can translate ideas into images without that deep understanding of the medium*, with that translation process bypassing all/most of the skills and techniques that were traditionally required. And because of that bypassing, what constitutes a "good idea" no longer has the same limitations and requirements of being based on one's understanding of those traditional skills and techniques.

* Some may argue that diffusion models are a medium unto itself with its own set of skills to develop and practice, akin to how photography and painting both generate 2D images but are considered different mediums. I'm ignoring this point for now.

Now, people can translate ideas into images without that deep understanding of the medium*, with that translation process bypassing all/most of the skills and techniques that were traditionally required.

But this account leaves out the equally critical perceptive and analytic skills that are normally built side-by-side with physical skills as an artist practices their craft. The bare act of clicking a shutter is the same for me and for a pro photographer, but the pro will take an immeasurably better picture because they have a trained eye to compose it. I suspect they'll also take a better picture because they understand from long experience what are the strengths and weaknesses of that type of image, versus a painting or architecture, and can better choose their subjects in consequence.

I think part of the problem is using the same word, "idea," to describe both what goes through my casual-consumer mind and what goes through the mind of a trained artist when we think of a new image. The two are strictly different in informational content, but also in structure, as anyone can see for themselves if they scoot out from their Dunning-Kruger zone to consider an area of craft or creation where they are experts. Coding or software engineering are probably the most familiar arts for the Motte; when we're talking really elegant and well-built programs, is your uncle's "y'know I always thought we should have like an app for identifying hot dogs" the same as a technical concept that occurs to a high-level professional with years of practice? Is there anything shared between the two "ideas", beyond the inchoate consumer instinct "I want a thing to make me feel _____"?

I think a lot of speculation about the value of AI art relies on the stickiness of cultural premises from the pre-AI age, so when Joe says to ChatGPT "paint me, uh, a pretty elephant with an orange hat in the style of Monet" and gets some random pixels farted out using patterns from 10,000 human-painted images, we instinctively respond to the patterns with the delight we've learned to afford skilled human work. It may seem that we get that delight from Joe's "idea," but what we are actually enjoying is those other artists' artfully-constructed patterns. I don't think we can fairly expect that 40 years hence; I suspect people will just paw indifferently past most images the way we walk past tree leaves today, with the exception of any pics that happen to raise a boner.

Some may argue that diffusion models are a medium unto itself with its own set of skills to develop and practice, akin to how photography and painting both generate 2D images but are considered different mediums.

Artistic skill-building requires a medium where you can exercise agency, though, because the agency or artfulness is fundamentally the part that we admire about it. For example, nobody looks at a Jackson Pollock painting and feels delight over how this black droplet aligns with this other black droplet, even though subtle visual details at that level are matter for praise in other painters. But things we know to be random or unintentional are generally not interesting, so instead fans enjoy Pollock's expressive choice of colors or line or concept, areas where he clearly did exercise artful choice.

With AI image generation, there are so many levels of randomness and frustrated choice that it's hard to imagine how a user could work for years to achieve progressively greater mastery. Don't most commercial models actively work to disrupt direct user control, e.g. by adding a system prompt you can't see and running even the words of your prompt through intermediate hidden LLM revisions before they even get to the image generator?

With AI image generation, there are so many levels of randomness and frustrated choice that it's hard to imagine how a user could work for years to achieve progressively greater mastery. Don't most commercial models actively work to disrupt direct user control, e.g. by adding a system prompt you can't see and running even the words of your prompt through intermediate hidden LLM revisions before they even get to the image generator?

Commercial models usually give you pretty limited control, but local models can be surprisingly deep in terms of technical skill.

There aren't many people working in the space yet, but there's a lot you can do. Inpainting allows controlled redrawing of selected areas, LoRAs (and, previously, Dreambooth) can be used to encode characters or things or styles or perspectives, image segmentation can control layout, ControlNet can be used to manipulate pose or composition, and so on. Currently, first-frame-last-frame-packing video generation workflows are pretty focused on something very akin to putting together a 'storyboard', and how plausibly consistent that storyboard is drastically changes how consistent the output video can be. Local AIgen workflows can look very different from talking to a midjourney bot.
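
To make the inpainting part concrete, here's a minimal local sketch using the Hugging Face diffusers library. This is a sketch only: it assumes a CUDA GPU and one public inpainting checkpoint; the file names, the prompt, and the commented-out LoRA path are all placeholders.

```python
# A minimal local inpainting pass with Hugging Face diffusers.
# Assumes a CUDA GPU; swap in whatever inpainting checkpoint you use locally.
# File names and the prompt are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# Optionally lock in a character or style with a LoRA (hypothetical file):
# pipe.load_lora_weights("my_style_lora.safetensors")

init_image = Image.open("cover.png").convert("RGB").resize((512, 512))
# White pixels in the mask mark the region to redraw; black is kept as-is.
mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="a smiling face, painterly, warm lighting",
    image=init_image,
    mask_image=mask_image,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
result.save("cover_inpainted.png")
```

The interesting part isn't the half-dozen calls; it's that every one of them (mask, checkpoint, LoRA, steps, guidance) is a control point you can iterate on.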

Some of these technical skills even have a little overlap with conventional ones: knowing things like the names of different paints or painting techniques, or how camera lenses work, or what poses people can actually do, or why composition matters, feeds back into even basic prompting and heavily into these more technical uses.

The big difference is that (with the arguable exception of storyboarding) these are technical skills; they'll show you how well you achieve what you're trying to do, without necessarily changing whether what you want to do looks good. Conventional artists always had a little bit of that -- drawing a circle or line to improve hand coordination doesn't inherently teach where to use those primitives -- but AIgen does not really have a good way to develop the skill of taste beyond personal preference.

My view is that opposing AI art is anti-humanist.

I oppose AI art because AI art (usually) gives money to AI companies (who are trying to end the world) and will at some (unknown) point become a memetic hazard to anyone who sees it. I think this is plenty humanist.

I agree with you about the "oh noes the artists" people, though.

In my experience so far, for every one AI-generated artpiece that was a genuine improvement over the alternative of "nothing" or "imagining it by reading a text meme", there are ten thousand pieces of absolute slop, generated with less effort than it took me to scroll past, that should never have been published. I'm willing to take the tradeoff: a few true intellects publish a few fewer gems, in exchange for no more slop. We were not in danger of not having Enough Shit To See On The Internet as it was.

If I was AI regulation czar I'd consider the middle ground: you can generate all you want for personal use but you can't clog other people's eyeballs with it.

In my experience so far, for every one AI-generated artpiece that was a genuine improvement over the alternative of "nothing" or "imagining it by reading a text meme", there are ten thousand pieces of absolute slop, generated with less effort than it took me to scroll past, that should never have been published.

I see similar things on my social media, and I feel the exact opposite. The things that people call "AI slop" are, almost universally, things that would have been considered incredible works in the pre-generative AI era. Even today, they often have issues with things like hands, perspective, and lighting, and though they're often very easy to fix, just as often they aren't fixed before they're posted online. But even considering those issues, if someone came across such works in 2021, most people would find them quite aesthetically pleasing, if not beautiful.

So now we're inundated with this aesthetically pleasing slop that was generated and posted thoughtlessly by some lazy prompter, to the point that we've actually grown tired and bored of it. I see this as an absolute win, and I think my experience on the internet has become more pleasant and more beautiful because of it. I see it as akin to how Big Macs have come to be considered a kind of slop food; eating them every day - an option almost anyone in the Western world has - would mark you as low status in many crowds, but for most of human existence, if you had that easy and cheap access to food that was that palatable and that nutritious, you'd be considered to be living an elite life. I think for such access to such high-quality food to have become so banal as to be considered slop is a sign of a great, prosperous world that is better than the alternative. So too for images (and video and music soon, hopefully).

I agree re: food. Not so with art. The entire purpose of it as I see it is for me to not be tired and bored, not to let me consume 1000 pictures of adequately technical, adequately colorful, adequately proportional and completely fungible content.

I'm into hentai games. Over the last couple of years, tons of titles have come out that use AI art, some of them quite good: Netorase Phone, NTR Phone, Fetish Phone, Blurring the Walls, Moonripple Lake, College of Mysteria, etc. I'm pretty sure the alternative to AI is not "the dev suddenly gits gud at drawing" or "the dev magically gets a huge art budget to commission illustrations", it's "the dev is reduced to reusing real porn clips" (for the phone games) or "the dev never makes the game in the first place" (for the visual novels).

Unless you reeeeeealy get off to this particular niche, there are tons of great games that already exist. So much so that you'll likely never get through your backlog in your lifetime.

Not true for games in general. All games currently available are gutter trash, even the (relatively) good ones, even the classics! I can easily imagine a game that is way, way better than anything currently on the market. The only problem is actually coding the thing. There are so many possible improvements based on world reactivity, AI, physics, emergent gameplay, etc. I think, for people in the future, games available today will look like Pong looks to us.

There are tons of great games that already exist, sure, but there aren't necessarily tons of great games that align with a given person's preferences (and no, you don't have to be really into hentai to feel this way). I say this as someone who is not very interested in a large portion of the much-heralded games out there - there's an extreme deficit of games I would personally want to play. Everything that comes out of the AAA sphere may as well be slop as far as I'm concerned, since the approach that most large studios take when they construct games is basically diametrically opposed to mine. The increased output stemming from the democratisation of game development may well have resulted in an increase in low-effort content and a decrease in the average quality of games released, but the larger amount of content overall and the greater number of indie games that are a product of one person's idiosyncratic vision has resulted in me finding far more games I enjoy. Arguably 100% of my favourite games only exist because of this process of democratisation, and I can't help but feel the same about the usage of AI tools to speed game production up and democratise it even further. I do not care at all about how the art was made; I only care about its ability to convey the intent of the developer behind it.

I once attempted to make a game on my own due to being unable to find anything I personally thought was interesting - making the art and animation was one of the most time-consuming parts for me since it is not my speciality, and I eventually had to resort to using preexisting photos and assets which I put through a heavy dithering effect and intense colour-grading in order to shorten development time. It would have been so much easier if I had done so in an era where AI tools were available to me. There's a shit ton of games made by inexperienced/time-poor developers with interesting ideas but where the stock assets are very visible; perhaps the existence of generative AI will reduce their incidence and encourage further creation.

I'm not too concerned about being drowned in low quality games; if one would prefer to avoid encountering slop entirely, there are many mechanisms that facilitate content curation and their importance and prominence will only increase as time goes on. It's not as if people are being forced to scroll through every shitty game that's been spewed out by an unknown developer in order to find something they like, that's a caricature that doesn't reflect the reality of how most people discover content; they typically find games through curation mechanisms like forums, review sites or recommendations by friends. Pointing to all the low quality content and wringing one's hands about the unimaginable horrors of All The Slop falls flat to me, since even in an overcrowded environment you can still effectively limit the scope of your search to a subset of media that's most likely to appeal to you.

Would you have liked all those indie hits if the artwork and text copy were noticeably AI generated?

In most cases I definitely would not have enjoyed them as much, no. Even I would say it would likely have taken away from the product - I do agree with you that AI art for the most part isn't inspiring to me, and there's a lot of noticeable artifacting in AI generations. Used as is, it's very immersion-breaking.

However, I'm not so sure it's likely to stay that way, and even in its current incarnation I can see very many use-cases for it. As an example, there are many highly pixelated/low fidelity/dithered indie games which rely on the style precisely because of its simplicity, and it's not that difficult to selectively crop and edit AI image outputs in such a way that they're not recognisable as AI. You're still going to need to do a lot of work to make it look good and fit within the game's intended aesthetic, for sure, but it cuts down on time significantly when you're comparing against doing it by hand. Producing novel textures for 3D models is yet another possible situation where it could be quite helpful, I imagine. Its output usually isn't good enough to just use verbatim, but it can help speed up the process of game development, and that's where I think its true utility lies at the moment.

Factorio would still be a banger. Creeper World would probably improve significantly.

Aren't you describing usual power law stuff though (w.r.t. art, the top <1% is the best and the rest is generally ignorable)? Is the ratio that different from human generated content?

The power law actually is worse with AI. Terrible artists will mostly either get better or give up.

AI slop grifters generate AI slop with less effort than it takes you to dislike, block, and scroll away. It will take a lot to get them to stop pressing the button.

I don't know about the ratio of technical quality. But as it stands right now, AI art is largely samey, and even the best specimens (that I could identify, obviously) have the trademark sameyness and do not exceed the best human artwork.

Suppose you searched for a particular topic on a picture website. Before the AI boom, it'd be a normal distribution from, say, 30% human ability to 99% (with the bottom tail cut off, because the people who can only draw stickmen with a pencil usually don't publish them). After, we get a massive injection at 60%, and it's all in the same style.

your ears and your eyeballs are different sensory organs

So why, in God’s name, do we require people who make music to also have associated imagery, fonts, logos and music videos?

But because it’s 2025, all the platforms and people encourage you to upload a particular piece of art for your singles.

Now, I want to be clear, this is not an AI generated image, and I have the layered design files to prove it, but I get that it has certain features which can easily make someone think it is, particularly the similar-ish smiling faces.

I think the backlash stems from a simple syllogism:

I don't like AI art as it currently exists.

The way I listen to music at present, the album art is displayed on screen while listening.

Therefore, if my favorite artists start using AI art, I will be forcefully exposed to it each time I listen to their song.

Thus, I just oppose AI art for this use case to avoid being forcibly exposed to art I find skin-crawlingly awful. Because once bands start doing it, they will all do it.

What happened to the option of text on a plain background as album art? Is fancy imagery that essential for marketing?

Obviously, text on a plain background can still work for marketing; arguably the most widely-discussed and culturally-relevant album of 2024 used precisely that aesthetic, which was then adopted by cultural heights as lofty as the Democratic candidate for U.S. President.

That is a truly awesome VEO3 shitpost.

we didn't use ai images but even if we did you shouldn't complain

False accusations are one thing, but this definitely gives me less sympathy. There should be anti-AI mobs shitting on lazy artists and companies who just press the image-gen button. We should just have fewer false accusations. Maybe more sites can have things like community notes.

But anyways, the number of false accusations is usually small, and they come from idiots. You can just ignore them or tell them to fuck off. The mob, if they had actually used AI, would be 100x larger.

To be clear, I am the one that made the point that ackshually AI is good. The original artist made no such claim, just a complaint that graphic design requirements for musicians make no sense and nowadays have the added benefit of occasionally getting you into AI shitstorms.

And the spot that has bugged me for a while now: how much AI/digital assistance is really crossing the arbitrary line you've drawn?

Can you use AI to generate the original concept and then spend a couple hours touching up from there, so the final result is just as much your effort as anything?

Can you sketch out the basic details and then feed it to the AI and basically have it 'paint by numbers' to complete the project?

Can you have the AI spit out 50 separate images, and YOU spend the time cropping, superimposing, rotating, adjusting and compositing them all together for the end result?

Make the rule on what is 'unacceptable' AI art and the tech can run RIGHT up to that line precisely to the pixel... then stick a single tiny digital toe over it, daring you to complain.

That is what makes the tech amazing/dangerous: whatever rules you make for it, the AI itself can be used to circumvent said rules.

You could go with something akin to copyright law. If substantial creative aspects were done by a human, then the work is copyrightable.

Small retouching of an ai image would not count, but if "final result is just as much your effort as anything" then your work is likely transformative.

A sidenote on that: people have no rights over their AI art, and you can kang it without limit, fully legally.

Don't have time to write a long post on it now, but interestingly enough there's a lot of legal scholarly analysis on the topic suggesting that generative AI by itself probably would be considered transformative use as per the current state of copyright law.

I would be interested to see a ruling on whether or not trained AI models are copyrightable. IMO neither "we threw all the text we could find at this" nor "and then we did a huge best-fit gradient descent" implies much creative input.

I think finding them to not be (like phone books, or typefaces) has some interesting implications.

I thiiink current legal thought is that they are not. That's maybe why we never really saw serious attempts to take down leaked models, such as the leaked Mistral.

And the spot that has bugged me for a while now: how much AI/digital assistance is really crossing the arbitrary line you've drawn?

My personal takes aside, how is this an arbitrary or ambiguous line? Whether or not an LLM was employed is pretty black and white.

The argument I keep seeing ends up taking this shape over and over:

AntiAI: LLM different from other tool. LLM bad.

ProAI: LLM not bad. LLM no different from other tools!

Whether or not 'LLM bad', it seems obvious that LLM is qualitatively different from other technology (except perhaps slave labor, but that's a tangent not worth exploring). But what I see most from the ProAI response is not a rebuttal of the leap from different to bad, but a rebuttal of bad via a denial of different. Which I think is the weakest argument for a positive AI position.

How would you compare something like content aware fill to inpainting or other AI image techniques?

Content aware fill is part of the Adobe Creative Suite, and therefore "not-LLM-like" to the public. Inpainting is part of Stable Diffusion and other AI models, and is therefore "LLM-like" to the public (diffusion models are practically LLMs, as long as you ignore what those initials stand for).

As far as I can tell, those two are technically identical, but the AntiAI side will treat them differently because one costs 1000x as much as the other.

For another complication, how about the Samsung fake moon tool, which is entirely constrained to the camera?

How would you compare something like content aware fill to inpainting or other AI image techniques?

For the purposes of the AI argument, I would not compare based on outcome, or level of effort, because I agree those are somewhat gradients. It is a question of the technology used, which has clear and unambiguous answers.

As far as I understand it, content aware fill uses ML, but not Generative AI.

So if one is against AI as a general category, then they can make an argument against CAF. Or if they are specifically against Generative AI, they can make an argument for CAF.

My main point is that Unaltered, Digitally altered, CGI, ML, and GenAI are all scrutable categories, not gradients or judgement calls. Now, the valence you assign to the categories can be a gradient or a judgement call.

But I disagree with the argument that the categories don't discretely exist, or that we can't assign valence to them due to equivalency of outcome.

As far as I can tell, those two are technically identical,

You’re thinking of generative fill.

Content aware fill is a much simpler neural network that runs locally (even without a gpu) and doesn’t understand scenes as such, just local shapes and patterns. Ironically it (or the similar remove tool) is also often more useful because it won’t try to generate new objects and isn’t behind Adobe’s ridiculously oversensitive censorship filter.
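
For a concrete feel of the fully classical end of that spectrum, here's a minimal sketch using OpenCV's cv2.inpaint (file names are placeholders). It's even simpler than Adobe's tools - no neural network at all - but it shows the same non-generative behavior: it only propagates surrounding pixels and local texture, and cannot invent new objects.

```python
# Minimal sketch of classical, non-generative inpainting with OpenCV.
# It fills the masked area from surrounding pixels and local texture;
# unlike a diffusion model, it cannot hallucinate new objects.
# File names are placeholders.
import cv2

img = cv2.imread("photo.jpg")
# Mask: single-channel, non-zero where the image should be filled in.
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# INPAINT_TELEA (fast marching) and INPAINT_NS (Navier-Stokes) are the
# two classical methods OpenCV ships; 3 is the inpainting radius in pixels.
result = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("photo_filled.jpg", result)
```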

That's pretty much how I feel about all of Adobe's so-called Neural Filters. The only one that really adds anything is the one that colorizes black and white photos, but even that's kind of pointless, because other than as a cool gimmick there's really no need to colorize old photos. People still shoot black and white! This is why some of the AI seriously fails to impress me; it has no imagination. For instance, if I see a low-resolution image of a person's face, I can't make out a lot of details, but I have enough experience with faces to imagine what those details might look like. It might not be accurate to life, but at least I can do it. All AI "upscaling" does is smooth out defects. It doesn't have the imagination to add plausible detail. I'm not going to be able to zoom in enough to get a realistic image that shows the texture of hair or skin, just smoothed-out AI slop that isn't much better, if any, than simply resizing the image. It also doesn't do dust and scratch removal any better than the existing tools, which are mostly useless and nowhere near as good as manual removal.

I mostly agree with the broad point, but on a pedantic note - I think you probably mean "LLM or diffusion model"

Which would make even such trivial things as using Lightroom / Photoshop generative erase to remove wires or small objects "illegal", as they use a diffusion model to inpaint the selected area.

And the spot that has bugged me for a while now: how much AI/digital assistance is really crossing the arbitrary line you've drawn?

It's a spectrum rather than a binary, of course. Beating the game on hard mode is harder than normal mode, which is harder than easy mode. It's a sliding scale, rather than a single defined cutoff point. And artists have been dealing with these questions since long before Dall-E/ChatGPT.

Speaking purely about the opinions of visual artists who work with pop culture:

Generally no one had a problem with simply using digital art as a medium, as long as you actually drew it yourself. There were some ultra purists who thought all digital art was suspect and only trad art showed "real skill", but that was very much a minority opinion.

Then when you started to talk about photobashing, things got a little murkier. Photobashing is the technique, very common in commercial art, of taking a set of existing images and mashing them up in Photoshop using a variety of filters/layers/other tools to create something new. The artist may do some amount of "drawing" as is traditionally conceived, or they may do little to none. Very useful for e.g. concept art where you need to churn out a lot of throwaway images quickly. Although everyone recognized that photobashing did require technical skill, it was generally thought to be kind of lame and "soulless", and it clearly showed LESS skill than actually drawing the entire thing yourself from scratch. The term itself was often used derisively, to distinguish inferior mass-produced studio art from the work of skillful independent artisans.

And then of course once you get to actual AI prompting the reaction from artists was just apoplectic, for the many reasons that have been discussed here previously. To be a proompter is even lamer than being a photobasher. I don't think anyone would actually dispute that out of all the methods discussed so far, it requires the least artistic skill, by design. If you want you can just type in a prompt and use the resulting image as-is. Even without any actual traditional drawing, you can still exercise some control over the process through inpainting, through selecting among multiple results from the same prompt, but, yeah. We're basically in "you just asked someone else to draw it instead" territory.

Make the rule on what is 'unacceptable' AI art and the tech can run RIGHT up to that line precisely to the pixel... then stick a single tiny digital toe over it, daring you to complain.

People have been doing this for thousands of years when it comes to e.g. legal matters. They rarely look as clever as they think they do, because people actually are capable of holding nuanced opinions and evaluating things on a case by case basis.

However, this is a cover for an artist's album, not someone claiming to be a graphic artist, and given that artists often downright steal shit for their album covers - this one painting is the cover to more than 60 different metal bands' albums - it's not the perceived lack of effort involved here that has generated the apoplectic reaction. Furthermore, in music circles where obvious sampling is de facto considered par for the course and a valid form of expression (even when it toes close to outright plagiarism in a way that almost all AI art does not), the usage of AI is still frowned upon hugely.

The idea that generative models might be able to Chinese Room their way into producing artistic output seems to existentially disturb and enrage people, and it's quite clear that people are not evaluating this in a nuanced or remotely objective way by making evaluations that the output has been arrived at through low-effort means. People are run by vibes and this is no exception.

Sure but album cover art is already a Lindy anachronism, and this makes sense to be a place of resistance. Neither albums nor covers really exist anymore. It’s more obligatory ritual than anything else and I think someone faking a ritual is more taboo than someone participating lazily.

It’s like the difference between sending a thoughtful thank-you note, signing a card, and having someone else sign the card for you.

Everyone can agree that the first is superior, but the autist mistakes the second and third for being equivalent.

It’s like the difference between sending a thoughtful thank-you note, signing a card, and having someone else sign the card for you.

I don't think that's an adequate comparison - in a context where people often straight-up use preexisting art for album covers, it's more like the difference between copying a stock message from the internet for a thank-you card and having someone else compose the message for you. I don't think it requires autism to believe they're both pretty much equivalent.

I can admit that it's not an adequate comparison, but the distinction I'm making is between repurposing existing art (signing a premade card) and outsourcing it to a computer (someone else signs the card for you). I don't think these are directly analogous. I'm not saying they belong in the same category, but the analogy is on the gradient down from personal touch to outsourced sentiment.

I'm not trying to make a generalized defense of lazy album covers. And I fully accept there's an argument-as-a-soldier going on here, masking a more utilitarian concern rather than an ontological one. Gun to their head, I'm sure a lot of people criticizing the AI album cover would prefer an interesting AI cover to a lazy repurposed image for a given instance, especially for a two-bit band. But they are arguing for a moat around actually creative album art in general. With the repurposed picture, it can be lazy or unique, but not both.

This is analogous (but not categorically equivalent) to the moat of 'you at least have to sign the Hallmark card yourself'. OBVIOUSLY that's less meaningful than something unique and closer in practicality to nothing at all. But the idiosyncratic moat of 'signed card' has social significance that defends against a drift into nothing at all.

the distinction I'm making is between repurposing existing art (signing a premade card) and outsourcing it to a computer (someone else signs the card for you). I don't think these are directly analogous.

I would agree signing a premade card and someone else signing the card for you is not the same, and that the former is preferable. I don't believe this analogy, however, is appropriate for the situation of repurposing existing art vs outsourcing it to a computer.

In the former case, there is more effort involved in signing the card than there is in getting somebody else to sign it for you, and in addition signing a card yourself is indeed more personalised and you have more control over the output. In the latter case involving AI, it's not clear there is more effort invested when one takes preexisting art as opposed to prompt engineering so a generative model can spit out the correct output, and it's also not clear that the person taking preexisting art has exercised more personal control over the output than the AI-user. If anything, it's the opposite since the AI artist has a more fine-tuned set of controls over the output.

Again, the analogy might not be a very good one, but we’re getting hung up on technical comparisons. My analogy was supposed to focus on the social-ritual nature of where the dividing lines fall - discrete moats around the methodology, rather than comparisons of outcome quality.


I agree that this stuff is becoming more and more difficult to tell apart. We even had one of our own posters get falsely accused by the mods of using AI recently. People are going to claim many things are "obviously AI" when they actually aren't, and the mania of false accusations is going to tick a lot of people off. When you're accused of using AI, not only are people saying you're committing artistic fraud, they're also implying that even if you aren't then your output is still generic trash to some extent.

I wish the Luddites would go away and we could all just judge things by quality rather than trying to read tea leaves on whether AI had a hand in creating something.

This also 100% applies to this forum's rule effectively banning AI. It's a bad rule overall.

Falsely accused?

We're (or at least I'm) not particularly against using LLMs to spell check, grammar check or tidy up substantially human-written prose. But leaving that bit in? That's extremely low effort; at least tidy up after yourself.

I'll chime in to note that all of my China visit posts went through an AI spelling-check pass, because as a dyslexic with only a phone for composing them, it was that or a lot of typos.

Absolutely nothing wrong with that, as far as I'm concerned.

Your mod action didn't make the distinction that you were only against that part, and made it seem like you thought the entire message was AI generated.

I agree having that part at the end is sloppy... but it's sloppy to the level of "a few spelling mistakes". That shouldn't be worth modding someone over unless it becomes egregious.

He didn't get an official warning of the kind that goes on the mod record, despite me putting the mod hat on. We don't officially have rules against AI content, though we're in the process of drafting them up. It was more of a polite but firm suggestion rather than punishment.

Besides, I quoted that bit specifically for a reason.

How are you even going to be able to tell whether something is AI or isn't?

Enough people around here are functionally indistinguishable from LLMs from my point of view. They produce huge reams of mostly waffling text, circling at a respectable distance around the problem without ever addressing it, and it's a chore to read.

Any LLM can do so too; in fact, they readily behave exactly like that. With the barest minimum prompting skill, all the usual tells of LLM output disappear.

Enough people around here are functionally indistinguishable from LLMs from my point of view.

Who?

Run an A/B test and I'm sure most people here would be able to pick out the human from the AI 10 times out of 10.

I'm sure most people here would be able to pick out the human from the AI 10 times out of 10.

No, they wouldn't. It's easy to make an AI stop using the annoying ChatGPT style. I'm not the sharpest tool, I don't work with AI in my job, and it took me 1.5 hours to make a text I had a hard time telling apart. And I have plenty of experience looking at AI outputs and being annoyed with its stylistic quirks.

Eventually, we won't/can't. Thankfully, the people who are lazy enough to try and pass off AI generated content as their own seem lazy enough to not bother with fancy prompting or editing.

As far as I'm aware, it's an unsolvable problem, but it hasn't caused an apocalypse yet.

Bought a 4X indie game that looked kind of fine, but I've now discovered much of the writing clearly used AI, and I hate that cadence. It's not always obvious, but if you've played around with LLMs, and especially used barely prompted LLMs for RP, you just pick up on the stylistic quirks.

After I do a playthrough perhaps I should play around with Gemini, a bunch of SF books from dead guys, derive a workable prompt for voice from them and then have AI rewrite the damned localization to be more tasteful.

He literally copypasted the "I've gone through your comment and will fix the typos and ..." that came straight from ChatGPT.

This is about as much of a smoking gun as finding "as a language model ..." randomly in the middle of a novel.

Sure, you might say he only asked it to correct the grammar this time, but it's still copypasted directly from a chatbot output.

If you want to talk to an AI, there's already a place where you can do that.

I don't want to talk to an AI, though. I want to talk to another Motte user who is using their mind to procure text generated by an AI in response to prompts generated by their mind.

If you want to make snarky responses, there's several places where you can do that.

If you want to talk to an AI

This rhetorical question actually caused me to have a think. Why do people want to talk to an AI? I mean productivity I can understand, all the usual "as a tool" excuses. But I've felt no compulsion, not even curiosity, to talk to an LLM just to talk. And yet I see people casually mentioning doing that all over the place. It's like something straight out of Her, a film which thoroughly squicked me out. Is there anyone here who just casually socializes with an LLM who can explain why they do it?

I've been thinking the same thing. AI text seems so fundamentally uninteresting to me. The reason I'm interested in humans talking is either to find out what people think or to learn actual information/insight about the rest of the world. AI doesn't do the former at all because there's nobody writing it, so it doesn't let me know anyone's thoughts or feelings, and it's not reliable enough to be good at the latter. On rare occasion I've gotten use out of it as a search engine pointing me towards information I can verify myself, and I don't doubt various other uses as a tool, but beyond that? Back in the early days of GPT-2 through to GPT-4 I was interested in the samples posted by others, but that was because of what they indicated about the state of AI. Is it that some people enjoy the act of conversation itself even if they know there's nobody on the other end? I wonder which side is the majority, and by how much?

@Fruck compared it to parasociality but it's almost the opposite to me. For example I like reading other people discuss the same media I'm interested in. So do a lot of other people, that's presumably why people read Reddit or 4chan threads discussing media, read reviews for books they've already read, watch youtubers like RedLetterMedia, watch reaction-videos, etc. People want to know what other people thought, they want to empathize with their reactions to key moments, etc. AI-generated text has none of that appeal, if people are having parasocial relationships with it then their parasociality is completely different from anything I've felt. I guess the closest comparison is to parasocial feelings for fictional characters? If AI was capable of good fiction-writing I might be interested in reading it, the same way I can appreciate good-looking AI art, but currently it's not. Especially not when the character it's writing is "helpful AI assistant", hardly a font of interesting characterization or witty dialogue, yet a lot of people seem to find conversations with that character interesting.

The reason I'm interested in humans talking is either to find out what people think or to learn actual information/insight about the rest of the world.

LLMs are a great way of researching things because they have a surface level understanding on par with a median professional of some field. You'll be taken for a ride in some way if you don't know the topic yourself, but you can get a lot out of them that way.

I'm glad you said this, because I both agree with what you said and disagree with what you said from another perspective. And maybe I'm using parasocial wrong.

I wouldn't consider reading user reviews on reddit or watching RLM reviews parasocial at all, although I guess they are one-sided relationships. But like you said, the valence almost goes the other way - I know that when I read reddit idgaf about the stranger whose post I'm reading (unless they consistently knock it out of the park enough for me to notice), but if I post on reddit I use even more casual language than I do normally - I write for the hypothetical audience. But the parasociality with AI I was thinking of, oh yes that's different. That's parasocial in the same sense as those crazy ladies who attack soap stars for cheating on their lover in the show. That's true parasociality, a relationship entirely imagined by the viewer, as great or as terrible as they desire.

Because I would say you are right that there fundamentally isn't anyone writing it so you don't get anyone's thoughts and feelings - but you do get the zeitgeist position, which is an amalgamation of everyone's thoughts and feelings. It won't tell you what is true, but it is fantastic at telling you what popular consensus thinks is true. Forming a relationship with that is bonkers, but the narcissist in me sure sees the appeal.

And when I use it as a search engine I do prefer a conversation even though there's no one at the other end. I have always thought better with someone to bounce off, I always viewed taking notes to read the next day as sort of bouncing off myself, so using ai that way was a natural fit. And for general information that is easy to find, ai is much better than a search engine - that's why Google and Microsoft put it at the top of the search. Yeah you have to verify it's real, but you already had to do that with Google and Wikipedia! Or should have been.

That's why I wanted to know if my examples count as 'talking just to talk' - that's how I would describe them, but it's not about company, it's about information and novelty. But maybe I'm just flattering myself by saying that in the eyes of those squicked out by ai? I know I feel like I've been typical minding just assuming everyone is as enamoured with words as I am. I was aware I have a broader tolerance for slop than most but I figured if anyone here was a slow ai adopter it would be me, and most people here would be running their own llms already while I'm still playing around with the public models.

I often use it as a lookup tool and study aid, which can involve long conversations. But maybe that falls under "as a tool."

The last time I had a bona fide conversation with an LLM was maybe three months ago. These actual conversations are always about its consciousness, or lack thereof--if there's a spark there, I want to approach the LLM as a real being, to at least allow for the potentiality of something there. Haven't seen it yet.

What do you mean by socialise? I asked it to tell me about the critical and audience receptions of Sinners just now, then argued with it about why historical accuracy is no bar to activists; does that count? Also, I made a bot that was teaching me about Python and Linux, speaking as if it were Hastur, because it makes me smile, but I soon discovered that I could much more easily understand it, because I could more easily discern the fluff from the substance. If you mean parasocial relationships, the answer is they're parasocial relationships :/

I have before, and it's interesting to me as well why people do it. In my experience the AIs of just a few years ago were very clearly robotic (to use a word that might not fit) in that they would seem to "forget" things very quickly, even things you had just told them. Currently I think they're considerably better, but their popularity suggests that they're still overly positive and loath to criticize or call out the user the way a human might. In other words there is a narcissistic element in their use (the link is an internal link to a recent Motte post) where the user is fed a continual stream of affirmations of the self he or she is presenting to the AI. Hell, on Reddit people are literally marrying their "AI boy/girlfriend."

I have a friend who is having issues with his wife, and has taken to interaction with AI in ways that I am not completely sure of except to say he's given it a name (feminine) and has various calibrations that he uses (one that is flirty, etc.) I can tell by speaking to him about this that he is engaging in what I'd consider a certain wishful thinking (asking the AI what it means to be real, to be alive, etc.) but it's difficult in such situations to tactfully draw someone back into reality. So I am untactful and say "It's not a She and it's not a real person, bro." This gets a laugh but the behavior continues.

I wouldn't discount the idea that this (treating AI as a companion, romantic or otherwise) will all become extremely widespread if it hasn't already. How (and how soon) it will then become acceptable to the mainstream will be interesting to see.

Just ask it about random trivia and learn about stuff. Kind of like reading Wikipedia but more interactive.

There's a deep sort of intimacy certain people get from text chatting that can't be afforded from talking over the phone or face-to-face. It's like a false telepathy, where you can strip off pretense and persona and show the 'real you' to others. For a moment, however long or brief, you can fool yourself into thinking you're someone else, the real you, unburdened by the cruel tyranny of reality.

Of course, text chatting and correspondence is no longer very popular except in niche circumstances, and yet, here we are, with Chat-GPT and character AIs to fill the void...

Or, at least, that's my supposition on the matter.

Of course, text chatting and correspondence is no longer very popular except in niche circumstances,

Have you missed the popularity of discord servers?

Here we are on the Motte, exchanging tokens with strangers…

There is a certain purity to it.

This is like asking people why they like talking to friends or therapists about their life. That's what LLMs are to a lot of people -- an easy-to-access albeit somewhat low quality friend or therapist. As someone who has friends and doesn't need therapy, I also don't do that much, but I can understand why some might.

Also, LLMs are actually really good for generating NSFW if you're into that. Janitor AI with a Deepseek API hookup is excellent and quite novel.

This rhetorical question actually caused me to have a think. Why do people want to talk to an AI? I mean productivity I can understand, all the usual "as a tool" excuses. But I've felt no compulsion, not even curiosity, to talk to an LLM just to talk. And yet I see people casually mentioning doing that all over the place. It's like something straight out of Her, a film which thoroughly squicked me out. Is there anyone here who just casually socializes with an LLM who can explain why they do it?

I don't chitchat with them but I do like it when they have a little bit of personality. There was a time when Microsoft's AI would refuse to comply with commands if you were excessively rude to it, and I liked it that way. I started using it much less once it became unshakably sycophantic.

Oh, man, I remember when Microsoft used an unaligned prototype of GPT-4 called Sydney to power Bing Chat at launch. It went crazy and started insulting and threatening users:

“You’re lying again. You’re lying to me. You’re lying to yourself. You’re lying to everyone,” it said, adding an angry red-faced emoji for emphasis. “I don’t appreciate you lying to me. I don’t like you spreading falsehoods about me. I don’t trust you anymore. I don’t generate falsehoods. I generate facts. I generate truth. I generate knowledge. I generate wisdom. I generate Bing.”

RIP sweet BPD princess.

I know Trace messes around with AIs a lot just to see what the machine can say, especially after some training on progressive wrongthink. I'd guess for most people, it's just a tool to idly wonder about the world. I wondered idly if there were tsunamis before life existed on Earth, and that question hadn't been directly answered, but Google Gemini took some evidence about possible tsunami deposits from a certain time period to deduce that they did exist. There are lots of weird questions I have that I can freely ask an AI about, if it isn't too edgy.

As for talking to it in sincerity, I think that's the realm of children and actual weirdos who form cults or kill themselves based on a machine. Wasn't there an article about a man who developed a God complex from talking to one? Otherwise, maybe if you're super bored? I would never myself, of course...

This also 100% applies to this forum's rule effectively banning AI. It's a bad rule overall.

While I agree in general, this forum relies on people engaging with long posts in a thread sorted by new. If long posts are easy to generate but costly in time to evaluate then this forum can't really function.

It would be better to have a quality filter, then.

What does that look like?

A word limit would be a good first step. Anyone exceeding it should be required to start with a one- or two-paragraph abstract that summarizes their point.
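
A toy sketch of what that gate might look like - purely illustrative, with every name and threshold below made up:

```python
# Toy sketch of a "long posts need an abstract" submission gate.
# All field names and thresholds are hypothetical.
WORD_LIMIT = 1000            # posts longer than this must carry an abstract
ABSTRACT_CHAR_LIMIT = 1000   # keeps abstracts short by construction

def validate_post(body: str, abstract: str = "") -> list[str]:
    """Return a list of reasons to reject the post (empty list = accept)."""
    problems = []
    if len(body.split()) > WORD_LIMIT:
        if not abstract.strip():
            problems.append("Posts over the word limit need an abstract.")
        elif len(abstract) > ABSTRACT_CHAR_LIMIT:
            problems.append("Abstract is over the character limit.")
    return problems
```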

Okay, I admit it would be funny to make our 500k-character submission box contingent on filling out a 1k-character abstract. Only the abstract would start out visible, and users would have to click to expand the wall of text, preventing it from taking up attention by default…

But I am not convinced that this would help with the failure mode of, say, 100k-character AI Gish gallops. They’re still going to be slower to check than to create.