
Culture War Roundup for the week of October 3, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I am old enough to have developed physical film in a real darkroom. Using negatives and developer was real work that took skill and helped build an appreciation for film photography. I think it was a really fun thing to learn, and I'm glad I did it, but oh boy is it useless now. Now I can pick up my digital camera and have it automatically focus, adjust settings, and snap off pictures at ~12 fps, apply all of the lens and color corrections, and spit out gigabytes of JPEGs onto a tiny memory card; then I take all of those photos and store them on my multiple-terabyte hard drive with backups in several locations.

I don't think the new-found ease of photography has rendered it inherently cheap, but it has certainly opened the floodgates to a morass of shitty, low-level photography. I shudder every time I see a 'gram-girl (or boy) taking some basic selfie at a scenic location. But there is still a lot of photography to appreciate: pictures that take real work, not just in getting the right shot but, even now, in setting up the camera. No matter how smart the camera itself is, you still have to be at the location, looking the right way at the right time, and no matter how good it is at selecting a generically good setting on its own, a skilled human can do better.

I installed Stable Diffusion a few days ago and, let me tell you, it's the real deal. My dumb, artistically-challenged caveman brain can put in 75 tokens or fewer of generic prompts, wait just a few minutes, pick from a slew of reasonably decent AI-generated images, and then spend an hour refining one down to something I really like. That, not my habitual gaming, is the real reason I will be upgrading my several-generations-old video card when the new 40XX series drops: to speed up that generation process. It's absolutely blowing my mind, and I find it so very exciting to think about how I'll be applying it to RPG or writing art.

But it’s still not magic. It has trouble taking very specific commands, it has trouble with anatomy, it has trouble with some prompts, it’s still limited in how long a prompt it can handle, etc. It has a lot of limitations, many of which will almost certainly go away with time, but for now I would liken it to having a decent artist who will immediately draw some art for you - except you can only communicate with them via tweets (less than that, even!). Much of the skill in using it comes from using other programs to clean up the images, removing artifacts and dropping in the right “seeds” of implanted features for the program to pick up. Another huge part is developing a better feel for how to give prompts and adjust settings to really get the most out of it. I expect the skill floor to rise over time, so yeah, we’ll be inundated with reasonably good generated art.

This is already the case, though! There are people who post whole dumps of elf art or pixiv manga art on Imgur, endless seas of generic fantasy concept art, so much dross filling DeviantArt with human-made but utterly indistinguishable work - work that people have poured so much time and effort into. Out of all that, only a few gems seem worth keeping around. How much worse can it get? I don’t see putting in prompts as terribly different from entering terms in a search bar; that a human drew every line in one and not the other feels totally irrelevant.

I think the deeper question is a feeling about how much the AI is actually creating art, and how much of it is just "it got trained on a zillion images and it's just cutting and pasting according to your prompts; you tell it you want a buxom blonde woman in a bikini sitting on a beach and it selects out of all the stored images of buxom blonde women, beaches, and bikinis and trims it as you refine your prompt".

I think the fears and opposition come from a place where "a human imagines the work, puts it together, creates something new," while the machine (so far) isn't creating anything, because it doesn't have a mind to think; it just does as it is told. Collage art: cutting out images created by humans and sticking them together in the combination that you - not even it, itself, but the human prompter - tell it to use.

Do you get what I mean?

> I think the deeper question is a feeling about how much the AI is actually creating art, and how much of it is just "it got trained on a zillion images and it's just cutting and pasting according to your prompts; you tell it you want a buxom blonde woman in a bikini sitting on a beach and it selects out of all the stored images of buxom blonde women, beaches, and bikinis and trims it as you refine your prompt".

If this is the deeper question, then it seems to come from just a fundamental misunderstanding of how AI art works. There's no cutting or pasting going on. Not unless you want to say that a human artist who develops his own personal style through observing pre-existing pieces of art and experimenting with what he can draw is just "cutting and pasting" from the images saved in his head from those observations.

My understanding of the mechanics of Stable Diffusion is very limited, but I don't think this "collage art" model is quite right. The computer doesn't really pull out whole chunks of images; it doesn't know what "blonde," "buxom," or even "woman" are. What it does have are statistical relations, so when it generates a bunch of noise, it pulls lines and shapes and colors out of that noise based on those statistics, depending on the prompts, then adds a bit more noise to that drawing and draws again, until eventually it has pulled a random-ish image out of the noise. This reminds me of when I used to sketch: I would lightly draw lines in pencil, then, as the concept firmed up, make my lines darker, until I was left with a fixed image I could commit to pen. But that's all mechanical skill, really; I can imagine scenes that I would never be able to sketch, much less bring to full art. The imagining and the drawing are separate to some degree. Is this better or worse? I don't know. But pencils and Photoshop don't think either, and no one seems to mind.
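For what it's worth, that noise-then-refine loop can be caricatured in a few lines of Python. This is a toy sketch, not the real model: the `target` value stands in for whatever the trained network has learned to steer toward, and all the numbers and names are made up for illustration.

```python
import random

def toy_denoise(steps=50, target=0.7, seed=42):
    """Toy caricature of iterative denoising: start from pure noise and
    repeatedly nudge the value toward a 'learned' target, re-injecting a
    shrinking amount of noise each step (like darkening pencil lines as
    the sketch firms up)."""
    rng = random.Random(seed)
    x = rng.gauss(0, 1)                # start from pure noise
    for t in range(steps, 0, -1):
        noise_scale = t / steps        # noise shrinks as we iterate
        x = x + 0.3 * (target - x)     # "denoising" step toward the target
        x = x + noise_scale * rng.gauss(0, 0.05)  # re-inject a little noise
    return x

print(toy_denoise())  # ends up close to the target, via a noisy path
```

The real thing operates on millions of pixels with a neural network predicting the noise at each step, but the shape of the process - repeated refinement out of randomness, not cutting and pasting stored pictures - is the same.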

But what has really made me ponder is that the way Stable Diffusion creates art feels similar to the way I create art; it appears to "think" somewhat the way I think. Perhaps that's why its unrealistic mistakes sometimes go unnoticed: my mind falls into the same trap it did and overlooks, say, an extra finger, because my mind doesn't sit there and count fingers, and neither does Stable Diffusion! It just takes what it sees, roughly maps the shape and position onto its "hand" map, and calls it a day. But even human artists have trouble with hands!