
Culture War Roundup for the week of November 7, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I've been pretty obsessively playing around with AI image generation for the last 3 or so weeks, and after learning what I have in that time, it's struck me how the culture war arguments seem to miss the contours of the actual phenomenon (as with every other culture war issue). The impression I got from just observing the culture war was that the primary use of these tools was "prompt engineering," i.e. experimenting with and coming up with the right sets of prompts and settings and seeds in order to get an image one wants. This is, of course, how many if not most of the famous examples are generated, because that's how you demonstrate the actual ability of the AI tool.

So I installed Stable Diffusion on my PC and started generating some paintings of big booba Victorian women. I ran into predictable issues with weird composition, deformities, and inaccuracies, but I figured I could fix these by getting better at "prompt engineering." So I looked at some resources online to see how people actually get better at this. On top of that, I didn't want to just stick to making generic pictures of beautiful Victorian women, or of any sort of beautiful women; I wanted to try making fanart of specific waifu characters doing specific things (as surprising as it may be, this is not a euphemism - more because of a lack of ambition than a lack of desire) in specific settings, shot from specific angles, in specific styles.
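For the curious, that basic generation step looks roughly like this in Python with Hugging Face's diffusers library (I actually use a webui rather than writing code, so this is an equivalent sketch rather than my setup; the prompt and seed are stand-ins):

```python
# A minimal txt2img sketch with diffusers. The model ID is the standard
# SD 1.5 checkpoint; the prompt and seed below are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "oil painting of a Victorian woman, portrait, ornate dress",
    negative_prompt="deformed, extra limbs, bad anatomy",
    num_inference_steps=30,
    guidance_scale=7.5,
    generator=torch.Generator("cuda").manual_seed(1234),  # fixed seed so results repeat
).images[0]
image.save("victorian.png")
```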

And from digging into the resources, I discovered a couple of important methods to accomplish something like this. First was training the model further on specific characters or things, which I decided not to touch for the moment. Second was in-painting, which is just the very basic concept of doing IMG2IMG on a specific subset of pixels in the image. (There's also out-painting, which is just canvas expansion + noise + in-painting.) "Prompt engineering" was involved to some extent, but the info I read on this was very basic and sparse; at this point, whatever techniques are out there seem pretty minor, not much more sophisticated than the famous "append 'trending on Artstation' to the prompt" tip.
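To make "IMG2IMG on a subset of pixels" concrete, here's a rough in-painting sketch in diffusers terms (the filenames and prompt are placeholders; in this pipeline the mask convention is white = regenerate, black = keep):

```python
# Hypothetical in-painting sketch: regenerate only the masked pixels,
# leaving the rest of the image untouched.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init = Image.open("draft.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))  # white = redo, black = keep

fixed = pipe(prompt="a detailed, anatomically correct hand",
             image=init, mask_image=mask).images[0]
fixed.save("draft_fixed.png")

# Out-painting is the same idea: paste `init` onto a larger canvas, mask the
# new empty border, and in-paint that region.
```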

So I went ahead, using initial prompts to generate some crude image, then using IMG2IMG with in-painting to get to the final specific fanart I wanted to make. And the more I worked on this, the more I realized that this is where the bulk of the actual "work" takes place when it comes to making AI images. If you want to frame a shot a certain way and feature specific characters doing specific things in specific places, you need to follow an iterative process of SD generation, Photoshop edit, in-painting SD generation, Photoshop edit, and so on until the final desired image is produced.
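In code, that loop looks something like the following sketch (purely illustrative: the filenames, prompts, and fixed three rounds are made up, and the Photoshop edits happen outside the script between iterations):

```python
# Hypothetical iterative workflow: txt2img for a base image, then alternating
# rounds of external hand-editing and in-painting over a painted mask.
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline, StableDiffusionInpaintPipeline

txt2img = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# Round 0: crude base image from the initial prompt.
txt2img("two characters dueling on a rooftop at dusk").images[0].save("draft_0.png")

for i in range(1, 4):
    # Between rounds: hand-edit draft_{i-1}.png in an image editor, save it as
    # draft_{i-1}_edited.png, and paint a mask (white = regenerate) over the
    # region that still looks wrong.
    init = Image.open(f"draft_{i-1}_edited.png").convert("RGB").resize((512, 512))
    mask = Image.open(f"mask_{i}.png").convert("RGB").resize((512, 512))
    inpaint(prompt="a hand gripping a rapier", image=init,
            mask_image=mask).images[0].save(f"draft_{i}.png")
```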

I'm largely agnostic and ambivalent on the question of whether AI-generated images are Art, or whether one is being creative by creating AI-generated images. I don't think it really matters; what matters to me is whether I can create the images I want to create. But in the culture war, I think the point of comparison has to be between someone drawing from scratch (even if using digital tools like tablets and Photoshop) and someone using AI to iteratively select parts of an image to edit in order to get what they want - not someone using AI to punch in the right settings (which can also be argued to be an Art).

The closest analogue I could think of was making a collage by cutting images out of magazines or picture books and gluing them together in some way that meaningfully reflects the creator's vision. Except instead of rearranging pre-existing works of art, I'm rearranging images generated based on the training done by StabilityAI (or perhaps the opposite: I'm generating images and then rearranging them). Is collage-making Art? Again, I don't know and I don't care, but the question about AI "art" is a very similar question.

My own personal drawing/illustration skills are quite low; I imagine a typical grade schooler can draw about as well as I can. At many steps along the process of the above iteration, I found myself thinking, "If only I had some meaningful illustration skills; fixing this would be so much easier" as I ran into various issues trying to make a part of an image look just right. I realized that if I actually were a trained illustrator, my ability to exploit this AI tool to generate high quality images would be improved several times over.

And this raises blurrier lines about AI-generated images being Art. At my own skill level, running my drawing through IMG2IMG to get something good is essentially asking the AI to use my drawing as a loose guide. To say that the resulting image is Artwork that 07mk created would be begging the question, and I would hesitate to take credit as its author. But a professional illustrator's AI-generated image might look virtually identical to something they created without AI, except with a few extra details that the artist needed the AI to fill in. If I'm willing to say that their non-AI images are Art, I would find it hard to justify calling the AI-generated one not Art.
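The knob that controls how "loose" that guide is, at least in diffusers terms, is the img2img strength parameter; a rough sketch (the image path and prompt are stand-ins):

```python
# Hypothetical img2img sketch: `strength` sets how much of the input drawing
# survives. Low strength keeps the composition nearly intact (the trained
# illustrator's case); high strength treats the drawing as only a loose guide.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

sketch = Image.open("my_crude_drawing.png").convert("RGB").resize((512, 512))

faithful = pipe(prompt="oil painting of a Victorian woman",
                image=sketch, strength=0.35).images[0]  # stays close to the drawing
loose = pipe(prompt="oil painting of a Victorian woman",
             image=sketch, strength=0.8).images[0]      # drawing as a loose guide
```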

Based on my experience over the past few weeks, my prediction is that there will be broadly 3 groups in the future in this realm: the pure no-AI Artists, the cyborgs who are skilled Artists using AI to aid them along the process, and people like me, the AI-software operators who aren't skilled artists in any non-AI sense. Furthermore, I think the 2nd group is likely to be the most successful. The 1st group will fall into its own niche of pure non-AI art, and it will probably remain the most prestigious and quite populous, but it will still lose a lot of people to the 2nd group, since the leverage these tools afford an actually skilled Artist is significant.

Random thoughts:

  • I didn't really touch on customizing the models to be able to consistently represent specific characters, things, styles, etc., which is a whole topic unto itself, with its own vibrant community that I know very little of first hand. But it raises another aspect of the question of AI-generated images being Art or not: is the technique of finding the right balance when merging different models, or of picking the right training images and training settings to create a model capable of generating the types of pictures you want, itself Art? (See the sketch after this list for what merging looks like mechanically.) I would actually lean towards Yes on this, but that may just be because there's still a bit of a mystical haze around it for me, from lack of experience. Either way, the question of AI-generated images being Art or not should be that question, not whether picking the right prompts and settings and seed is.

  • I've read artists mention training models on their characters in order to aid them in generating images more quickly for comic books they're working on. Given that speed matters for things like this, this is one "cyborg" method a skilled Artist could use to increase the quantity or quality of their output (either by reducing the time required for each image or increasing the time the Artist can use to finalize the image compared to doing it from scratch).

  • For generating waifus, NovelAI really is far and away the best model, IMHO. I played around a lot with Waifu Diffusion (both 1.2 & 1.3), but getting good-looking art out of it - anime or not - was a struggle and inconsistent, while NovelAI did it effortlessly. However, NovelAI is overfitted, giving most of its girls a same-y look. There's also the issue that NovelAI doesn't offer in-painting on their official website, and the only way to use it for in-painting involves pirating their leaked model, which I'd prefer not to rely on.

  • I first learned that I could install Stable Diffusion on my PC by stumbling on https://rentry.org/voldy, which is quite a good guide. I learned later on that the site is maintained by someone from 4chan, and further that 4chan seems to be where a lot of the innovation and development by hobbyists is taking place. As someone who hasn't used 4chan much in well over a decade, this was a blast from the past. In retrospect it's obvious, given the combination of nihilism and degeneracy you see on 4chan (I say this only out of love; I maintain to this day that there's no online community I've found more loving and welcoming than 4chan).

  • Random "prompt engineering" tips I figured out over time: use "iris, contacts" to get nicer eyes; "shampoo, conditioner" seems to make nice hair with a healthy sheen.
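As for what "merging different models" means mechanically: in its simplest form it's just a weighted average of two checkpoints' weights. A sketch of that idea (the filenames and the 0.7 ratio are stand-ins, and I'm assuming plain SD 1.x .ckpt files that store their weights under a "state_dict" key):

```python
# Hypothetical checkpoint-merge sketch: a plain weighted average of two
# Stable Diffusion checkpoints. Finding the "right" alpha is done by eye.
import torch

a = torch.load("model_a.ckpt", map_location="cpu")["state_dict"]
b = torch.load("model_b.ckpt", map_location="cpu")["state_dict"]

alpha = 0.7  # 70% model A, 30% model B
merged = {k: alpha * a[k] + (1 - alpha) * b[k] for k in a.keys() & b.keys()}

torch.save({"state_dict": merged}, "merged.ckpt")
```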

What really boggles my mind about the current state of AI content generation is that we've basically looped back around to how I thought computer programming worked (or should work) when I was like 12.

My naive version of computer programming back then was "tell the computer in a somewhat specialized version of English what you want it to do, and it does its best to produce an output that matches that request based on its understanding of the terms in the prompt." This was somewhat informed by the sci-fi media of the era as well, wherein AI-as-servant was probably the default assumption ("Computer: Tea. Earl Grey. Hot.").

And in some cases this model felt vaguely correct. A Google search was basically putting in instructions or a descriptor into a text box and demanding the computer show you things that match those instructions. Or if you interfaced with one of those automated phone receptionists that understood voice commands, or played a text-based adventure game.

Then I learned a bit about how computers actually work, and realized how miraculous it is that they function at all, much less that they produce results even vaguely like what you expect. It was in a sense just a refined version of my previous model (describe in a VERY specialized language what you want the computer to do, and if you are precise enough it might actually do that!), but it demonstrated that one couldn't just expect a computer to accurately discern your intent from a simple sentence or two.

So I resigned myself to fumbling around with the relatively crude tools that smarter programmers put together to achieve results that take a substantial amount of technical skill to really perfect. In a sense it felt underwhelming that computers weren't really doing the work for you, just streamlining it a bit.

And now, out of seemingly nowhere, the ideal computer interface has become "tell the computer in a somewhat specialized version of English what you want it to do, and it does its best to produce an output that matches that request based on its understanding of the terms in the prompt."

Amazing.

> And now, out of seemingly nowhere, the ideal computer interface has become "tell the computer in a somewhat specialized version of English what you want it to do, and it does its best to produce an output that matches that request based on its understanding of the terms in the prompt."

Well, not out of seemingly nowhere, is it? Siri and other similar applications came out like a decade ago, and it's clear that both the industry and academia have been working on improving that sort of technology ever since.

There was indeed a seeming golden era, starting around 2017, in which voice assistants became actually effective at assisting.

But anything more complex than "Add [x] to my grocery list" or "give me directions to [address]" tended to elude them. Multi-step instructions were right out.

GPT-2 was the first indication that we might be able to overcome that limit, and GPT-3, to my understanding, was the necessary precondition to everything we're seeing now.

Although perhaps I should say it's less that the capabilities weren't foreseen or foreseeable based on the tech of the time, and more that they improved much faster than expected, in quick bursts.