Culture War Roundup for the week of September 5, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


To which tribe shall the gift of AI fall?

In a not particularly surprising move, FurAffinity has banned AI content from their website. The ostensible justification is the presence of copied artist signatures in AI art pieces, indicating a lack of authenticity. Ilforte has skinned the «soul-of-the-artist» argument enough already, and I do not wish to dwell on it.

What's more important, in my view, is what this rejection means for the political future of AI. Previous discussions on TheMotte have demonstrated the polarizing effects of AI-generated content — some are deathly afraid of it, others are practically AI-supremacists. Extrapolating outwards from this admittedly selective community, I expect the use of AI tools to become a hotly debated culture war topic within the next five years.

If you agree with this much, then I have one question: which party ends up as the Party of AI?

My kneejerk answer to this was, "The Left, of course." Left-wingers dominate the technology sector. AI development is being pushed forward by a mix of grey/blue tribers, and the null hypothesis is that things keep going this way. But the artists and the musicians and the writers and so on are all vaguely left-aligned as well, and they are currently the main reactionary force against AI.

One possible model of the situation is that AI will be so disruptive that it should be thought of as being akin to an invading alien force. If the earth was under attack from aliens, we wouldn't expect one political party to be pro-alien and one to be anti-alien. We would expect humanity to unite (to some degree) against their common enemy. There would be some weirdos who would end up being pro-alien anyway, but I wouldn't expect them to be concentrated particularly on either the left or the right.

In the short- and medium-term, your views on AI will be largely correlated with how strongly your personal employment prospects are impacted. As you point out, left-aligned artists and journalists aren't going to be too friendly to AI if it starts taking their jobs (especially if it leaves many right-coded industries unaffected), regardless of what other political priors they might have.

I wrote an essay on the old site about how techno-optimism and transhumanism fit more comfortably in a leftist worldview than a rightist worldview, and I still think there's some truth to that. But people can be quick to change their views once their livelihoods are on the line.

I don't think this is going to be that big of a bane for the average artist. In fact, I think this will be much like other digital tools, which have allowed below-average artists to punch above their weight. AI will be quickly adopted by these folks. Their overall art will improve, and they'll be able to pump out a lot more content. But they'll likely suck at doing revisions, as the AI probably isn't going to be built with that in mind. So the average artist will be able to step in, using AI to create ideas and starting points, and then build off of that. AI will be the go-to for reference images.

And you'll have AI whisperers who are incredibly good at constructing prompts to get great results from AI.

I think artists largely fall into two camps: people who produce things that appeal to others, and people who produce things that appeal to themselves. Sometimes, in rare cases, the people who do their own art are able to appeal to the masses; and truly great artists can influence what appeals to the masses. When it comes to dealing with clients who are commissioning a work, some artists try to shove their vision onto the client, while others are able to take what their clients want and replicate it perfectly. But the great artist is able to take what a client wants, filter it through themselves, and produce something the client didn't explicitly ask for, but really wanted. Or something like that.

Anyways, over the course of the next few years, I imagine there will be a few scandals, from niche to mainstream, of artists using AI but representing it as human-made. What I'm really looking forward to is a scandal where a web personality turns out to be a complete fabrication, with all their art/work produced by AI.

Because at the end of the day, most artists online are popular only because of the work they put into creating a name for themselves and cultivating an audience. It's largely marketing, with a small amount based on skill. Some of it, to be honest, is a woman having a pretty face and a prettier body. And so the real threat isn't a computer that can make great art; it's a computer that can connect with an audience the way an 'influencer' or 'content creator' can. The social skill needed to amass an audience, and retain it, is far more valuable than drawing or any other craft. An AI that can replicate that is a direct threat to every 'influencer', whether they be an artist, streamer, Twitter journalist, etc.

Though that could also open the door for people with fewer social skills to do well: they could leverage AI to create a social identity, and even if they don't, their inept social skills will come across as more 'authentic'.

Imagine if that happened with acting. In a couple of decades, the movies made with actual human actors in front of a camera could end up with atrocious acting just so they seem more authentic.

Anyways, over the course of the next few years, I imagine there will be a few scandals, from niche to mainstream, of artists using AI but representing it as human-made.

Already here, technically:

https://www.washingtonpost.com/technology/2022/09/02/midjourney-artificial-intelligence-state-fair-colorado/

So the average artist will be able to step in, using AI to create ideas and starting points, and then build off of that. AI will be the go-to for reference images.

The problem with this reasoning is that AI capabilities scale up FAST. Just a year ago the predecessors of the current models were barely passable at art. One year from now, they could be exponentially better still.

And artists who use it as a tool are actually helping it learn to replace them, eventually! So this isn't like handing someone a tool that will make their life easier; it's hiring them an assistant who will learn how to do their job better and more cheaply, and ultimately surpass them.

Just a year ago the predecessors of the current models were barely passable at art. One year from now, they could be exponentially better still.

https://xkcd.com/605/

Here's another relevant XKCD:

https://xkcd.com/1425/

Eight years ago, when this comic was published, the task of getting a computer to identify a bird in a photo was considered a phenomenal undertaking.

Now, it is trivial. And further, the various art-generating AIs can produce as many images of birds, real or imagined, as you could possibly desire.
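
To make "trivial" concrete: with an off-the-shelf pretrained classifier, bird identification really is a few lines of code now. A minimal sketch, assuming torch/torchvision are installed; "photo.jpg" is a stand-in input, not a real file:

```python
# Minimal sketch: classify a photo with a pretrained ImageNet model.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()  # the resizing/normalization the model expects

img = Image.open("photo.jpg")         # hypothetical input image
batch = preprocess(img).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top_prob, top_idx = probs.max(dim=1)
label = weights.meta["categories"][top_idx.item()]  # e.g. "robin" for a bird photo
print(f"{label}: {top_prob.item():.1%}")
```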

So my point is that I'm not extrapolating from a mere two data points.

And my broader point, that AI will continue to improve in capability with time, seems obviously and irrefutably true.

How does this work? My understanding was that the only "learning" takes place when the model is trained on the dataset (which is done only once and requires a huge amount of computational resources), and that any subsequent usage of the model has no effect on the training.

I'm far from an expert here.

If they want to make the AI 'smarter' at the cost of longer/more expensive training, they can add parameters (i.e. variables that the AI considers when interpreting an input and translating it into an output), and more data to train on to better refine said parameters. Very roughly speaking, this is the difference between training the AI to recognize colors in terms of 'only' the seven colors of the rainbow vs. the full palette of Crayola crayons vs. at the extreme end the exact electromagnetic frequency of every single shade and brightness of visible light.

My vague understanding is that the current models are closer to the Crayola crayons than to the full electromagnetic spectrum.
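
For a rough sense of what "adding parameters" means in code, here's a sketch comparing the same toy architecture at two widths; the shapes are made up purely for illustration:

```python
# Sketch: one tiny architecture at two widths, with parameter counts,
# to show how "adding parameters" grows capacity (and training cost).
import torch.nn as nn

def count_params(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters())

small = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
large = nn.Sequential(nn.Linear(64, 2048), nn.ReLU(), nn.Linear(2048, 64))

print(count_params(small))  # ~17k parameters: the "rainbow colors" end
print(count_params(large))  # ~264k parameters: closer to the "Crayola" end
```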

Tweaking an existing model can also achieve improvements; think in terms of GANs.

If the AI produces an output and receives feedback from a human or another AI as to how well the output satisfices the input, and is allowed to update its own internals based on this feedback, it will become better able to produce outputs that match the inputs.

This is how a model can get refined without needing to completely retrain it from scratch.
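
As a toy sketch of that feedback loop (the model, inputs, and scoring function below are all hypothetical stand-ins, not any real system's API):

```python
# Toy sketch of feedback-driven refinement: a "pretrained" model gets small
# gradient updates from a score, rather than being retrained from scratch.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(16, 16)  # stand-in for a model that has already been trained
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def feedback_score(output: torch.Tensor) -> torch.Tensor:
    # Stand-in judge: rewards outputs close to a target. In practice this role
    # is played by a human rating or a second model (e.g. a GAN discriminator).
    target = torch.ones_like(output)
    return -((output - target) ** 2).mean()

for step in range(200):
    x = torch.randn(8, 16)            # incoming inputs/prompts
    loss = -feedback_score(model(x))  # maximize the score = minimize its negative
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                  # small update to the existing weights
```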

Although with diffusion models like DALL-E, outputs can also be improved by letting the model take more 'steps' (i.e. running it through the model again and again) to refine the output as far as it can.
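
A toy illustration of why extra steps help; real diffusion samplers are far more involved, and 'denoise_step' here is a made-up stand-in, not a real sampler:

```python
# Toy illustration: each extra "step" removes more noise, so more steps
# yield a cleaner output.
import torch

torch.manual_seed(0)
target = torch.zeros(4)  # stand-in for the clean image the model is aiming at

def denoise_step(x: torch.Tensor) -> torch.Tensor:
    return x + 0.2 * (target - x)  # pretend each pass strips 20% of the noise

for steps in (5, 20, 80):
    x = torch.randn(4) * 5.0  # start from pure noise
    for _ in range(steps):
        x = denoise_step(x)
    print(f"{steps:>2} steps -> distance from target: {(x - target).norm().item():.4f}")
```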

As far as I know there's very little benefit to manually tweaking the models once they're trained, other than to e.g. implement a NSFW filter or something.

And as we produce and concentrate more computational power, it becomes more and more feasible to use larger and larger models for more tasks.