Culture War Roundup for the week of October 10, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


In support of mistake theorists and against outgroup reification

There's even more drama in the AI art sphere than before. Actually, multiple dramas, all around Stable Diffusion, as is customary now. Artists on Twitter are threatening a class action, half-seriously rallying around Greg "by greg rutkowski, trending on artstation, 8K" Rutkowski, Palmer and others; Danbooru is being slowly purged of content on takedown requests, in the wake of the NovelAI SD-based image generator release (drastically superior to earlier stuff, and allegedly tracing over whole pictures); the StabilityAI Discord has banned Automatic1111, a hero and pillar of the community and the developer of the leading UI, for being an asocial Russian asshole who implemented means to handle the recently leaked (by a third party) NovelAI models and allegedly stole some 20 lines from their proprietary code – apparently to placate NAI and send a message to future pirates and business partners about Emad's priorities; Voldy denies the theft and counter-accuses NAI of copying his prompt-weights code without license; Stability staff/mods, it is claimed, have taken over a community subreddit by guile, with no comment from higher-ups (update: Emad magnanimously agreed to concessions); Emad keeps postponing the release of an improved 1.5, citing great responsibility and «tweaks to handle extreme cases», which is taken to mean «castration at the dataset level». It looks like another Open-for-profit company has been revealed as an embrace-extend-extinguish scheme, and we will have to learn, after all, to pool resources on our own. Or maybe it's all catastrophizing by excitable FOSS fanatics. The situation is developing rapidly.

…But I'd rather leave the drama discussion to @Porean, seeing as he posted it first, and talk about something less spicy. After interactions with Hlynka here, I want to share an observation about the conflict-theoretic lens and unhelpful labeling of the outgroup – such as artists on 4chan's /ic/ board calling AI users «pajeet».

This has to do with Emad's origins and with the racism-tinged contempt for «soulless tech bro nerds» pervasive among the 4chan creative intelligentsia, of course (the Twitterati attack soulless tech bro nerds as such, without racial qualifiers). No equivalent prejudice against South Asians exists in Russia, so there, AI users and people arguing in favor of this tech are labeled «neuroschizo» instead. I wonder about other sectors of the Internet.

Yes, singular «pajeet», singular «schizo». It's not just a meme. They (well, many of them) report it to the mods as such; they whine about a sleep-deprived obsessive fanatic who keeps pestering them in their strongholds (a not-so-implausible scenario, but clearly wrong in this case). And I posit that this is Indicative Of A General Failure-Prone Tendency Of Conflict Theorists, who have a powerful presence here. I have ridiculed Scott's Conflict-Mistake framing myself. But the core objection – namely, that the Conflict model is evidently true, as is people's willingness to lie in service of their terminal goals – may itself be a cognitive poison pill.

Anonymous imageboards have been a mighty forge of internet culture. What is often underappreciated is how strong they are pound for pound, in terms of active users or post counts – even the biggest 4chan boards are on par with a middling subreddit or Discord/Telegram chat. Why is that? Freedom of expression, lack of reputation, and all that jazz, you know it. But I think they're also a bit similar to latent diffusion models: they are more efficient because they compress the trappings of a social network into a lower-dimensional space. By stripping identification and the complexity it can carry, they allow – nay, force – individual voices to be associated instead with archetypes of groups, in every individual interaction. You cannot be a token, like on Facebook or what have you: everyone is a type, but which type depends on the mood and the topic. This inflates the effective population size to parity with a big society, where full connectedness is impossible and knowledge about others must spread by rumor and vague stereotype. It makes boards a self-running, accelerated social experiment. (Admittedly, this theory needs more polish.)

Anons know, of course, that they are few in number (although a Legion). And they can see that people are pretty stable in their quirks. And they are aware that people can lie when it serves them. So they overcorrect into reifying recognizable opinions as the marks of an individual, or at most of a small coherent group. Someone sneering at «Chuds» comes, in /pol/ mythology, from some specific Discord – and needs to be reminded that he «will never be a woman». On /ic/, someone explaining how latent diffusion works is… «cool story pajeet».

It's an experiment that reproduces, at small scale, the superstitious nature of network-age paranoia. In larger communities, the same assumptions are applied at the group level. Everyone who disagrees with me is a Russian bot! Unless I'm a Putin loyalist, in which case it's gotta be ЦІПСО, the Ukrainian propaganda division (that's what I am these days, according to many of my compatriots). If you're an American right-winger, it's some sort of GloboHomo WEF talking head. If you're a good progressive, it's probably a Fascist representing a unified anti-Lib front.

This is psychologically comforting for a few simple reasons.

First, such a problem is very legible and simple. There's no structural reason for the thing that oppresses you to exist and be the way it is, no grandiose system of incentives – just the malign will of a finite set of human actors, fallible flesh and blood with a self-serving narrative.

Second, it's surmountable. Calling the enemy out is half the battle; getting him banned is another one-fourth; after that, you just gather up the boys and attack his turf. The hostile group does not represent the majority of the population (it's usually puny), is easily identifiable, and is kinda ridiculous. Maybe it's just one weirdo, a «pajeet» or a «chud» at that.

Third, and most importantly, it excuses ignorance. You can plug bananas in your ears, because conflict theory predicts that the enemy will lie, or try to deceive you while not technically lying, to demoralize you. And why would he keep investing effort into that, coming up with arguments tailor-made for you? Of course, because his onslaught isn't really going according to plan – in part precisely because people are not falling for it! That's what those artists think, too. AI proponents are lying collectively to break their spirit; they just need to wait it out while the pajeet runs out of steam; they don't need to adapt.

They're wrong.

It's unhelpful to have a dead-wrong model of the conflict they are actually in. One big and obvious reason: it precludes communication with people who are different but not the enemy, and who are trying to help – or at least negotiate. In trying not to be a 0HPLovecraft-style quokka, such a conflict theorist ends up being simply a myopic rat, destined to be made marginal and obsolete. The great discovery that counteragents might lie is a point of wholly unreasonable pride for a certain brand of reactionary. It's also the source of a delusion as damaging as the inability to conceive of bad faith.

Tl;DR: Conflict theories have a failure mode of reifying the opposition, one that can lead to cognitive closure and a lack of response to arguments. The very assumption of bad faith, without extra precaution, bootstraps the development of theories about how bad-faith propaganda is being delivered to you, for what reason, and by whom. From then on, truth is ever harder to find. People should remember that before assuming they can see through the opponent and dismissing him out of hand.

Okay, you say the penny's gonna drop and it will be illegal to train (and sell) art generators on copyright-protected data. But what, then, about other types of AI? Can you train a language-translation AI on parallel corpora consisting of copyrighted books and other texts? Can Google crawl the web and train image captioners on any data they find? What about their search engine using language models?

It will be interesting to see where the line gets drawn.