
Culture War Roundup for the week of September 5, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


To which tribe shall the gift of AI fall?

In a not particularly surprising move, FurAffinity has banned AI content from their website. The ostensible justification is the presence of copied artist signatures in AI art pieces, indicating a lack of authenticity. Ilforte has skinned the «soul-of-the-artist» argument enough, and I do not wish to dwell on it.

What's more important, in my view, is what this rejection means for the political future of AI. Previous discussions on TheMotte have demonstrated the polarizing effect of AI-generated content: some are deathly afraid of it, others are practically AI-supremacists. Extrapolating outward from this admittedly selective community, I expect the use of AI tools to become a hotly debated culture war topic within the next five years.

If you agree on this much, then I have one question: which party ends up as the Party of AI?

My kneejerk answer was, "The Left, of course." Left-wingers dominate the technology sector. AI development is being pushed forward by a mix of grey/blue tribers, and the null hypothesis is that things keep going this way. But the artists, musicians, writers, and so on are all vaguely left-aligned as well, and they are currently the main reactionary force against AI.

I think it will depend mainly on how the issues of "AI racism" and "AI profits going to the top 1%" end up playing out. The left is the party of regulation, and there is plenty they'd like to regulate here. Generally, the left's stance towards things it wants to regulate is not especially friendly.

I just see AI as perniciously resistant to regulation, unless you have near-unanimous buy-in from all the other countries too.

It has already proven impossible to regulate 3D-printed weapons. I sincerely doubt we'll be able to regulate all the compute on the planet tightly enough to prevent someone, somewhere, from training and distributing new machine learning models.

Stable Diffusion is an example of a group very explicitly releasing a powerful model for the stated purpose of preventing it from being centralized and regulated.
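
To make "released" concrete: the weights are public, so anyone with a consumer GPU can run generation locally, with no API gatekeeper in the loop. A minimal sketch using the open-source diffusers library (the model id, prompt, and filename here are just illustrative; this assumes the publicly released v1.4 checkpoint and a CUDA-capable GPU):

```python
# Minimal sketch: running the publicly released Stable Diffusion weights
# locally via the open-source `diffusers` library. Model id, prompt, and
# filename are illustrative; assumes the released v1.4 checkpoint + a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # public checkpoint on the Hugging Face Hub
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a watercolor painting of a fox").images[0]
image.save("fox.png")  # no central service involved at any point
```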

I just see AI as perniciously resistant to regulation

People said that about the internet too.

Hasn’t helped out Kiwifarms that much.

Last I was aware, Kiwifarms is still operational, using a protocol and software created and funded by the US government with this use case as one of its objectives. It ain't exactly normie-compatible, since the URL is some long base32 abomination only a Linux user would think was acceptable, but you can still talk there with sufficient motivation.

The problem with subversiveness is that you can't be subversive without a value-add (something that gender politics mirrors very well with respect to men). As far as I'm aware, Kiwifarms doesn't actually have a value-add; it's just a place to sneer at people. Twitter, and those who would like to be employed there, are interested in socio-regulatory capture to enforce Twitter's monopoly on being the place you go to sneer at people (with "the only acceptable sneering is leftist sneering" being the subtext).

Contrast SomethingAwful, where a few cornerstones of current Internet culture had their beginnings (most notably the entire concept of the "Let's Play", a multi-billion-dollar industry today), or 4chan, whose unique mode of operation made its users the leaders in meme creation for many years, spawned a few games, and whose stream-of-consciousness format lends itself to a wide variety of topics and subtopics not properly serviceable by any other forum. Neither exists solely to sneer, whether by happy accident (4chan sucked up all the non-sneering SomethingAwful users; if SA hadn't been so Mean Girls, 4chan wouldn't exist in the first place!) or because the sneering only took over after the fact (SA and Twitter).

Kiwifarms itself may die, but there will be (already are?) plenty of sites that carry the torch as before, because the userbase still exists in physical reality and still wants a place to congregate.

I mean... that's why we have this site? To stave off a reddit ban and ensure we continue to have a forum for our purposes?

I don't think those two things are at all alike in the relevant aspects, though. If people in China stand up an alternative internet or an alternative Kiwifarms, I can't just run it on my own machine; a released model is exactly the kind of thing I can run on my own machine.

A sufficiently general interpretation of the argument ("people were calling X resistant to regulation but it turned out to not be, so if people call Y resistant to regulation, it will also turn out to not be") proves way too much though; the exercise of finding historical patterns that were broken is trivial.

A sufficiently general interpretation of the argument ("people were calling X resistant to regulation but it turned out to not be, so if people call Y resistant to regulation, it will also turn out to not be") proves way too much though; the exercise of finding historical patterns that were broken is trivial.

I think this is a very good point. It's a fully general argument that regulation can adapt to whatever technology it wants to regulate. The logistics of keeping a forum website running versus taking it down are sufficiently different from the logistics of running a piece of software on an individual PC (sans any online requirements) that we can't generalize the experience of one to the other.

Still, I must admit that I personally can't help but feel we will see history repeat here. Much like, say, KF, AI image-generation software seems likely to piss off a sufficiently sympathetic and loud group of people that a way will be found to clamp down on it. Maybe it will be death by a thousand cuts: censoring the research behind, the distribution of, and the output of such software. Maybe it will be more overt political action, with simply men with guns preventing people from producing and/or using independent personal computers. Maybe it will be some new, creative mode of regulation invented by AI software that no human could come up with today. When it comes to this stuff, where there's a will there's a way, and there seems to be a lot of will to prevent people from generating arrangements of pixels that one finds objectionable.

They make GPUs with intentionally crippled cryptocurrency-mining performance. Could they do something similar for AI? If it's technically feasible, I could see that happening, with only universities and tech giants having access to the good stuff.

This only works because cryptocurrency mining has minimal margins, so top-of-the-line mining hardware is barely profitable and slightly gimped top-of-the-line hardware is not profitable at all. ML computations are ultimately similar enough to general-purpose computing that you couldn't intentionally cripple them by more than some small constant factor without also crippling games. (I've written an ML paper myself where we accelerated training using graphics-only stone-age shader operations, because the deadline was near and we couldn't get our hands on modern GPUs fast enough.) And universities and tech giants with 10x faster hardware don't categorically win against a horde of tech-savvy internet users with the 1x version.
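
For intuition on why you can't gimp ML without gimping graphics: a matrix product decomposes into exactly the per-pixel multiply-accumulate passes a shader pipeline already has to support. A rough sketch (NumPy standing in for the GPU; the function name is mine, not from any real API):

```python
# Rough illustration (NumPy standing in for a GPU): a matrix product rebuilt
# from nothing but elementwise multiplies and adds over 2D arrays, i.e. the
# per-pixel operations any graphics-capable chip must already support.
import numpy as np

def matmul_via_elementwise(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Compute a @ b as a sum of outer products: one 'render pass' of
    broadcast multiply-add per element of the inner dimension."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((m, n))
    for i in range(k):
        out += a[:, i : i + 1] * b[i : i + 1, :]  # elementwise multiply-add
    return out

a, b = np.random.rand(4, 3), np.random.rand(3, 5)
assert np.allclose(matmul_via_elementwise(a, b), a @ b)
```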

The crux of machine learning is matrix multiplication, which is a very fundamental operation. It would be damn hard to make a GPU that can do anything useful without being able to multiply matrices. "Only have access to the good stuff" is probably best accomplished by limiting access to GPUs at all.
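
To see why matmul is the crux: strip a neural network down and its forward pass is little more than matrix products with cheap elementwise operations in between. A toy sketch (all names and layer sizes are made up for illustration):

```python
# Toy sketch of the claim: a neural network's forward pass is essentially
# matrix multiplications plus cheap elementwise nonlinearities.
# All names and layer sizes here are made up for illustration.
import numpy as np

def mlp_forward(x, w1, w2):
    h = np.maximum(x @ w1, 0.0)  # matmul, then elementwise ReLU
    return h @ w2                # matmul again; that's nearly all there is

x = np.random.rand(8, 32)    # a batch of 8 inputs with 32 features each
w1 = np.random.rand(32, 64)  # hypothetical hidden layer
w2 = np.random.rand(64, 10)  # hypothetical output layer
print(mlp_forward(x, w1, w2).shape)  # (8, 10)
```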

"Only have access to the good stuff" is probably best accomplished by limiting access to GPUs at all.

This is already happening. The US government has already banned Nvidia from selling its high-end chips to customers in China. One important point is that the ban covers not only the current top-end chips but also anything Nvidia develops in the future with similar capabilities, so in a few years it will cover high-end gaming cards too, and it will gradually extend further down the range as time goes on.

That's currently confined to the geopolitical sphere, but it's easy to see it being rolled out to other customers that the people in charge don't want to have unfiltered access to modern AI tools. If the masses want powerful GPUs, they can use an online service like GeForce Now or DALL-E that restricts any sort of dangerous or undesirable behavior.

I’m not sure if you can prove too much here. There is nothing that floats totally free of all regulation (understood in a sufficiently broad sense). You can’t say “well, it’s technology, and technology is above such petty concerns”. Technology gets regulated all the time: nukes, guns, etc.