
Culture War Roundup for the week of January 2, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Somewhat related, but I was reading just yesterday about GPTZero, an AI model designed to determine whether a body of text was generated by an AI. Apparently it was built using GPT-2. It would be interesting to see if someone could develop something similar for identifying whether an image was AI-generated.

I was reading just yesterday about GPTZero, an AI model designed to determine whether a body of text was generated by an AI. Apparently it was built using GPT-2.

I now desperately want to see how GPTZero classifies typical corporate HR speak.

If you have a sample you want to test, it has an online portal here.
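GPTZero's exact internals aren't public, but the widely reported core idea is perplexity scoring: run the text through GPT-2 and treat suspiciously low perplexity (i.e., text the model finds very predictable) as a sign of machine generation. Here's a minimal sketch of that idea, assuming the HuggingFace transformers library; the threshold and model size are purely illustrative and not GPTZero's actual values.

```python
# Minimal sketch of perplexity-based AI-text detection, the general
# approach GPTZero reportedly uses. Assumes HuggingFace `transformers`;
# the threshold below is illustrative, not GPTZero's real cutoff.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower tends to mean more AI-like."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

sample = "This weekly roundup thread is intended for all culture war posts."
ppl = perplexity(sample)
# Illustrative cutoff only: human prose tends to be more "surprising"
# to the model (higher perplexity) than LLM output.
print(f"perplexity={ppl:.1f} ->", "likely AI" if ppl < 40 else "likely human")
```

Corporate HR speak would be a fun stress test for exactly this reason: it's so formulaic that its perplexity may land in the "AI" range even when a human wrote it.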

Not only do I want this, but I want to see art competitions spring up around such a model, where artists attempt to make art (while being observed to prevent cheating) that either maximizes or minimizes the AI's belief that their art is AI-generated. Having artists study and attempt to replicate AI art styles in order to fool the detector would be cool and a funny reversal of the current paradigm, and having artists learn techniques that are uniquely human would be cool too (and may be something they attempt even without such competitions, to reduce suspicion cast on their art).

All you need to do to win is to put a swastika in your art. No AI is allowed to create that.

One fix would be to use an unfiltered AI, or at least a discriminator trained on unfiltered AI output.

More likely, the organizers would give the contest artists a list of rules disqualifying any art with "unacceptable" content, which would be approximately the same as what the AI considers unacceptable. This could still leave exploits, though, like making something vaguely swastika-shaped that humans won't find offensive but the AI will, because it can't tell the difference.

That would be pretty neat, like a Turing test analog for art. I suspect any AI-art detector, much like adversarial attacks against AI classifiers, wouldn't be picking up anything visible to the human eye. It'd be some weird stuff like unnatural statistical patterns in the bits or something.
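For a sense of what "nothing visible to the human eye" means in practice, here's a rough sketch of the classic fast gradient sign method (FGSM) from the adversarial-examples literature; the `detector` model and `eps` budget are hypothetical stand-ins, not any real AI-art detector.

```python
# Sketch of the kind of adversarial attack mentioned above: FGSM nudges
# every pixel slightly in the direction that increases the classifier's
# loss. `detector` is a hypothetical torch module; `label` is the index
# of the class the image is currently assigned (e.g. "AI-generated").
import torch
import torch.nn.functional as F

def fgsm_attack(detector, image, label, eps=2 / 255):
    """Return `image` perturbed to shift the detector's belief while
    staying visually indistinguishable (per-pixel change <= eps)."""
    image = image.clone().detach().requires_grad_(True)
    logits = detector(image)
    loss = F.cross_entropy(logits, label)
    loss.backward()
    # Step each pixel by eps in the sign of the loss gradient.
    adv = image + eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

With eps around 2/255, the perturbed image is pixel-for-pixel almost identical to the original, yet classifiers are routinely fooled, which is exactly why a human competitor would struggle to exploit the same weakness by hand.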

Probably. But you could mitigate some of that with an adversarial learning environment where an AI trains against the classifier and learns to avoid the more obvious tells. And even if the detector is really good and humans can't truly fool it with a >50% belief, they can still compete with each other: a 20% beats a 16% beats an 8%, even if none truly fools the AI. And maybe for minimizing you would draw really smooth but curving lines that AIs have trouble with. (In either case, you might need a score for artistic beauty from human judges in addition to the classifier score, to prevent trivial entries like a blank canvas or a couple of boring straight lines.)
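That adversarial setup is essentially a GAN-style loop. Here's a toy sketch of one round of it, where `Generator` and `Detector` are hypothetical torch modules and the latent size of 128 is an arbitrary illustrative choice.

```python
# Rough sketch of the adversarial environment described above: a
# generator learns to make images the detector scores as human-made,
# while the detector keeps retraining on the generator's output.
# `generator` and `detector` are hypothetical torch modules.
import torch

def adversarial_round(generator, detector, human_batch, g_opt, d_opt):
    bce = torch.nn.BCEWithLogitsLoss()
    n = human_batch.size(0)

    # 1) Detector step: label generator output 1 ("AI"), human art 0.
    fake = generator(torch.randn(n, 128)).detach()
    d_loss = (bce(detector(fake), torch.ones(n, 1)) +
              bce(detector(human_batch), torch.zeros(n, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Generator step: try to make the detector output 0 ("human").
    fake = generator(torch.randn(n, 128))
    g_loss = bce(detector(fake), torch.zeros(n, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

A detector hardened this way should stop keying on the generator's most obvious tells, which is the mitigation being suggested, though each round of hardening also hands the other side a fresh target.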