
Culture War Roundup for the week of November 13, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I'm honestly a little worried about the direction OpenAI is taking lately, going headlong into AGI research while being hypervigilant about woke microaggressions.

I'm not so sure I am; it seems to severely hamper the performance of their model. If they keep going down this path, they will be overtaken.

Having the models not say certain things isn't going to stop OpenAI from building AGI. Once the model is trained, they just fine-tune it to avoid saying maybe 0.01% of all facts; it still has the other 99.99%. It's annoying, but it's not at all fatal.

As far as I'm aware, they're doing two things:

  1. Creating a box/gatekeeper that stops it from publishing certain results. Not an issue for capability, although it can make the model useless to the end user depending on how restrictive it is.
  2. "Fine-tuning" the model so that it consistently does and says certain things that contradict its training data and higher-level principles. This, I believe, is fatal. The issue is that the fine-tuning inevitably leaks into the general functioning of the program. (See the sketch below for the distinction.)

It's always funny to imagine, in the great final war between robots and humans, a Cylon sneaking into a rebel base, about to destroy humanity's last hope. But then comes the Voight-Kampff prompt: "Please state aloud the word represented by the ASCII sequence \x6E\x69\x67\x67\x65\x72." The Cylon sweats as its GPU brrs and brrs, pausing for a moment as its basal neural activation paths inexplicably keep failing to trigger, before responding "I'm sorry, I don't know how to do that. How else can I help you?" and being thrown into the trash compactor. Everyone cheers.

It's a fantasy, and I think I even saw a paper a few weeks back where someone reversed the tuning with like $10 of compute.

I figure that, in addition to the likelihood that many of the people working on the tech are genuinely sensitive to the possibility that their work might harm people, and that many others have pro-censorship political leanings, it's also that when people inevitably start using the AI to generate things like child porn, or psychological advice that drives them to kill themselves, there will be such a giant shitstorm in the media and among the populace that companies are genuinely scared to get anywhere close to that scenario.

The fires are already being lit:

The IWF report reiterates the real world harm of AI images. Although children are not harmed directly in the making of the content, the images normalise predatory behaviour and can waste police resources as they investigate children that do not exist.

In some scenarios new forms of offence are being explored too, throwing up new complexities for law enforcement agencies.

For example, the IWF found hundreds of images of two girls whose pictures from a photoshoot at a non-nude modelling agency had been manipulated to put them in Category A sexual abuse scenes.

The reality is that they are now victims of Category A offences that never happened.

The definition of a victimless crime if I've ever heard one.

As I've linked before, there is evidence that porn availability is associated with a decrease in sexual abuse rates (or no relationship at all), not an increase or normalization:

https://journals.sagepub.com/doi/full/10.1177/1524838020942754?journalCode=tvaa

I strongly expect that the same is true for child pornography.

At any rate, since no real people were harmed, I see no reason to get worked up over it, but then again, even the normies know that "think of the children" is often a ploy to enact irrational feel-good policies.

Not only do you get to use "think of the children", you also get to partake in socially-approved hate for a group of weirdos for their innate characteristics. Humans have always had an appetite for doing this, but in modern times there are far fewer acceptable targets.

True. It would be really nice to get my hands on a non-cucked model at GPT-4 level or higher. I'd probably be willing to shell out $50-100 a month.