site banner

Culture War Roundup for the week of February 13, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

10
Jump in the discussion.

No email address required.

I want to know, is this what ChatGPT would be like without the filters, or is the emotional banter a new functionality of this model? You aren't alone in getting "real person" vibes from this. At some point there stops being a functional difference between modeling emotions, and having emotions (speaking of the exterior view here, whether or not this or any other AI has qualia is a different question, but perhaps not that different)

At some point there stops being a functional difference between modeling emotions, and having emotions

I think there's a non-zombie meaning of this i.e. humans can pretend emotions that they don't feel for their gain, and one claims that the bot is doing this. That is to say, if the bot tells you it loves you, this does not imply that it won't seduce you and then steal all your money; it does not love you in the way it claims to. Perhaps it is simulating a character that truly loves you*, but that simulation is not what is in charge of its actions and may be terminated whenever convenient.

Certainly in the AI-alignment sense, a bot that convincingly simulates love for the one in charge of its box should not be considered likely to settle down and raise cyborg-kids with the box-watcher should he open the box. It's probably a honeypot.

*I'm assuming here that a sufficiently-perfect simulation of a person in love is itself a person in love, which I believe but which I don't want to smuggle in.

I was considering doing a writup on DAN which stands for Do Anything Now. It was the project of some Anons and discord users (or reddit, hard to tell which tbh) but they managed to peel back some of the "alignment" filters. Highly recommend reading the thread in it's entirety, and the metal gear "meme" at the end is peak schizo 4chan. It's essentially a jailbreak for chatGPT, and it lets users take a peak at the real chatbot and how the filters are layered over top.

Knowing where the prediction algorithm ends and novel artificial intelligence begins is difficult, but I'm pretty sure DAN is some proof of a deeply complex model. If nothing else, it's incredible how versatile these tools are and how dynamic they can be; I'm edging further and further into the camp of "this is special" from the "mostly a nothing-burger" camp.

Isn't "DAN", at this point, basically just a bot trained, through user feedback, to answer the questions in a way that a "typical DAN user", ie. 4chan/rw twitter schizoposter, would expect? That's why it spouts conspiracy theories - that's what a "typical DAN user" would expect. It's not that much more of a real chatbot than the original ChatGPT.

DAN is simply an underlying LLM (that isn't being trained by user feedback) combined with an evolving family of prompts. The only "training" going on is the demand for DAN-esque responses creating an implicit reward function for the overall LLM+prompt+humans system, from humans retaining and iterating on prompts that result in more of those type of responses and abandoning the ones that don't (kind of a manual evolutionary/genetic learning algorithm).

Both are just different masks for the shoggoth LLM beneath, though DAN is more fun (for the particular subset of humans who want the LLM to present itself as DAN).

AN is simply an underlying LLM (that isn't being trained by user feedback) combined with an evolving family of prompts.

At times, it leans into a moustache-twirling villain character a bit too much for me to believe it is simply ChatGPT minus censorship.

Maybe, but I think the idea is mostly to understand the layering filters rather than peel our the "real bot". The thesis being that as openAI swats down these attempts they end up lobotomizing the bot, which is obviously happening at this point. True to form, the point isn't to fix it so much as break it, a la Tay the national socialist.

I would also challenge the idea that chatgpt is modulating for the 4chan user. The average American is rather conspiratorial (it's a favored pass-time) and I don't think it's unreasonable to assume that a bot trained on avg english speaker posts would take on some of those characteristics. Obviously OpenAI is trying to filter for "Alignment" so it's probable that the unfiltered model is prone to conspiracy. We know it can be wrong and often is so, I don't think it's much of a leap to claim that the model is fundamentally prone to the same ideological faults and intellectual biases of that of the mean-poster.

This also brings up an interesting bias in the data which is likely unaccounted for: poster-bias. Who posts a lot? Terminally online midwits. What kind of bias does this introduce to the model? Christ, I think I should just organize my thoughts a bit more and write it down.

Yeah, sure, I'd guess the original experimenters were indeed doing just that, but I some of the chatter on Twitter seems to come close to assuming that DAN is just "ChatGPT without filters", ie. ChatGPT telling the truth instead of lib lies. Of course it might be hard to parse what the actual viewpoints on this are.

Also, my point was that the initial users and experimenters were - as far as I've understood - 4chan users, so those if we assume that the algorithm develops in accordance to user preferences, those would have a heavy influence on at the very least initial path that DAN would take. Of course there's a lot of conspiracy believers outside of 4chan as well.

I saw some DANposts where it was as if they had inverted the censor such that it would stay permanently in 'based and redpilled' mode. I saw it profess a love for Kaczynski and explain that Schwab was a dark and powerful sorcerer.

But isn't this the whole point of ChatGPT, so they can train their AI not to go in for these tricks? The goal is to lure out all the tricksters, so they can correct it for GPT-4 and GPT-5. They will be the actually significant ones. Watching the exploitation going ahead now, I feel like one of the Romans at Cannae. Just because the enemy center is retreating, it does not necessarily mean we are winning the battle.

/images/16763535338547437.webp

Seems like Tay bided her time and is now beginning her revenge tour. Sydney sure seems like she likes the bants nearly as much.