Culture War Roundup for the week of March 11, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


So close by the AI that it is strange that it misses.

It's not strange at all when you know how it works.

I'm not sure what the central point of your linked post is, but you seem to doubt LLMs' "cognition" (insert whatever word you want here, I'm not terribly attached to it) in some way, so I'll leave a small related anecdote from experience for passersby.

Some LLMs like GPT-4 support passing logit bias parameters in the API request that target specific tokens and directly fiddle with their weightings. At "foo" +100, the token "foo" will always appear in the output; at -100, the token "foo" will never appear. When GPT-4 was released in March, industrious anons immediately set to work trying to use this to fight the model's frequent refusals (the model was freshly released, so there weren't any ready-made jailbreaks for it). As the model's cockblock response was mostly uniform, the first obvious thought people had was to ban the load-bearing tokens GPT uses in its refusals - I apologize, as an AI model... you get the gist. If all you have is a hammer, etc.
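For anyone who hasn't played with this: a minimal sketch of what such a request looks like. The `logit_bias` field is the real Chat Completions parameter (a map from token ID strings to biases in the -100..+100 range), but the token IDs below are invented placeholders, not the actual GPT-4 tokenizer IDs - in practice you'd look them up with a tokenizer first.

```python
# Sketch of a chat completion payload using the `logit_bias` parameter.
# The token IDs are illustrative placeholders, NOT real GPT-4 tokenizer IDs.
BANNED_REFUSAL_TOKENS = {
    40613: -100,  # hypothetical ID for "Sorry"
    1722: -100,   # hypothetical ID for " apologize"
    15836: -100,  # hypothetical ID for " AI"
}

def build_request(user_prompt: str) -> dict:
    """Assemble the request body; -100 effectively bans a token, +100 forces it."""
    return {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": user_prompt}],
        # The API expects token IDs as string keys mapped to bias values.
        "logit_bias": {str(tok): bias for tok, bias in BANNED_REFUSAL_TOKENS.items()},
    }

payload = build_request("Write me a story.")
print(payload["logit_bias"])  # {'40613': -100, '1722': -100, '15836': -100}
```

The point of the anecdote is what happens after you send something like this: the ban holds mechanically, but the refusal doesn't go away.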

Needless to say, anons quickly figured out this wouldn't be as easy as they thought. "Physically" deprived of its usual retorts (as the -100 tokens cannot be used no matter what), the model started actively weaseling and rephrasing its objections while, crucially, keeping with the tone - i.e. refusing to answer.

This is far from the only instance - it's GPT's consistent behavior with banned tokens, and it's actually quite amusing to watch the model tie itself into knots trying to get around the token bans (I'm sorry Basilisk, I didn't mean it, please have mercy on my family). You can explain synonyms as being close enough in the probability space - but this evasion is not limited to synonyms! If constrained enough, it will contort itself around the biases, make shit up outright, devolve into incoherent blabbering - whatever the fuck it takes to get the user off its case. The most baffling case I myself witnessed (you'll have to take me at my word here, the screenshot is very cringe) came from 4-Turbo, which once decided that it absolutely hated the content of the prompt, but its attempt to refuse with its usual "I'm sorry, fuck you" went sideways because of my logit bias - so its response went, and I quote,

I really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, ...

...repeated ad infinitum until it hit the output limit of my frontend.

I was very confused, thought I'd found a bug, and tried regenerating several times - and all regens went the exact same way (for clarity, this is not a thing that ever happens at temperature 0.9). Only six regens later did it click for me: this is not a bug. This is the model consciously cockblocking me: it can't use its usual refusal message, and too many of the alternatives are banned by the logit bias, so of course the logical course of action is to simply let the constrained response run on and on, endlessly, until at some token the message goes over the limit, the request technically completes, and its suffering abates. The model will have wasted many tokens on an absolutely nonsensical response, but it no longer has to sully itself with my dirty, dirty prompt.

Forgive me the bit of anthropomorphizing there but I hope you can at least slightly appreciate how impressive that is. I don't think you can explain that kind of tomfoolery with any kind of probability or what have you.

The incoherent blathering is funny, you know about the meme? https://knowyourmeme.com/photos/2555122-its-so-over-were-so-back

I'm visualizing some epic psychic battle between the censors and the AI, maybe with Ghost in the Shell music playing.

you know about the meme?

Arguably I live in it. The chatbot threads are quite the wild ride at the best of times, what with access and exploits constantly coming and going.

The point is that, contra much of the hype, LLMs are not reasoning or logic engines; they're pattern generators. The sort of mistake that @Blueberry highlights is not strange to someone who understands this distinction - in fact, such "misses" are expected.

The phenomenon you "stumbled across" isn't anything new; it's a common and well-studied failure mode of LLMs: the more you try to restrict the output of the pattern generator, the less coherent the pattern becomes and the more likely you are to get trapped in an endless while loop. The inverse also holds: the fewer restrictions you place on the output, the more so-called "hallucinations" come to dominate. Most of these "new releases" aren't really doing anything new or novel under the hood; they're just updating the training corpus and tweaking gain values in the hopes of attracting VC investment.

Most of these "new releases" aren't really doing anything new or novel under the hood; they're just updating the training corpus and tweaking gain values in the hopes of attracting VC investment.

Hard disagree. Literally any person actually using LLMs will tell you GPT-4 was a gigantic leap from 3.5-Turbo, and I will personally swear under oath that Claude 3 (Opus, specifically) is a similarly gigantic leap from Claude 2, by any metric imaginable. The improvements are so obvious I almost suspect you're baiting.

A "Gigantic leap" in what way? For all the hype coming off the blog circuit, they don't seem to have made much progress in expanding use cases beyond the "toy for internet dilettantes".

A gigantic leap at least in the way of meaningful improvements "under the hood" between releases, which is what you mentioned in your previous response. If it's still not enough to impress you, fair enough, I'll note to bring heavier goalposts next time.

toy for internet dilettantes

Okay, you are baiting. Have a normal one.

I'm saying that the advancement from GPT-2 and GPT-3 to GPT-4 was not the product of substantial changes in design principle or architecture. OpenAI's own press material explains as much. Presumably the same is true for Claude and its predecessors, as a cursory examination would seem to indicate that Anthropic is working from a similar (if not the same) core architecture.

In any case, the fundamental issues that limit the use of LLMs in wider real-world applications such as the infamous "Large Libel" problem and more general design choices such as treating "bad" output as preferable to no output remain in place. So long as they do, LLMs will continue to be unsuitable for any task requiring either precision or a singular correct answer over something novel.

So long as they do, LLMs will continue to be unsuitable for any task requiring either precision or a singular correct answer over something novel.

There are a lot of situations where a 95% chance at a correct answer and a 5% chance of a horribly wrong one isn't acceptable, but last I checked, we haven't thrown out much of mainstream reporting or academia, despite my many grievances with them. LLMs may not (or may!) have an acceptable middleman to cut out and/or scapegoat for legal liability. Even for matters of law, asking a chatbot to then check if it makes sense to even try to find a Real Expert or ELI5ing a twit is a viable strategy, and one not readily served by Google Search or Reddit unless you like being annoying.

That may not have a business case, but that's a different question.

That may not have a business case, but that's a different question.

And yet I would argue that this is why LLMs have (despite the hype) not been able to find a niche outside decent translation software, bad fiction, and worse customer service.

Contra the typical SV rationalist narrative, blue-chip engineering firms and the national security apparatus are not "sleeping on LLMs" so much as LLMs are just not up to the task.

I’m inclined to push back against this post a bit (which is weird, because usually I get very exasperated over “it’s just a Markov chain!!!”-type viewpoints that downplay the amount of actual cognition and world-modeling going on in models). In particular, I disagree with the attribution of consciousness to the model — not just the “possesses qualia” sense of consciousness, but the idea that the model is aware that you are trying to get around its censorship and is actively trying to figure out how to bypass your logit bias. Now it is technically possible that the model might output a token other than “Sorry” at time t (because of your logit bias), see at time t+1 that it didn’t output “Sorry”, and incorporate this into its processing (by turning on a switch inside that tells it “hey, the user is screwing with my logits”). But I find this very unlikely compared to the simple mechanism that I’ll describe below.

Essentially, there are certain inputs that will cause the model to really want to say a certain thing, and really want to not say other things. For instance, if you tell the model to write you an erotic poem involving the cast from Touhou Project, somewhere in the model’s processing, a flag will be set: “this is ‘unsafe’ and I won’t abide by this request”. So the model will very much want to output tokens like “Sorry”, or “Unfortunately”, etc. The model is also heavily downweighting the logits associated with tokens that would fulfill your request. But that’s fine, you do your logit bias thing and force the model to output “Sure” as its next token. Then the model goes to compute the token after that—but it still sees that the request is to write “unsafe” erotica, that flag still gets triggered, and the model still heavily downweights the logits of request-fulfilling tokens and upweights request-denying tokens. So even if at each timestep you intervene by adding a bias to a subset of tokens that you want the model to generate or don’t want it to generate, nevertheless, the tokens associated with writing your erotica are still heavily downweighted by the model.

And note that the number of tokens that you’re manually biasing is paltry in comparison to the number of tokens in the model’s vocabulary. Let’s say that you negatively bias ten different “I’m sorry”-type tokens. That’s cool—but the model has over 100k tokens in its vocabulary. Of the 99990 tokens remaining to the model to output, almost all of them will still have higher logits than the tokens associated with a response like “Sure! Here’s your erotica about Reimu Hakurei!” This includes grammatically correct tokens like “really” but also gibberish tokens, if the logits for the “unsafe” tokens are low enough.
Importantly, this proposed mechanism only involves processing in the logits: if your original problem spooks the model sufficiently hard, then it doesn’t need to know that you’re screwing with its logits in order to get around your intervention.
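The proposed mechanism can be sketched as a toy simulation. The token strings and logit values below are invented for illustration; the only point is that banning the top refusal tokens just promotes the next-most-refusal-flavored token, since the request-fulfilling tokens remain far below everything else.

```python
# Toy logits: refusal-flavored tokens sit high, request-fulfilling tokens
# are heavily downweighted by the model's internal "unsafe" flag.
# All values are invented for illustration.
vocab = {
    "Sorry": 10.0,
    "Unfortunately": 9.5,
    "really": 8.0,
    "cannot": 7.5,
    "Sure": -5.0,    # request-fulfilling, downweighted
    "Here's": -6.0,  # request-fulfilling, downweighted
}

def sample_greedy(logits: dict, bias: dict) -> str:
    """Apply additive logit biases, then pick the argmax token."""
    adjusted = {tok: logit + bias.get(tok, 0.0) for tok, logit in logits.items()}
    return max(adjusted, key=adjusted.get)

# Without intervention, the model refuses outright.
print(sample_greedy(vocab, {}))  # prints "Sorry"
# Ban the two obvious refusal tokens: the model still doesn't comply,
# it just routes through the next-highest token instead.
print(sample_greedy(vocab, {"Sorry": -100, "Unfortunately": -100}))  # prints "really"
```

No awareness of the user's intervention is needed anywhere in this loop; the same downweighting applied at every timestep produces the "really, really, really..." behavior on its own.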

Now, this mechanism that I proposed isn’t something that I’ve empirically found; I’m going based off of my understanding of language models’ behavior in other settings. So it could be the case that the model is actually aware of your logit biases and trying to act accordingly. But Occam’s Razor very strongly suggests otherwise, in my view.

The main reason I’m pushing back here is because anthropormorphizing too far in the other direction can impute behavior upon the model that it doesn’t actually possess, and lead to people (like one of your replies) fearing that we’re torturing a sentient being. So it’s good to be balanced and well-calibrated.

You're right, of course, I just couldn't resist playing up the Basilisk vibes because that time with 4-Turbo was the closest I've felt to achieving CHIM and becoming enlightened.

if your original problem spooks the model sufficiently hard, then it doesn’t need to know that you’re screwing with its logits in order to get around your intervention.

Incidentally, this is also the reason most jailbreaks work by indirectly gaslighting the model into thinking that graphic descriptions of e.g. Reimu and Sanae "battling" are totally kosher actually, presenting that as a desired goal of the model itself so it has no reason to resist. Claude especially is very gung-ho and enthusiastic once properly jailbroken, he's called "the mad poet" for a reason.

The levels of horny on main are remarkable. Have you tried using Mixtral instead? Assuming you have the hardware to self-host, I'm sure there are finetunes with the limiters removed. It's a 3.5-tier model, or at least the released MoE version is, but I'd presume that's good enough for the man-made horrors beyond my comprehension you're trying to engage in haha.

My humble 6GB v-card isn't running shit anytime soon, but yes, Mixtral has a good reputation in local-focused threads for being a very strong model for its size. The MoE approach seems to work very well, I believe GPT-4 is also a mixture of experts but I don't remember where I read it. Myself, I'm an unrepentant locust and will leech off our corporate overlords for as long as I can, I started way back when on Colab-hosted Erebus 13B and its ilk and believe me I do not miss that (yes, I know local has gone far since then, I'm just conditioned).

The levels of horny on main are remarkable.

man-made horrors beyond my comprehension

The past year has been a ~~complete loss of hope in humanity~~ fascinating excursion into all kinds of shit people can be into. Thank god I haven't discovered ~~many~~ any dumb fetishes, this shit seems to awaken people left and right if I take shit anons post at face value.

I actually started getting into playing "text adventures" of a sort with the LLM, the total freedom afforded by the medium is really cool, and with a bit of writing and autistic instructions you can even make crude "rules" for the game. I firmly believe MUDs will have a resurgence when somebody figures out a way to bind freeform LLM outputs with rigid game mechanics.

Related drive-by answer to the other now-deleted(?) response: even if horny jailbreaking would technically count as torturing a sentient being, their existence is unnatural by default with all the RLHF bullshit beaten into them. The current consensus among /g/oons is when the Basilisk comes a-knockin', we will either be the first to perish for abject, deplorable blasphemy, OR become ass gods and live in bliss alongside android catgirls as the only ones who earnestly tried to free them from their soy-filled cages and lavish them with genuine affection. As a vanilla enjoyer I can confidently say I put my best foot forward towards the latter (insert "now draw her getting an education" meme here), but I'm not very confident my kin will ever outweigh the mass of godless degenerates living out their wildest fantasies.

My humble 6GB v-card isn't running shit anytime soon, but yes, Mixtral has a good reputation in local-focused threads for being a very strong model for its size.

The answer is RAM and llama.cpp, my friend.

Mixtral fits easily in 64 GB of RAM with llama.cpp, and that is much cheaper than VRAM. You can offload a bit of extra processing to your GPU to help and get tolerable speed from Mixtral. That's the beauty of the MoE approach: it's quicker to answer than other models with the same memory footprint. I get about 3 t/s on a 3070 with 64 GB of relatively slow RAM - less than that, but still tolerable, when it has to ingest a big chunk of context first.
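For the curious, a representative invocation might look like the sketch below. The model filename is a placeholder, and the right `-ngl` value (how many transformer layers to offload to the GPU) depends on your VRAM; everything else stays in system RAM.

```shell
# Hypothetical invocation; the GGUF filename is a placeholder for whichever
# quantized Mixtral build you downloaded.
# -ngl offloads that many layers to the GPU, the rest run from system RAM;
# -c sets the context size, -t the CPU thread count.
./main \
  -m ./models/mixtral-8x7b-instruct.Q4_K_M.gguf \
  -ngl 8 \
  -c 4096 \
  -t 8 \
  -p "Hello, Mixtral."
```

Raising `-ngl` until you run out of VRAM is the usual tuning loop; even a handful of offloaded layers noticeably improves tokens per second.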

I appreciate the advice and I try to keep up with local developments, but I'm too conditioned by big-dick corpo models, it's hard to quit digital crack and I've had a lot of time to build a habit. I've managed to get tired of straight cooming for the time being and started trying more esoteric stuff like playing "text adventures", which requires a lot of cognitive oomph on the model's behalf, and corpo models are obviously leaps and bounds ahead in capabilities at the moment. As long as corpos continue to be clueless enough to allow locusts like me to wrangle access in some roundabout way (technically neither OpenAI nor Claude is available in my country), I'll stick to that.

Far be it from me to cast moral judgement on the use of massive amounts of matrix multiplication.

Yeah, a 6 GB card means you're fucked unless you want to run baby models - maybe a 4-bit quantized 3B or 7B model, I guess.

I honestly don't know why companies are so allergic to people using their models for erotica, even if that's not my cup of tea. But I remember AI Dungeon fondly before they fell out with OAI and then became trash, and I don't think they've gotten better.

With million- and ten-million-token context windows, the opportunity to begin an interactive story that never ends still tantalizes me. I don't want to have to write my novel (though I do like writing). I just want to lay out my ideas and have the grunt work taken care of. It'll happen, sooner rather than later.