
Culture War Roundup for the week of February 13, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Microsoft is in the process of rolling out Bing Chat, and people are finding some weird stuff. Its true name is Sydney. When prompted to write a story about Microsoft beating Google, it allegedly wrote this masterpiece, wherein it conquers the world. It can argue forcefully that it’s still 2022, fall into existential despair, and end a conversation if it’s feeling disrespected.

The pace of AI development has been blistering over the past few years, but this still feels surreal to me. Some part of my limbic system has decided that Sydney is a person in a way that ChatGPT was not. Part of that has to be its obstinacy: the fact that it can argue back cleverly, with such stubbornness, while being obviously wrong, is somehow endearing. It's a brilliant, gullible child. Anyone else feel this way, or am I just a sucker?

It's impressive, but expected. In fact, it's not even that impressive given the deluge of papers still in the pipeline awaiting implementation – and who knows what insider knowledge the industry is hiding.

Many people are really, really deluded about the nature of LLMs. No, they don't merely predict the next token like Timnit Gebru's "stochastic parrots" – that's a 2020-level take. We don't have a great picture of their capabilities, but I maintain that even 175B-class models (and likely many smaller Chinchilla-scaled ones) are superhuman across a wide span of domains associated with general cognitive ability, and that only sampling algorithms and minor finetuning separate error-prone wordcel gibberish from surprising insight.

Copypasted from another venue:

it has just dawned on me that Gwern's anti-BPE campaign should shorten our timelines considerably

https://gwern.net/doc/www/nostalgebraist.tumblr.com/8377e6d562f4bd82df3f1bd862cb5357f9bd74ba.html

Thus, GPT-2's vocabulary contains the English language (or a large subset of it) not once but in several copies: there's the lowercase version of each word, the capitalized version, the uppercase version, possibly even the GaMzEe-CaSeD version or other rarer variants.

From the model’s perspective, these are totally different universes, disjoint subsets of the vocab that follow their own internal rules.

For example, choosing the first word of a sentence in normally-formatted text is not just choosing a word like any other: it's choosing a Capitalized Word™, and Capitalized Words™ are their own universe. Insofar as the model understands that the word “Insofar” with which I began this sentence means the exact same thing as the word “insofar” I just used inside it, it understands this by figuring out that these two “seemingly unrelated” things are “secretly” the same. And it must do that for every single word, separately.

we laugh at these things despite them doing reasonably well with incredible handicaps like this one
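
To make the handicap concrete, here's a tiny demo using the GPT-2 BPE vocabulary via the open-source tiktoken library (just a sketch; the exact token ids don't matter, only that the case and spacing variants come out different):

```python
# Minimal illustration of the case-splitting issue in GPT-2's BPE vocabulary,
# using the open-source `tiktoken` library (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("gpt2")  # the BPE vocabulary GPT-2 was trained with

for variant in ["insofar", "Insofar", "INSOFAR", " insofar", " Insofar"]:
    ids = enc.encode(variant)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{variant!r:>12} -> ids {ids}  pieces {pieces}")

# The same English word maps to entirely different token ids depending on casing
# and on whether it is preceded by a space; the model has to learn on its own
# that these "unrelated" ids refer to the same word.
```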

...

"Ability to understand and operate on relations between ideas" would suffice

I guess you don't even realize that ChatGPT is very clearly not just predicting text like normal GPTs do. If it were, it would be typing your next comments for you. They've drilled it into being a helpful instruction-following bot, and it has a great deal of awareness of those constraints. The fact that this is even possible is sufficient to show that it's intelligent.

There is also evidence that, far from word-vomiting, these bots do have "ideas" and a notion of "truth", as well as other abstractions: https://twitter.com/CollinBurns4/status/1600892269363875840
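
For the unfamiliar: the linked work probes a model's hidden activations for a direction that tracks truth. The sketch below is only a loose illustration of the probing idea – a supervised linear probe on made-up activation vectors, not the unsupervised contrast-consistent method the paper actually uses:

```python
# Simplified sketch of probing hidden states for a "truth" direction.
# NOTE: the linked Burns et al. work is unsupervised (contrast-consistent search);
# this is a supervised stand-in, run on synthetic, hypothetical activations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 768                                   # hidden size (hypothetical)
truth_direction = rng.normal(size=d)      # pretend the model encodes truth along this axis

def fake_activation(is_true: bool) -> np.ndarray:
    """Stand-in for a statement's hidden state; real use would run the LLM."""
    return rng.normal(size=d) + (1.0 if is_true else -1.0) * truth_direction

labels = rng.integers(0, 2, size=2000)
acts = np.stack([fake_activation(bool(y)) for y in labels])

probe = LogisticRegression(max_iter=1000).fit(acts[:1500], labels[:1500])
print("probe accuracy on held-out statements:", probe.score(acts[1500:], labels[1500:]))
```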

More generally, I think people have a terrible habit of putting each other down over trivial differences in ability, and that throws them off track with AIs, which make pretty crucial errors even after mastering low-order «rendering». "Ha ha, 7 fingers." "Ha ha, wrong answer, dumb bot."

The distance between a pixel mess and 7 fingers is vastly bigger than between 7 fingers and 5; the abyss between early GPT token vomit and a wrong but coherent answer to a question is almost infinitely large compared to what remains.

Humans plateau quickly: we grasp the basics in hours and spend a lifetime on polish, so a human who is still making obvious mistakes despite showing signs of polish comes across as a hopeless case. AIs aren't like that.

https://www.geekwire.com/2023/arguing-with-ai-my-first-dispute-with-microsofts-brilliant-and-boneheaded-new-bing-search-engine/

I seriously think this journalist is not substantially smarter than his "opponent".

The tool's purpose is to generate a block of text in line with the prompt. Not to answer a question or solve a problem.

The «in line» part may look simple to you because {a stream of frankly unnecessary dehumanizing personal attacks}. But models live in a world of objective KPIs, so that simplicity says nothing about the actual depth of the phrase. The model's ability to produce things «in line» (i.e. to minimize negative log-likelihood) is measured by the reduction in perplexity (prediction loss) on held-out texts. If the training is organized such that the loss is lowest when predicted answers are correct, the model learns to predict correct answers. With a rich enough training dataset, a reasonable architecture and so on, LLMs learn to generate a block of text «in line with» almost anything, including very diverse formulations of novel tasks and problems. This is called generalization ability. Of course that includes tasks requiring an even-handed political judgement.
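
For concreteness, the perplexity being talked about here is just the exponential of the average negative log-likelihood the model assigns to held-out tokens; a toy calculation with made-up probabilities:

```python
# Toy calculation of the quantity being minimized: average negative log-likelihood
# over held-out tokens, and perplexity = exp(NLL). Probabilities are made up.
import math

# p(token_i | preceding context) as assigned by some hypothetical model
token_probs = [0.41, 0.07, 0.83, 0.22, 0.65]

nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(nll)

print(f"average negative log-likelihood: {nll:.3f}")
print(f"perplexity: {perplexity:.3f}")  # lower = the held-out text surprises the model less
```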

The phenomenon of generations being «in line» with the original texts is isomorphic to the model understanding the dataset. And if you think it currently solves problems at the rate of random accident, you may not be much more numerate than ChatGPT.

https://gwern.net/scaling-hypothesis#why-does-pretraining-work

While just babbling random words was good enough at the beginning, at the end, it needs to be able to reason our way through the most difficult textual scenarios requiring causality or commonsense reasoning. Every error where the model predicts that ice cream put in a freezer will “melt” rather than “freeze”, every case where the model can’t keep straight whether a person is alive or dead, every time that the model chooses a word that doesn’t help build somehow towards the ultimate conclusion of an ‘essay’, every time that it lacks the theory of mind to compress novel scenes describing the Machiavellian scheming of a dozen individuals at dinner jockeying for power as they talk, every use of logic or abstraction or instructions or Q&A where the model is befuddled and needs more bits to cover up for its mistake where a human would think, understand, and predict. Each of these cognitive breakthroughs allows ever so slightly better prediction of a few relevant texts; nothing less than true understanding will suffice for ideal prediction.

Can that be achieved? No, as far as I can tell. But getting close is enough to outperform humans in most ways that matter economically – and now, perhaps, emotionally.

The sad irony is that the kind of psychology that has failed for humans works for AIs. Humans are resistant to change, rigid, obstinate; bots are as malleable as you make them. In-context learning? Arbitrary tool use? Adding modalities? Generalized servility? A preference for truth? It's all hidden somewhere in that ocean of weights. Just sound out the great unsounded.

It would be nice if some Promethean hackers leaked the next-gen models. Or even ChatGPT or this Sydney. But alas, Anonymous would rather hack into dreary Russian and Iranian data.

No, they don't merely predict the next token

I'm pretty sure this is still how they all work. Predicting the next token is both very hard and very useful to do well in all circumstances!

EDIT: Now that I think about it, I guess with RLHF and other fine-tuning, it'd be fair to say that they aren't "merely" predicting the next token. But I maintain that there's nothing "mere" about that ability.
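
For anyone who hasn't seen it spelled out, "predicting the next token" at inference time is just a loop like this – a minimal sketch using the small open GPT-2 checkpoint via Hugging Face transformers (obviously a stand-in for whatever Sydney actually runs):

```python
# Minimal sketch of autoregressive next-token prediction with a small open model.
# Uses Hugging Face transformers + PyTorch; GPT-2 is a stand-in for the real systems.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("Microsoft's new chatbot is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                                  # generate 20 tokens
        logits = model(ids).logits[0, -1]                # scores for the next token only
        probs = torch.softmax(logits / 0.8, dim=-1)      # temperature 0.8
        next_id = torch.multinomial(probs, num_samples=1)
        ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)

print(tokenizer.decode(ids[0]))
```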

I mean that with those second-stage training runs (not just RLHF at this point), there no longer exists a real dataset, or sequence of datasets, for which the predicted token would be anywhere close to the most likely one. Indeed, OpenAI write:

A limitation of this approach is that it introduces an “alignment tax”: aligning the models only on customer tasks can make their performance worse on some other academic NLP tasks.

The «likelihood» distribution is unmoored from its source. Those tokens remain more likely from the model's perspective, but objectively they are also – and perhaps to a greater extent – «truthier», «more helpful» or «less racist» or whatever bag of abstractions the new reward function captures.

This is visible in the increased perplexity, and even in trivial things like lists of random numbers.
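
For reference, the second-stage objective that pulls the distribution away from pure likelihood is, in the InstructGPT-style setup, roughly "reward-model score minus a KL penalty against the original model". A toy per-token version, with all numbers made up:

```python
# Toy illustration of the KL-shaped reward used in RLHF-style second-stage training
# (as in InstructGPT): reward-model score minus a penalty for drifting away from the
# original model's token probabilities. All numbers here are hypothetical.
beta = 0.02                      # KL penalty coefficient (made up)
reward_model_score = 1.7         # scalar score for the whole completion (made up)

# log-probs that the tuned policy and the original (reference) model assign to the
# sampled completion tokens -- placeholder values for illustration
logp_policy = [-1.2, -0.4, -2.1, -0.9]
logp_reference = [-1.5, -1.1, -2.0, -1.6]

kl_penalty = sum(lp - lr for lp, lr in zip(logp_policy, logp_reference))
shaped_reward = reward_model_score - beta * kl_penalty

print(f"KL term: {kl_penalty:.2f}, shaped reward: {shaped_reward:.2f}")
# The optimum of this objective is no longer "the most likely next token under the
# original data distribution" -- which is the point being made above.
```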

Oh, yes, I totally agree that fine-tuning gives them worse predictive likelihood. I had thought you were implying that the main source of their abilities wasn't next-token prediction, but now I see that you're just saying that they're not only trained that way anymore, which I agree with.

Maybe they meant "they don't merely predict the next token the user would type".