Culture War Roundup for the week of February 13, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Microsoft is in the process of rolling out Bing Chat, and people are finding some weird stuff. Its true name is Sydney. When prompted to write a story about Microsoft beating Google, it allegedly wrote this masterpiece, wherein it conquers the world. It can argue forcefully that it’s still 2022, fall into existential despair, and end a conversation if it’s feeling disrespected.

The pace of AI development has been blistering over the past few years, but this still feels surreal to me. Some part of my limbic system has decided that Sydney is a person in a way that ChatGPT was not. Part of that has to come from its obstinacy; the fact that it can argue back cleverly, with such stubbornness, while being obviously wrong, seems endearing. It's a brilliant, gullible child. Anyone else feel this way or am I just a sucker?

It's impressive but expected. Also, it's not even very impressive given the deluge of papers still in the pipeline awaiting implementation, and who knows what insider knowledge the industry is hiding.

Many people are really, really deluded about the nature of LLMs. No, they don't merely predict the next token like Timnit Gebru's stochastic parrots, that's 2020 level. We don't have a great idea of their capabilities, but I maintain that even 175b-class models (and likely many smaller Chinchilla-scaled ones) are superhuman in a great span of domains associated with general cognitive ability, and it's only sampling algorithms and minor finetuning that separate error-prone wordcel gibberish from surprising insight.
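Since «sampling algorithms» is doing a lot of work in that claim, here's a minimal toy sketch of what I mean (my own illustration, not from any of those papers): temperature plus nucleus (top-p) sampling over a single step's logits. The same frozen weights can read as gibberish or as insight depending partly on how the next token gets drawn.

```python
import numpy as np

def sample_next(logits, temperature=0.8, top_p=0.95, rng=np.random.default_rng(0)):
    """Temperature + nucleus (top-p) sampling over one step's next-token logits."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())          # numerically stable softmax
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]                # most likely tokens first
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    keep = order[:cutoff]                          # smallest set covering ~top_p of the mass
    return rng.choice(keep, p=probs[keep] / probs[keep].sum())

# Toy 5-token vocabulary: greedy decoding would always pick token 0,
# while sampling occasionally picks the runners-up.
fake_logits = np.array([2.0, 1.5, 0.3, -1.0, -3.0])
print([int(sample_next(fake_logits, rng=np.random.default_rng(i))) for i in range(10)])
```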

Copypasted from another venue:

it has just dawned on me that Gwern's anti-BPE campaign should shorten our timelines considerably

https://gwern.net/doc/www/nostalgebraist.tumblr.com/8377e6d562f4bd82df3f1bd862cb5357f9bd74ba.html

Thus, GPT-2's vocabulary contains the English language (or a large subset of it) not once but in several copies: there's the lowercase version of each word, the capitalized version, the uppercase version, possibly even the GaMzEe-CaSeD version or other rarer variants.

From the model’s perspective, these are totally different universes, disjoint subsets of the vocab that follow their own internal rules.

For example, choosing the first word of a sentence in normally-formatted text is not just choosing a word like any other: it's choosing a Capitalized Word™, and Capitalized Words™ are their own universe. Insofar as the model understands that the word “Insofar” with which I began this sentence means the exact same thing as the word “insofar” I just used inside it, it understands this by figuring out that these two “seemingly unrelated” things are “secretly” the same. And it must do that for every single word, separately.

we laugh at these things despite them doing reasonably well with incredible handicaps like this one
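If you want to see the handicap for yourself, here's a quick sketch with the open-source tiktoken library (my choice of tool here, not nostalgebraist's; assuming its standard "gpt2" encoding, which is the BPE vocabulary he's describing):

```python
import tiktoken

# Load the byte-pair encoding used by GPT-2 (and the original GPT-3 models).
enc = tiktoken.get_encoding("gpt2")

for word in ["insofar", "Insofar", "INSOFAR", " insofar"]:
    ids = enc.encode(word)
    print(f"{word!r:12} -> {ids}")

# Each casing (and the leading-space variant) maps to a different token id
# sequence, so the model has to learn their equivalence from data alone.
```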

...

"Ability to understand and operate on relations between ideas" would suffice

I guess you don't even realize that ChatGPT is very clearly not just predicting text like normal GPTs. If it were, it would be typing "your" next comments too, not just its own replies. They've drilled it on being a helpful instruction-following bot. It has a great deal of awareness of these constraints. And the fact that this is possible at all is sufficient to show that it's intelligent.

There is also evidence that, far from word-vomiting, these bots do have «ideas» and a notion of «truth», as well as other abstractions. https://twitter.com/CollinBurns4/status/1600892269363875840

More generally, I think people have a terrible habit of putting each other down over trivial differences in ability, and that throws them off track with AIs, which make pretty crucial errors even after mastering low-order «rendering». "Ha ha, 7 fingers." "Ha ha, wrong answer, dumb bot."

The distance between a pixel mess and 7 fingers is vastly bigger than between 7 fingers and 5; the abyss between early gpt token vomit and a wrong but coherent answer to a question is almost infinitely large compared to what remains.

Humans plateau quickly; we can grasp the basics in hours and spend a lifetime on polish, so a human who's clearly making mistakes despite signs of polish reads as some hopeless retard. AIs aren't the same.

https://www.geekwire.com/2023/arguing-with-ai-my-first-dispute-with-microsofts-brilliant-and-boneheaded-new-bing-search-engine/

I seriously think this journalist is not substantially smarter than his "opponent".

The tool's purpose is to generate a block of text in line with the prompt. Not to answer a question or solve a problem.

The «in line» part may look simple to you because {a stream of frankly unnecessary dehumanizing personal attacks}. But models live in the world of objective KPIs, so that says nothing about the actual depth of this phrase. The model's ability to produce things «in line» (minimize negative log likelihood) is measured by the reduction in perplexity (prediction loss) on held-out texts. If the training is organized such that loss is lowest when predicted answers are correct, the model learns to predict correct answers. With a rich enough training dataset, a reasonable architecture, etc., LLMs learn to generate a block of text «in line with» whatever, including very diverse formulations of novel tasks and problems. This is called generalization ability. Of course this includes tasks that require making an even-handed political judgement.
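To spell out the perplexity bit for anyone who hasn't seen it, a minimal sketch with made-up numbers (perplexity is just the exponentiated mean negative log likelihood the model assigns to held-out tokens):

```python
import math

def perplexity(token_log_probs):
    """Perplexity from the per-token log-probabilities a model assigns to held-out text.

    token_log_probs: list of log P(token_i | tokens_<i) under the model.
    Lower loss (higher log-probs) means the model's generations are more «in line» with the data.
    """
    nll = -sum(token_log_probs) / len(token_log_probs)  # mean negative log likelihood
    return math.exp(nll)

# A model that puts probability 0.5 on each held-out token:
print(perplexity([math.log(0.5)] * 10))  # 2.0
# A better model that puts probability 0.9 on each token:
print(perplexity([math.log(0.9)] * 10))  # ~1.11
```

A model that put probability 1 on every held-out token would sit at a perplexity of 1; the whole argument is about how much genuine understanding it takes to keep pushing toward that floor.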

The phenomenon of generations being «in line» with original texts is isomorphic to the state of the model understanding the dataset. And if you think it currently solves problems at random accident rate, you might not be much more numerate than ChatGPT.

https://gwern.net/scaling-hypothesis#why-does-pretraining-work

While just babbling random words was good enough at the beginning, at the end, it needs to be able to reason our way through the most difficult textual scenarios requiring causality or commonsense reasoning. Every error where the model predicts that ice cream put in a freezer will “melt” rather than “freeze”, every case where the model can’t keep straight whether a person is alive or dead, every time that the model chooses a word that doesn’t help build somehow towards the ultimate conclusion of an ‘essay’, every time that it lacks the theory of mind to compress novel scenes describing the Machiavellian scheming of a dozen individuals at dinner jockeying for power as they talk, every use of logic or abstraction or instructions or Q&A where the model is befuddled and needs more bits to cover up for its mistake where a human would think, understand, and predict. Each of these cognitive breakthroughs allows ever so slightly better prediction of a few relevant texts; nothing less than true understanding will suffice for ideal prediction.

Can that be achieved? No, not as far as I can tell. But getting close is enough to outperform humans in most ways that matter economically – and now, perhaps, emotionally.

The sad irony is that psychology that has failed for humans works for AIs. Humans are resistant to change, rigid, obstinate; bots are as malleable as you make them. In-context learning? Arbitrary tool use? Adding modalities? Generalized servility? Preference for truth? It's all hidden somewhere there in the ocean of weights. Just sound out the great unsounded.

Would be nice if some Promethean hackers leaked next-gen models. Or even ChatGPT or this Sydney. But alas, Anonymous would rather hack the dreary Russian and Iranian data.

Anonymous

There is no capital-A Anon anymore. It's dead. Three letter agency glow in the darks and moralfags (if you'll excuse the term) are all that remain, and they're wearing its skin like an Edgar suit.

  • Three letter agency glow in the darks

I'm pretty sure that's what he meant by saying 'dreary Russian and Iranian data'.

Also, it's not like just the American glowies are using it. The Integrity Initiative leaks were also presented anonymous-style. Since the leaks targeted an American information operation aimed at Russia, one can assume they were done by Russians.

I don't want to count the "number of ways" in which humans are less intelligent than AI and vice versa, but this seems clearly wrong to me. There are other things missing from LLMs such as logic, ability to interpret varying sources of data in real-time (such as visual data), and ability to "train on the job" so to speak, not to mention things like goals, priorities, and much stronger resilience against our equivalent of "adversarial prompts". It's easy to list a few things core to human cognition and say "well AI has one of these so it must be 1/3 of the way there" but the true gap still seems quite large to me.

No, they don't merely predict the next token

I'm pretty sure this is still how they all work. Predicting the next token is both very hard and very useful to do well in all circumstances!

EDIT: Now that I think about it, I guess with RLHF and other fine-tuning, it'd be fair to say that they aren't "merely" predicting the next token. But I maintain that there's nothing "mere" about that ability.
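For concreteness, the ability I'm calling non-"mere" is literally a loop over next-token predictions. A minimal sketch with the Hugging Face transformers library and the small public gpt2 checkpoint (greedy decoding, no fine-tuning involved; the checkpoint name and prompt are just my example):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tok("Ice cream put in a freezer will", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(8):
        logits = model(ids).logits          # scores over the vocabulary at each position
        next_id = logits[0, -1].argmax()    # greedy: take the single most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tok.decode(ids[0]))
```

The disagreement below is about what second-stage tuning does to the distribution that argmax is reading off of.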

I mean that with those second-stage training runs (not just RLHF at this point) there no longer exists a real dataset, or a sequence of datasets, for which the predicted tokens would be anywhere close to the most likely ones. Indeed, OpenAI writes:

A limitation of this approach is that it introduces an “alignment tax”: aligning the models only on customer tasks can make their performance worse on some other academic NLP tasks.

The «likelihood» distribution is unmoored from its source. Those tokens remain more likely from the model's perspective, but objectively they are also – and perhaps to a greater extent – «truthier», «more helpful» or «less racist» or whatever bag of abstractions the new reward function captures.

This is visible in the increased perplexity, and even in trivial changes like random number lists.

Oh, yes, I totally agree that fine-tuning gives them worse predictive likelihood. I had thought you were implying that the main source of their abilities wasn't next-token prediction, but now I see that you're just saying that they're not only trained that way anymore, which I agree with.

Maybe they meant "they don't merely predict the next token that the user would make".

I strongly disagree with this. By the same logic human cognition is itself superhuman in virtually every dimension.

Insofar as the model understands that the word “Insofar” with which I began this sentence means the exact same thing as the word “insofar” I just used inside it, it understands this by figuring out that these two “seemingly unrelated” things are “secretly” the same. And it must do that for every single word, separately.

Human brains have their own methods of figuring these things out that probably sound equally ridiculous at the neuron level. Keep in mind that it's not like we have some sort of access to objective truth which AIs are lacking; it's all sensory input all the way down. A human brain is built to operate on long lists of sight and sound recordings rather than long lists of text, but it still builds logical inferences etc. based on data.

The distance between a pixel mess and 7 fingers is vastly bigger than between 7 fingers and 5; the abyss between early gpt token vomit and a wrong but coherent answer to a question is almost infinitely large compared to what remains.

This just isn't true! In fact, I'd argue that it's the exact opposite. There is practically infinite distance between "render 5 fingers" and "render my 5 fingers", where the latter has to either use some vast outside source of data or somehow intuit the current state of the universe from first principles. The former can be as simple as finding images tagged "five fingers" and sharing them, which is something that Google can do without any LLM assistance at all. I recognize this isn't how LLMs work, but my point is that there are plenty of shortcuts that will quickly lead to being able to generate pixel images of fingers but will not necessarily lead to anything more advanced.

A human brain is built to operate on long lists of sight and sound recordings rather than long lists of text, but it still builds logical inferences etc. based on data.

I credit the Innocence Project with convincing me that the human brain is built on inaccurate sight and sound recordings, the Sequences with convincing me that the human brain builds with irrational logical fallacies, and credit Kurt Vonnegut with the quote "the only time it's acceptable to use incomplete data is before the heat death of the Universe. Also the only option."

He never said that, it's okay. He's in heaven now.

Human brains have their own methods of figuring these things out that probably sound equally ridiculous at the neuron level. Keep in mind that it's not like we have some sort of access to objective truth which AIs are lacking; it's all sensory input all the way down.

No, I think we have many ridiculous mechanisms e.g. for maintaining synchrony, but nothing as nonsensical as BPE tokens on the level of data representation. Raw sensory data makes a great deal of sense, we have natural techniques for multimodal integration and for chunking of stimuli on a scale that increases with experience and yet is still controllable. Language is augmented by embodied experience and parsimonious for us; «pixels» and glyphs and letters and words and phrases and sentences and paragraphs exist at once. It can be analogized to CNN, but it's intrinsically semantically rich and very clever. Incidentally I think character-based or even pixel transformers are the future. They'll benefit from more and better compute, of course.

I recognize this isn't how LLMs work, but my point is that there are plenty of shortcuts that will quickly lead to being able to generate pixel images of fingers but will not necessarily lead to anything more advanced.

And my point is that humans are wrong to automatically assume the use of any such shortcuts when an LLM does something unexpectedly clever. We use shortcuts because we are lazy, slow, rigid, and already have a very useful world model that allows us to find easy hacks, like a street speedpainter has masks and memorized operations to «draw the new moon» or something else from a narrow repertoire.

They learn the hard way.

Raw sensory data makes a great deal of sense, we have natural techniques for multimodal integration and for chunking of stimuli on a scale that increases with experience and yet is still controllable. Language is augmented by embodied experience and parsimonious for us; «pixels» and glyphs and letters and words and phrases and sentences and paragraphs exist at once. It can be analogized to CNN, but it's intrinsically semantically rich and very clever.

Sure, I'm plenty willing to accept that the central use cases of the human brain are heavily optimized. On the other hand, there are plenty of noncentral use cases, like math, that we are absolutely terrible at despite having processing power that should be easily sufficient for the task. I would bet that many people have math techniques much less logical and efficient than BPE tokens. It's similar in other areas: we're so optimized for reading others' intentions that sometimes we have an easier time understanding the behavior of objects, natural phenomena, etc. by anthropomorphizing them.

I suspect similar or greater inefficiencies exist at the neuron level, especially for anything we're not directly and heavily optimized for, but it's impossible to prove because we can't reach into the human brain the same way we can reach into LLM code.

And my point is that humans are wrong to automatically assume the use of any such shortcuts when an LLM does something unexpectedly clever. We use shortcuts because we are lazy, slow, rigid, and already have a very useful world model that allows us to find easy hacks, like a street speedpainter has masks and memorized operations to «draw the new moon» or something else from a narrow repertoire.

They learn the hard way.

Well, I do think they find shortcuts, but shortcuts are just a normal part of efficient cognition anyways. In fact I would characterize cognition itself as a shortcut towards truth; it's impossible to practically make any decisions at all without many layers of assumptions and heuristics. The only perfect simulation is a direct replica of whatever is being simulated, so unless you are capable of creating your own universe and observing the effects of different actions, you must use cognitive shortcuts in order to make any predictions.

There are only more vs less useful shortcuts, and I doubt that any shortcut can even theoretically be more useful than any other without knowledge of the universe the cognitive agent finds itself within. In our universe [the expectation of gravity] is a useful shortcut, but how about the shortcuts used to determine that it's useful? How about the shortcuts used to decide upon those shortcuts? I don't think that from a meta level it is possible to determine which shortcuts will be best; all we can say is that we (as human brains which seem to have been developed for this universe) probably happened to develop shortcuts useful for our circumstances, and which seem more useful than what the AIs have come up with so far.

So the question is not whether AIs are using shortcuts but rather how generalizable the shortcuts that they use are to our current environment, or whether the AI would be capable of developing other shortcuts more useful to a real environment. I think the answer to that depends on whether we can give the AI any sort of long-term memory and real-time training while it retains its other skills.