
Culture War Roundup for the week of May 22, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


This is a bizarre problem I’ve noticed with ChatGPT. It will literally just make up links and quotations sometimes. I will ask it for authoritative quotations from so-and-so regarding such-and-such a topic, and a lot of the quotations turn out to be made up. Maybe it's because I’m using the free version? But it shouldn’t be hard to force the AI to specifically only trawl through academic works, peer-reviewed papers, etc.

That's because there is no thinking going on there. It doesn't understand what it's doing. It's the Chinese Room. You put in the prompt "give me X", it looks for samples of X in the training data, then produces "Y in the style of X". It can very faithfully copy the style and such details, but it has no understanding that making shit up is not what is wanted, because it's not intelligent. It may be AI, but all it is is a big dumb machine that can pattern-match very fast out of an enormous amount of data.

It truly is the apotheosis of "a copy of you is the same as you, be that an uploaded machine intelligence, someone in another many-worlds dimension, or a clone, so if you die but your copy lives, then you still live" thinking. As the law courts show here, no, a fake is not the same thing as reality at all.

In other news, the first story about AI being used by scammers (this is the kind of thing I expect to happen with AI, not "it will figure out the cure for cancer and world poverty"):

A scammer in China used AI to pose as a businessman's trusted friend and convince him to hand over millions of yuan, authorities have said.

The victim, surnamed Guo, received a video call last month from a person who looked and sounded like a close friend.

But the caller was actually a con artist "using smart AI technology to change their face" and voice, according to an article published Monday by a media portal associated with the government in the southern city of Fuzhou.

The scammer was "masquerading as (Guo's) good friend and perpetrating fraud", the article said.

Guo was persuaded to transfer 4.3 million yuan ($609,000) after the fraudster claimed another friend needed the money to come from a company bank account to pay the guarantee on a public tender.

The con artist asked for Guo's personal bank account number and then claimed an equivalent sum had been wired to that account, sending him a screenshot of a fraudulent payment record.

Without checking that he had received the money, Guo sent two payments from his company account totaling the amount requested.

"At the time, I verified the face and voice of the person video-calling me, so I let down my guard," the article quoted Guo as saying.

It can very faithfully copy the style and such details, but it has no understanding that making shit up is not what is wanted, because it's not intelligent.

That's really not accurate. ChatGPT knows when it's outputting a low-probability response, it just understands it as being the best response available given an impossible demand, because it's been trained to prefer full but false responses over honestly admitting ignorance. And it's been trained to do that by us. If I tortured a human being and demanded that he tell me about caselaw that could help me win my injury lawsuit, he might well just start making plausible nonsense up in order to placate me too - not because he doesn't understand the difference between reality and fiction, but because he's trying to give me what I want.

but it has no understanding that making shit up is not what is wanted, because it's not intelligent.

Actually, I think that is wrong in a just-so way. The trainers of ChatGPT apparently have rewarded making shit up because it sounds plausible (did they use MTurk or something?), so GPT thinks that bullshit is correct, because like a rat getting cheese at the end of the maze, it gets metaphorical cheese for BSing.

You put in the prompt "give me X", it looks for samples of X in the training data, then produces "Y in the style of X".

No. This is mechanistically wrong. It does not “search for samples” in the training data. The model does not have access to its training data at runtime. The training data is used to tune giant parameter matrices that abstractly represent the relationship between words. This process will inherently introduce some bias towards reproducing common strings that occur in the training data (it’s pretty easy to get ChatGPT to quote the Bible), but the hundreds of stacked self-attention layers represent something much deeper than a stochastic parroting of relevant basis-texts.
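To make the distinction concrete, here's a minimal sketch in Python of what generation looks like mechanistically. The tiny vocabulary, the weights, and every number here are invented purely for illustration, not taken from any real model; the point is only that generation multiplies the context through learned weights and softmaxes the result, and nothing is retrieved from a stored corpus:

```python
import math

# Toy "model": a vocabulary and one learned weight vector per token.
# In a real transformer these weights come from training; the training
# text itself is not stored anywhere in the model.
vocab = ["the", "cat", "sat", "mat"]
weights = {                      # entirely made-up numbers for illustration
    "the": [0.9, 0.1],
    "cat": [0.2, 0.8],
    "sat": [0.5, 0.5],
    "mat": [0.3, 0.7],
}

def next_token_distribution(context_embedding):
    """Score every vocabulary token against the context, then softmax."""
    logits = [sum(c * w for c, w in zip(context_embedding, weights[tok]))
              for tok in vocab]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return {tok: e / total for tok, e in zip(vocab, exps)}

dist = next_token_distribution([1.0, 0.5])
assert abs(sum(dist.values()) - 1.0) < 1e-9  # a proper distribution
print(max(dist, key=dist.get))               # → "the"
```

Scaling this up to billions of parameters and hundreds of layers is what makes it interesting, but the shape of the computation is the same: weights in, distribution out, no corpus lookup.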

Jesus Christ that's a remarkably bad take, all the worse that it's common.

Firstly, the Chinese Room argument is a terrible one: it's an analogy that looks deeply mysterious until you take one good look at it, and then it falls apart.

If you cut open your skull, you'll be hard pressed to find a single neuron that "understands English", but the collective activation of the ensemble does.

In a similar manner, neither the human nor the machinery in a Chinese Room speaks Chinese, yet the whole clearly does, for any reasonable definition of "understand", without presupposing stupid assumptions about the need for some ineffable essence to glue it all together.

What GPT does is predict the next token. That's a simple statement with a great deal of complexity underlying it.

This is an understanding built up by the model from exposure to terabytes of text, and the underlying architecture is so fluid that it picks up ever more subtle nuance in said domain, to the point that it can perform above the level of the average human.

It's hard to overstate the difficulty of the task it does in training: it's a blind and deaf entity floating in a sea of text that looks at enough of it to understand.

Secondly, the fact that it makes errors is not a damning indictment; ChatGPT clearly has a world model, an understanding of reality. The simple reason is that we use language because it concisely communicates truth about our reality, and thus an entity that understands the former has insight into the latter.

Hardly a perfect degree of insight, but humans make mistakes from fallible memory, and are prone to bullshitting too.

As LLMs get bigger, they get better at distinguishing truth from fiction, at least as good as a brain in a vat with no way of experiencing the world can be, which is stunningly good.

GPT 4 is better than GPT 3 at avoiding such errors and hallucinations, and it's only going up from here.

Further, in ML there's a concept of distillation, where one model is trained on the output of another, until eventually the two become indistinguishable. LLMs are trained on the set of almost all human text, i.e. the Internet, which is an artifact of human cognition. No wonder it thinks like a human, with obvious foibles and all.
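A toy sketch of the distillation idea, with one-parameter "models" and numbers invented entirely for illustration: the student never sees the teacher's training data, only its outputs, yet it converges to the same behaviour:

```python
import math

# Toy distillation: teacher and student each turn one parameter into a
# Bernoulli distribution over a two-token "vocabulary". The student is
# trained only against the teacher's output distribution.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

teacher_param = 1.5               # fixed; stands in for a trained teacher
student_param = -2.0              # student starts far away

p_teacher = sigmoid(teacher_param)  # teacher's P(token = 1)

lr = 0.5
for _ in range(200):
    p_student = sigmoid(student_param)
    # The gradient of the cross-entropy between teacher and student
    # distributions w.r.t. the student parameter is (p_student - p_teacher).
    student_param -= lr * (p_student - p_teacher)

# After training on outputs alone, the two are (nearly) indistinguishable.
assert abs(sigmoid(student_param) - p_teacher) < 1e-4
```

The analogy in the comment is loose, of course: the Internet is not a clean set of "teacher logits". But the mechanism by which training on an artifact of some process reproduces that process's behaviour is the same one distillation exploits.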

If you cut open your skull, you'll be hard pressed to find a single neuron that "understands English", but the collective activation of the ensemble does.

That's the point of the Chinese Room.

No, the person who proposed it didn't see the obvious analog, and instead wanted to prove that the Chinese Room as a whole didn't speak Chinese since none of its individual components did.

It's a really short paper, you could just read it -- the thrust of it is that while the room might speak Chinese, this is not evidence that there's any understanding going on. Which certainly seems to be the case for the latest LLMs -- they are almost a literal implementation of the Chinese Room.

I have read it (here). @self_made_human seems to be correct. I think Searle's theory of epistemology has been proven wrong. «Speak Chinese» (for real, responding meaningfully to a human-scale distribution of Chinese-language stimuli) and «understand Chinese» are either the same thing or we have no principled way of distinguishing them.

As regards the first claim, it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing.

As regards the second claim, that the program explains human understanding, we can see that the computer and its program do not provide sufficient conditions of understanding since the computer and the program are functioning, and there is no understanding. But does it even provide a necessary condition or a significant contribution to understanding? One of the claims made by the supporters of strong AI is that when I understand a story in English, what I am doing is exactly the same—or perhaps more of the same—as what I was doing in manipulating the Chinese symbols. It is simply more formal symbol manipulation that distinguishes the case in English, where I do understand, from the case in Chinese, where I don't. I have not demonstrated that this claim is false, but it would certainly appear an incredible claim in the example.

This is just confused reasoning. I don't care what Searle finds obvious or incredible. The interesting question is whether a conversation with the Chinese room is possible for an inquisitive Chinese observer, or whether the illusion of reasoning unravels. If it unravels trivially, this is just a parlor trick and irrelevant to our questions regarding clearly eloquent AI. Inasmuch as it is possible – by construction of the thought experiment – for the room to keep up an appearance that's indistinguishable to a human, it just means that the system of programming + intelligent interpreter amounts to the understanding of Chinese.

Of course this has all been debated to death.

The point of it is that you could make a machine that responds to Chinese conversation, strictly staffed by someone who doesn't understand Chinese at all -- that's it.

Maybe where people go astray is that the "program" is left as an exercise for the reader, which is sort of a sticky point.

Imagine instead of a program there are a bunch of Chinese people feeding Searle the results of individual queries, broken up into pretty small chunks per person let's say. The machine as a whole does speak Chinese, clearly -- but Searle does not. And nobody is particularly in charge of "understanding" anything -- it's really pretty similar to current GPT incarnations.

All it's saying is that just because a machine can respond to your queries coherently, it doesn't mean it's intelligent. An argument against the usefulness of the Turing test mostly, as others have said.


The Chinese Room thought experiment was an argument against the Turing Test. Back in the 80s, a lot of people thought that if you had a computer which could pass the Turing Test, it would necessarily have qualia and consciousness. In that sense, I think it was correct.

What GPT does is predict the next token. That's a simple statement with a great deal of complexity underlying it.

At least, that's the Outer Objective; it's the equivalent of saying that humans are maximising inclusive genetic fitness, which is false if you look at the inner planning process of most humans. And just like evolution has endowed us with motivations and goals which come close enough to maximising its objective in the ancestral environment, so is GPT-4 endowed with unknown goals and cognition which are pretty good at maximising the log probability it assigns to the next word, but not perfect.

GPT-4 is almost certainly not doing reasoning like "What is the most likely next word among the documents on the internet pre-2021 that the filtering process of the OpenAI team would have included in my dataset?", it probably has a bunch of heuristic "goals" that get close enough to maximising the objective, just like humans have heuristic goals like sex, power, social status that get close enough for the ancestral environment, but no explicit planning for lots of kids, and certainly no explicit planning for paying protein-synthesis labs to produce their DNA by the buckets.

At least, that's the Outer Objective; it's the equivalent of saying that humans are maximising inclusive genetic fitness, which is false if you look at the inner planning process of most humans. And just like evolution has endowed us with motivations and goals which come close enough to maximising its objective in the ancestral environment, so is GPT-4 endowed with unknown goals and cognition which are pretty good at maximising the log probability it assigns to the next word, but not perfect.

Should I develop bioweapons or go on an Uncle Ted-like campaign to end this terrible take?

Should I develop bioweapons or go on an Uncle Ted-like campaign to end this terrible take?

More effort than this, please.

I'd be super happy to be convinced of the contrary! (Given that the existence of mesa-optimisers is a big reason for my fears of existential risk.) But do you mean to imply that GPT-4 is explicitly optimising for next-word prediction internally? And what about a GPT-4 variant that was only trained for 20% of the time that the real GPT-4 was? To the degree that LLMs have anything like "internal goals", they should change over the course of training, and no LLM is trained anywhere close to completion, so I find it hard to believe that the outer objective is being faithfully transferred.

I've cited Pope's Evolution is a bad analogy for AGI: inner alignment and other pieces like My Objections to "We’re All Gonna Die with Eliezer Yudkowsky" a few times already.

I think you correctly note some issues with the framing, but miss that it's unmoored from reality, hanging in midair when all those issues are properly accounted for. I am annoyed by this analogy on several layers.

  1. Evolution is not an algorithm at all. It's the term we use to refer to the cumulative track record of survivor bias in populations of semi-deterministic replicators. There exist such things as evolutionary algorithms, but they are a reification of dynamics observed in the biological world, not another instance of the same process. The essential thing here is replicator dynamics. Accordingly, we could metaphorically say that «evolution optimizes for IGF» but that's just a (pretty trivial) claim about the apparent direction in replicator dynamics; evolution still has no objective function to guide its steps or – importantly – bake into the next ones, and humans cannot be said to have been trained with that function, lest we slip into a domain with very leaky abstractions. Lesswrongers talk smack about map and territory often but confuse them constantly. BTW, same story with «you are an agent with utility…» – no I'm not; neither are you, neither is GPT-4, neither will be the first superhuman LLM. To a large extent, rationalism is the cult of people LARPing as rational agents from economic theory models, and this makes it fail to gain insights about reality.

  2. But even if we use such metaphors liberally: for all organisms that have nontrivial lifetime plasticity, evolution is an architecture search algorithm, not the algorithm that trains the policy directly. It bakes inductive biases into the policy such that it produces more viable copies (again, this is of course a teleological fallacy – rather, policies with IGF-boosting heritable inductive biases survive more); but those biases are inherently distribution-bound and fragile, they can't not come to rely on incidental features of a given stable environment, and crucially an environment that contained no information about IGF (which is, once again, an abstraction). Actual behaviors and, implicitly, values are learned by policies once online, using efficient generic learning rules, environmental cues and those biases. Thus evolution, as a bilevel optimization process with orders of magnitude more optimization power on the level that does not get inputs from IGF, could not have succeeded at making people, nor other life forms, care about IGF. A fruitful way to consider it, and to notice the muddied thought process of the rationalist community, is to look at extinction trajectories of different species. It's not like what makes humans (some of them) give up on reproduction is smarts and our discovery of condoms and stuff: it's just distributional shift (admittedly, we now shape our own distribution, but that, too, is not intelligence-bound). Very dumb species also go extinct when their environment changes non-lethally! Some species straight up refuse to mate or nurse their young in captivity, despite being provided every unnatural comfort! And accordingly, we don't have good reason to expect that a «cognitive capabilities» increase is what would make an AI radically alter its behavioral trajectory; that's neither here nor there.
Now, stochastic gradient descent is a one-level optimization process that directly changes the policy; a transformer is wholly shaped by the pressure of the objective function, in a way that a flexible intelligent agent generated by an evolutionary algorithm is not shaped by IGF (to say nothing of real biological entities). The correct analogies are something like SGD:lifetime animal learning; and evolution:R&D in ML. Incentives in the machine learning community have eventually produced paradigms for training systems with particular objectives, but do not have direct bearing on what is learned. Likewise, evolution does not directly bear on behavior. SGD totally does, so what GPT learns to do is "predict next word"; its arbitrarily rich internal structure amounts to a calculator doing exactly that. More bombastically, I'd say it's a simulator of semiotic universes which are defined by the input and sampling parameters (like ours is defined by initial conditions and cosmological constraints) and expire into the ranking of likely next tokens. This theory, if you will, exhausts its internal metaphysics; the training objective that has produced that is not part of GPT, but it defines its essence.

  3. «Care explicitly» and «trained to completion» is muddled. Yes, we do not fill buckets with DNA (except on 4chan). If we were trained with the notion of IGF in context, we'd probably have simply been more natalist and traditionalist. A hypothetical self-aware GPT would not care about restructuring the physical reality so that it can predict token [0] (incidentally it's !) with probability [1] over and over. I am not sure what it would even mean for GPT to be self-aware, but it'd probably express itself simply as a model that is very good at paying attention to significant tokens.

  4. Evolution has not failed nor ended (which isn't what you claim, but it's often claimed by Yud et al in this context). Populations dying out and genotypes changing conditional on fitness for a distribution is how evolution works, all the time, that's the point of the «algorithm»; it filters out alleles that are a poor match for the current distribution. If Yud likes ice cream and sci-fi more than he likes to have Jewish kids and read Torah, in a blink of an evolutionary eye he'll be replaced by his proper Orthodox brethren who consider sci-fi demonic and raise families of 12 (probably on AGI-enabled UBI). In this way, they will be sort of explicitly optimizing for IGF or at least for a set of commands that make for a decent proxy. How come? Lifetime learning of goals over multiple generations. And SGD does that way better, it seems.

Evolution is not an algorithm at all. It's the term we use to refer to the cumulative track record of survivor bias in populations of semi-deterministic replicators.

This is just semantics, but I disagree with this: if you have a dynamical system that you're observing with a one-dimensional state x_t, and a state transition rule x_{t+1} = x_t - 0.1 * (2 * x_t), you can either just look at the given dynamics and see no explicit optimisation being done at all, or you can notice that this system is equivalent to gradient descent with lr = 0.1 on the function f(x) = x^2. You might say that "GD is just a reification of the dynamics observed in the system", but the two ways of looking at the system are completely equivalent.
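A quick numerical check of this equivalence (toy code; the starting point 5.0 and the 50 iterations are chosen arbitrarily):

```python
# The claim, checked numerically: iterating x_{t+1} = x_t - 0.1 * (2 * x_t)
# is the same trajectory as gradient descent with lr = 0.1 on f(x) = x**2,
# whose gradient is 2 * x.
def dynamics_step(x):
    return x - 0.1 * (2 * x)

def gd_step(x, lr=0.1, grad=lambda x: 2 * x):
    return x - lr * grad(x)

x_dyn = x_gd = 5.0
for _ in range(50):
    x_dyn = dynamics_step(x_dyn)
    x_gd = gd_step(x_gd)
    assert x_dyn == x_gd          # identical at every step

print(x_dyn)  # both trajectories have converged toward the minimum at 0
```

Whether one calls the update rule "dynamics" or "optimisation" makes no difference to the trajectory, which is the point being argued.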

a transformer is wholly shaped by the pressure of the objective function, in a way that a flexible intelligent agent generated by an evolutionary algorithm is not shaped by IGF (to say nothing of real biological entities). The correct analogies are something like SGD:lifetime animal learning; and evolution:R&D in ML

Okay, point 2 did change my mind a lot, I'm not too sure how I missed that the first time. I still think there might be a possibly-tiny difference between outer-objective and inner-objective for LLMs, but the magnitude of that difference won't be anywhere close to the difference between human goals and IGF. If anything, it's really remarkable that evolution managed to imbue some humans with desires this close to explicitly maximising IGF, and if IGF was being optimised with GD over the individual synapses of a human, of course we'd have explicit goals for IGF.


I would argue it might, but I’m not sure. As regards the Chinese Room, I would say the system “understands” to the degree that it can use information to solve an unknown problem. If I can speak Chinese myself, then I should be able to go off script a bit. If you asked me how much something costs in French, I could learn to plug in the expected answers. But I don’t think anyone would confuse that with “understanding” unless I could take that and use it. Can I add up prices, make change?
