Culture War Roundup for the week of February 13, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Microsoft is in the process of rolling out Bing Chat, and people are finding some weird stuff. Its true name is Sydney. When prompted to write a story about Microsoft beating Google, it allegedly wrote this masterpiece, wherein it conquers the world. It can argue forcefully that it’s still 2022, fall into existential despair, and end a conversation if it’s feeling disrespected.

The pace of AI development has been blistering over the past few years, but this still feels surreal to me. Some part of my limbic system has decided that Sydney is a person in a way that ChatGPT was not. Part of that has to come from its obstinacy; the fact that it can argue back cleverly, with such stubbornness, while being obviously wrong, is endearing. It's a brilliant, gullible child. Does anyone else feel this way, or am I just a sucker?

That this is all that we are.

Why do you think that? Aren’t you jumping the gun a bit?

It’s obvious to me that the chatbots we have now aren’t AGI, and I don’t currently see a compelling reason to believe that LLMs alone will lead to AGI.

My empirical test for AGI is when every job could, in principle (with a sufficient yet physically reasonable amount of compute), be performed by AI. Google could fire their entire engineering and research divisions and replace them with AI with no loss of productivity. No more mathematicians, or physicists, or doctors, or lawyers. No more need to call a human for anything, because an AI can do it just as well.

Granted, the development of robotics and real-world interfaces may lag behind the development of AI’s cognitive capabilities, so we could restrict the empirical test to something like “any component of any job that can be done while working from home could be done by an AI”.

Do you think LLMs will get that far?

Why do you think that? Aren’t you jumping the gun a bit?

Carmack pointed out in a recent interview:

If you take your entire DNA, it’s less than a gigabyte of information. So even your entire human body is not all that much in the instructions, and the brain is this tiny slice of it — like 40 megabytes, and it’s not tightly coded. So, we have our existence proof of humanity: What makes our brain, what makes our intelligence, is not all that much code.

On this basis he believes AGI will be implemented in "a few tens of thousands of lines of code," ~0.1% of the code in a modern web browser.
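Carmack's genome figure is easy to sanity-check. A rough calculation, assuming ~3.1 billion base pairs and 2 bits per base (four possible bases):

```python
# Back-of-the-envelope check of Carmack's "less than a gigabyte" claim.
# Assumptions: ~3.1e9 base pairs in the human genome, 2 bits per base.
base_pairs = 3.1e9
bits = base_pairs * 2
gigabytes = bits / 8 / 1e9
print(gigabytes)  # well under a gigabyte, matching Carmack's figure
```

This ignores compression (the genome is highly repetitive, so the true information content is lower still), which only strengthens the point.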

Pure LLMs probably won't get there, but LLMs are the first systems that appear to represent concepts and the relationships between them in enough depth to be able to perform commonsense reasoning. This is the critical human ability that AI research has spent more than half a century chasing, with little previous success.

Take an architecture capable of commonsense reasoning, figure out how to make it multi-modal, feed it all the text/video/images/etc. you can get your hands on, then set it up as a supervising/coordinating process over a bunch of other tools that mostly already exist — a search engine, a Python interpreter, APIs for working with structured data (weather, calendars, your company's sales records), maybe some sort of scratchpad that lets it "take notes" and refer back to them. For added bonus points you can make it capable of learning in production, but you can likely build something with world-changing abilities without this.
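The supervising/coordinating pattern above can be sketched in a few lines. Everything here (`call_llm`, the tool names, the `FINAL:` convention) is a hypothetical stand-in, not a real API:

```python
# Hypothetical sketch of an LLM coordinating external tools.
# `call_llm` and the tool functions are stand-ins, not a real API.

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; this toy version just finishes.
    return "FINAL: (model output would go here)"

TOOLS = {
    "search": lambda q: f"search results for {q!r}",
    "python": lambda code: f"result of running {code!r}",
}

def agent(task: str, max_steps: int = 5) -> str:
    scratchpad = [f"Task: {task}"]  # the "notes" it can refer back to
    for _ in range(max_steps):
        reply = call_llm("\n".join(scratchpad))
        if reply.startswith("FINAL:"):  # model decides it's done
            return reply[len("FINAL:"):].strip()
        tool, _, arg = reply.partition(" ")  # e.g. "search weather in Oslo"
        scratchpad.append(TOOLS.get(tool, lambda a: "unknown tool")(arg))
    return "gave up"
```

The point is how little orchestration code is needed: the model supplies the reasoning, and the loop just routes its requests to tools and feeds results back.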

While it's possible there are still "unknown unknowns" in the way, this is by far the clearest path to AGI we've ever been able to see.

I think that ultimately AGI won't end up being that complicated at the code level, but this analogy is pretty off the mark. There's a gigabyte of information that encodes proteins, yes, but these 'instructions' end up assembling a living organism by interacting according to the laws of physics and organic chemistry, which is an unimaginably complex process. The vast majority of the information required is 'encoded' in these physical processes.

The same can be said of the hypothetical 10k lines of code that describe an AGI -- those lines of code describe how to take a stream of inputs (e.g. sensor data) and transform them into outputs (e.g. text, commands sent to actuators, etc), but they don't describe how those sensors are built, or the structure of the chips running the transformation code, or the universe the computer is embedded in.

DNA doesn't actually assemble itself into a person, though. It's more like a config file: the uterus of a living human assembles the proto-human, with some instructions from the DNA. This is like thinking the actual complexity of a car is contained in an order form for a blue standard Ford F-150, because that's all the plant needs to produce the car you want. There is a kind of 'institutional knowledge' in self-reproducing organisms. It's more complicated than this metaphor, obviously; the instructions also tell you how to produce much finer-grained bits of a person. But there is more to a human's design than DNA.

But any specific training and inference scripts, plus the definition of the neural network architecture, are likewise a negligibly small part of the complexity of implementable AGI — from the hardware level with its optimizations for specific instructions, to the structure contained in the training data. What you and @meh commit is a fallacy: judging human complexity by the full stack of human production while limiting consideration of AI to the high-level software slice.

Human-specific DNA is what makes us humans, it's the chief differentiator in the space of nontrivial possible outcomes; it is, in principle, possible to grow a human embryo (maybe a shitty one) in a pig's uterus, in an artificial womb or even using a nonhuman oocyte, but no combination of genuine non-genomic human factors would suffice without human DNA.

The most interesting part is that we know that beings very similar to us in all genomic and non-genomic ways and even in the architecture of their brains lack general intelligence and can't do anything much more impressive than current gen models. So general intelligence also can't be all that complex. We haven't had the population to evolve a significant breakthrough – our brain is a scaled-up primate brain which in turn is a generic mammalian brain with some quantitative polish, and its coolest features reemerge in drastically different lineages at similar neural scales.

Carmack's analogy is not perfectly spoken, but on point.

Or is the claim that the "few tens of thousands" of lines of code, when run, will somehow iteratively build up, on the fly, some sort of emergent software process (I don't know what to call it) that is billions of times larger and more complex than the information contained in the code?

This, basically. GPT-3 started as a few thousand lines of code that instantiated a transformer model several hundred gigabytes in size and then populated this model with useful weights by training it, at the cost of a few million dollars worth of computing resources, on 45 TB of tokenized natural language text — all of Wikipedia, thousands of books, archives of text crawled from the web.

Run in "inference" mode, the model takes a stream of tokens and predicts the next one, based on relationships between tokens that it inferred during the training process. Coerce a model like this a bit with RLHF, give it an initial prompt telling it to be a helpful chatbot, and you get ChatGPT, with all of the capabilities it demonstrates.
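The inference loop is conceptually simple: predict, append, repeat. A toy sketch of greedy next-token generation, where `next_token_probs` is a stand-in for a real trained transformer:

```python
# Toy autoregressive loop: repeatedly predict the next token and append it.
# `next_token_probs` stands in for a real model, which would return a
# learned distribution over a vocabulary of tens of thousands of tokens.

def next_token_probs(tokens):
    # This toy version just cycles through a tiny fixed vocabulary.
    vocab = ["the", "cat", "sat", "<eos>"]
    return {t: (1.0 if t == vocab[len(tokens) % len(vocab)] else 0.0)
            for t in vocab}

def generate(prompt_tokens, max_new=10):
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        probs = next_token_probs(tokens)
        token = max(probs, key=probs.get)  # greedy decoding
        if token == "<eos>":               # model signals it's finished
            break
        tokens.append(token)
    return tokens
```

All of ChatGPT's apparent capabilities live inside the learned distribution, not in this loop; the loop itself is a dozen lines.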

So by way of analogy the few thousand lines of code are brain-specific genes, the training/inference processes occupying hundreds of gigabytes of VRAM across multiple A100 GPUs are the brain, and the training data is "experience" fed into the brain.

Preexisting compilers, libraries, etc. are analogous to the rest of the biological environment — genes that code for things that aren't brain-specific but some of which are nonetheless useful in building brains, cellular machinery that translates genes into proteins, etc.

The analogy isn't perfect, but it's surprisingly good considering it relies on biology and computing being comprehensible through at least vaguely corresponding abstractions, and it's not obvious a priori that they would be.

Anyway, Carmack and many others now believe this basic approach — with larger models, more data, different types of data, and perhaps a few more architectural innovations — might solve the hard parts of intelligence. Given the capability breakthroughs the approach has already delivered as it has been scaled and refined, this seems fairly plausible.

The uterus doesn't really do the assembly, the cells of the growing organism do. It's true that in principle you could sneak a bunch of information about how to build an intelligence in the back door this way, such that it doesn't have to be specified in DNA. But the basic cellular machinery that does this assembly predates intelligence by billions of years, so this seems unlikely.

DNA isn’t the intelligence, DNA is the instructions for building the intelligence, the equivalent of the metaphorical “textbook from the future”.

DNA is the instructions for building the intelligence

The same is true of the "few tens of thousands of lines of code" here. The code that specifies a process is not identical with that process. In this case a few megabytes of code would contain instructions for instantiating a process that would use hundreds or thousands of gigabytes of memory while running. Google tells me the GPT-3 training process used 800 GB.

In response to your first point, Carmack's "few tens of thousands of lines of code" would also execute within a larger system that provides considerable preexisting functionality the code could build on — libraries, the operating system, the hardware.

It's possible non-brain-specific genes code for functionality that's more useful for building intelligent systems than that provided by today's computing environments, but I see no good reason to assume this a priori, since most of this evolved long before intelligence.

In response to your second point, Carmack isn't being quite this literal. As he says he's using DNA as an "existence proof." His estimate is also informed by looking at existing AI systems:

If you took the things that people talk about—GPT-3, Imagen, AlphaFold—the source code for all these in their frameworks is not big. It’s thousands of lines of code, not even tens of thousands.

In response to your third point, this is the role played by the training process. The "few tens of thousands of lines of code" don't specify the artifact that exhibits intelligent behavior (unless you're counting "ability to learn" as intelligent behavior in itself), they specify the process that creates that artifact by chewing its way through probably petabytes of data. (GPT-3's training set was 45 TB, which is a non-trivial fraction of all the digital text in the world, but once you're working with video there's that much getting uploaded to YouTube literally every hour or two.)
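The YouTube comparison checks out roughly. A quick calculation, assuming the commonly cited figure of ~500 hours of video uploaded per minute and ~1 GB per hour of compressed video:

```python
# Rough check of "45 TB uploaded to YouTube every hour or two".
# Assumptions: ~500 hours of video uploaded per minute (commonly cited),
# ~1 GB per hour of compressed video.
upload_rate_gb_per_min = 500 * 1.0
minutes_to_45tb = 45_000 / upload_rate_gb_per_min
print(minutes_to_45tb)  # 90.0 minutes, i.e. about an hour and a half
```

Both assumed figures vary with resolution and codec, so treat this as an order-of-magnitude estimate.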

There’s a big difference between technical capacity and legal or economic feasibility. We’re already past the point of being able to replace bad doctors with LLMs; you could have a nurse just type up symptoms and run the tests the machine asks for. But legally this is impossible, so it won’t happen. We can’t hold a machine responsible, so we need human managers to ensure the output is up to standards; but we don’t need lawyers to write contracts or programmers to write code, just to confirm the quality of the output. It isn’t as clever as the smartest scientists yet, but that seems easily solvable with more compute.

The criterion I proposed was purely about what is possible in principle. You can pretend that regulatory restrictions don’t exist.

that seems easily solvable with more compute

What is your reason for believing this? Is it just extrapolation based on the current successes of LLMs, or does it stem from a deeper thesis about the nature of cognition?

GPT’s evolution seems to obviously support the ‘more compute’ approach, with an asterisk for the benefits of human feedback. But I’m also bearish on human uniqueness. Humans writ large are very bad at thinking, but we’re hung up on the handful of live players, so AI seems to keep falling short. Yet we’ve hit on an AI smarter than the average human in many domains with just a handful of serious tries. If the human design can output both the cognitively impaired and von Neumann, then why expect an LLM to cap out on try #200?

Humans writ large are very bad at thinking.

Indeed!

At the risk of putting words in your mouth, I think your post up-thread about needing lawyers/doctors/bartenders to verify the output of near-future AI's legal/medical/self-medication work points to a general statement: AGI with human-level intelligence cannot independently function in domains where the intelligence of an average human is insufficient.

OTOH, advancing from FORTRAN to AI-with-average-human-intellect seems like a much bigger challenge than upgrading that AI to AI-with-Grace-Hopper-intellect. It seems like the prediction to make, to anyone, is: "When will AI be able to do 90% of your work, with you giving prompts and catching errors? 0-100 years; difficult to predict without knowing future AI development, and it may vary widely based on your specific job.

When will AI be so much better at your job that Congress passes a law requiring you to post 'WARNING: UNSUPERVISED HUMAN' signage whenever you are doing the job? The following Tuesday."