I'm not claiming all children are going to be murdered in less than 20 years. I also don't think I have tons of economic power right now, and I agree that I already depend on complex machines I can't understand or control.
I'm saying we're probably giving up what little control we had over the future of human civilization. Maybe a good analogy is: we're inviting unlimited immigration from a country with an unlimited population, whose people are willing to work 24 hours a day for cents per hour and are far more capable, loyal, and dependable than almost any human. Once we start, we'll never be able to stop.
This sounds totally reasonable. We certainly could be in a world where some threats are too hard to distinguish from fake ones for us to respond to them without screwing ourselves in other ways.
I guess I won't try to further convince you here, other than to say "every time we were told something would surely kill everybody it didn't" is certainly a valid reason to discount doomsday prophets in general, but not a good reason to dismiss the possibility of doomsday.
"If our reality is truly so fragile that something as banal as an LLM can tear it asunder, then does it really deserve our preservation in the first place?"
How about: "If a baby is so fragile that it can't take a punch, does it really deserve our preservation in the first place?"
Sorry to speculate about your mental state, but I suggest you try practicing stopping between "This is almost inevitable" and "Therefore it's a good thing".
In any case, I do think there are good alternatives besides "Be Amish forever" and "Let AI rip". Specifically: gradually expanding human capabilities. I realize that doing this will require banning pure accelerationism, which will probably end up looking like enforcing Potemkin fake traditions and arbitrary limitations. The stupidest version of this is a Jupiter brain celebrating Kwanzaa. Maybe a smarter version looks like spinning up ancestor simulations and trying to give them input into the problems of the day, or something. I don't know.
These bans will also require a permanent "alignment" module or singleton government in order to avoid these affectations being competed away. Basically, if we want to have any impact on the far future, in which agents can rewrite themselves from scratch to be more competitive, I think we have to avoid a race to the bottom.
There will be no difference, for our meat brains, between having an AI family and children and grandchildren and having the real thing.
Just because we can't necessarily tell the difference, doesn't mean we can't care or try to avoid it. I for one would choose not to live out (most) of my days in an experience machine.
My impression is that people like Ngo are quietly pushing inside of OpenAI for slowdowns + coordination. The letter was basically a party-crashing move by those peripheral to the quiet, on-their-own-terms negotiations and planning by the professional governance teams and others who have been thinking about this for a long time.
I think most of the serious AI safety people didn't sign it because they have their own much more detailed plans, and also because they want to signal that they're not going to "go to the press" easily, to help them build relationships with the leadership of the big companies.
I still don't understand why you think the capabilities of current LLMs are an important factor in how scared we should be about AGI in the medium term. I also don't understand what threshold of capabilities you want to use where we could wait until we see it to coordinate a slowdown. The better these things get, the more demand there will be for their further development.
Thanks, that's a reasonable proposal and rationale. The thing is, it's not clear to me in what sense OpenAI, as an entity, effectively cares about X-risk. I say this knowing many OpenAIers personally, and knowing that they certainly care about X-risk as individuals. But what realistic options do they have for not always taking the next, most profit-maximizing step? I realize they did lots of safety-checking around their own release of GPT-4, but they also licensed it to Microsoft! I know they have a special charter that in principle allows them to slow down, but do you think their lead is going to grow with time?
I think you're being misled by a very specific failure mode of LLMs trained on tokenized input. The spelling and number of words are explicitly scrubbed from their input. Asking for word counts is like asking a blind person who reads and writes Braille about the shape of letters.
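To make that concrete, here's a minimal sketch using OpenAI's tiktoken library (my choice of library and encoding name, not something from the thread; exact splits vary by encoding, but the point is the same):

```python
# pip install tiktoken
# Shows that the model receives opaque token IDs, not letters,
# so character-level questions aren't directly answerable from its input.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding for illustration

for word in ["strawberry", "antidisestablishmentarianism"]:
    ids = enc.encode(word)
    pieces = [enc.decode([i]) for i in ids]  # what each ID maps back to
    print(f"{word!r} -> {ids} -> {pieces}")

# A long word typically comes through as a few multi-character chunks,
# so "how many r's are in this word?" has to be inferred indirectly,
# the way a Braille reader infers the printed shape of a letter.
```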
Good enough next-token prediction is, in principle, powerful enough to do anything you could ask someone to do using only a computer. I'm not claiming that this is a plausible route to super-powerful AI. But the "just" in "It just understands them as probabilistic sequences of tokens" seems totally unwarranted to me.
- Tens of billions of dollars of capital and lots of top talent spend the next 5-10 years making these systems more and more capable.
- They get deployed everywhere because they are way easier to work with than humans.
- Humans have little economic power.
- The world becomes more complex, and full of agents smarter than humans, working full-time to manipulate them.
- Humans are eventually stripped of power, just like we gradually came to dominate every species less smart than us.
I agree that totalitarianism is really, really scary and plausible.
But I'm saying that wrt AI, if you wait until you see something really scary, it'll probably be too late.
I'm not sure why you find that article reassuring. Wait until you hear about the shitty hardware that human brains run on, only 30 Watts! Yud isn't even saying that the current LLMs are all that dangerous, he's saying that we're pouring 10B/y and all the top talent into overcoming any limitations to making them as smart or smarter than humans. What would make you scared?
The CEO of megacorp doesn’t care about X risks.
Then why are you proposing to leave it up to Sam Altman?
I would really like to hear a better way to handle the risks if you have any ideas.
Okay, but smallpox is a good example - we keep it in an entirely disempowered state, as we do almost all wildlife. But again, if we didn't already think some things were worth keeping around in the state they want to be kept in (e.g. serial killers), what could they possibly say to change our minds?
I'm not sure what that "rational discussion about why human culture is worth preserving" would look like if one agent didn't already value those things more than almost any competing concern. How much of your time do you dedicate to preserving, idk, virus culture? Random rock pile culture? Could gonorrhea or tuberculosis convince you that its presence (and use of precious resources such as human bodies) was a net positive, if it were sufficiently eloquent?
Well if you're OK with the successor species taking over even if it's non-human, then I guess we're at an impasse. I think that's better than nothing, but way worse than humanity thriving.
I see what you mean about the possibility of a generous AI being more likely if it's not subject to competition. But I buy the argument that, no matter what it cares about, due to competing concerns, it probably won't be all that generous to us unless that's basically the only thing it cares about.
I'm dumb, Sandberg did sign it. But I also think he has pretty similar outlooks to those others.
People were doing online banking and shopping in 1984:
https://en.wikipedia.org/wiki/Telidon
People were writing about things like an all-consuming social media internet in 1909:
https://en.wikipedia.org/wiki/The_Machine_Stops
The fact that massive progress has recently happened, is continuing to happen, and that tens of billions of dollars of capital and much of the top young talent are now working in the area is very strong evidence that we're going to continue to see major advances over the next decade.
I think you're right about the cringe, bad arguments, and false dichotomies. But unfortunately I do think there are strong arguments that humans will ultimately be marginalized once we're no longer the smartest, most capable type of thing on earth. Think the Trail of Tears, or all of humanity being a naive grandma on the internet - it's only a matter of time before we're disempowered or swindled out of whatever resources we have. And all economic and power incentives will point towards marginalizing us, just like wildlife is marginalized or crushed as cities grow.
Internet atheists were all the things that AI doomers are today, and they're both right, imo.
I think our only choices are basically either to uplift ourselves (but we don't know how yet) or, like a grandma, take a chance on delegating our wishes to a more sophisticated agent. So I'm inclined to try to buy time, even if it substantially increases our chances of getting stuck with totalitarianism.
I'd say that most conspicuously missing are the most serious and impressive AI safety people, such as Paul Christiano, Owain Evans, Dylan Hadfield-Menell, Jacob Steinhardt (Edit: not Anders Sandberg), or Andrew Critch, along with the most serious AI governance people. I'd say this letter is a bit of a midwit meme, in the sense that most of the signatories aren't experts in AI safety (save Stuart Russell).
No, I'm saying it's mostly the opposite. For about the last 10 years, up until about a year ago, everyone (including OpenAI) was having their cream skimmed by Google. They + Deepmind still have about 1/4 of the really good people in DL.
Love this writeup. To be fair to Zoubin, though, he was Geoff Hinton's postdoc in 1995, and worked on deep learning way, way before it was cool. It's just that deep learning didn't really do anything on those tiny computers. You might say that this makes it all the more unforgivable that he slept on deep learning for as long as he did in the 2010s. But Gaussian processes are infinitely-wide neural networks! And the main sales pitch of Bayesian nonparametrics was that it was the only approach that could scale to arbitrarily large and complex datasets! Pitman-Yor processes and the sequence memoizer were also ultra-scalable, arbitrarily complex, generative unsupervised language models that came out of those approaches. But scale isn't all you need; you also need depth / abstraction. And before transformers, depth seemed to lead to only limited forms of abstraction, and doing something more like a search over programs seemed more promising.
The other thing that's already happened is that a bunch of the most talented DL researchers and engineers have already left Google + Deepmind. It's totally nuts.
I agree it's not very subtle, but it's confusing for a while.
Which impression?

Right, there never was, and never will be. But it's a matter of degree, we can reduce the chances.
I have no idea what you're arguing or advocating for in the rest of your reply - something about how if the world has surprising aspects that could change everything, that's probably bad and a stressful situation to be in? I agree, but I'm still going to roll up my sleeves and try to reason and plan, anyways.