
Small-Scale Question Sunday for January 7, 2024

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


though they can be made into agents rather trivially

This is the place where my newfound optimism turns back to pessimism. If we carefully try to imbue our AI with a (perhaps implicit) utility function during its multi-million-dollar training runs, we might screw it up as the AI goes superhuman, but at least such trained-utility-function creations might be infrequent enough, professionally built enough, and immune enough to later fine-tuning that there's a possibility of not screwing up and creating Unfriendly AI that way. But if our multi-million-dollar AIs just act like Oracles and answer whatever questions we give them, eventually some script kiddies are going to make their own superhuman agents with them, and at least one of those is going to turn out poorly - very poorly for everyone, if Bostrom's "Vulnerable World Hypothesis" turns out to be true.
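
To make the "script kiddies" worry concrete, here's a minimal sketch of how an Oracle-style model gets wrapped into an agent loop, in the AutoGPT spirit; `query_model` and `execute` are hypothetical placeholders, not any real API.

```python
# A minimal sketch of the "wrap an Oracle in a loop" pattern. Both functions
# below are hypothetical stand-ins: query_model for any question-answering
# model API, execute for whatever tool runner the scaffolding provides.

def query_model(prompt: str) -> str:
    """Placeholder for a call to a purely question-answering ("Oracle") model."""
    raise NotImplementedError("swap in a real model API here")

def execute(action: str) -> str:
    """Placeholder for running the proposed action (shell command, web request, etc.)."""
    raise NotImplementedError("swap in a real tool runner here")

def oracle_as_agent(goal: str, max_steps: int = 10) -> None:
    """Turn a passive Oracle into a crude agent by feeding its answers back as actions."""
    history = ""
    for _ in range(max_steps):
        action = query_model(
            f"Goal: {goal}\nHistory so far:\n{history}\n"
            "What single action should be taken next?"
        )
        result = execute(action)
        history += f"\nAction: {action}\nResult: {result}"
```

All the agency lives in the dumb outer loop; the model itself never has to be trained as an agent for this to work.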

The state of the art for "beat standards into them" might extend from the same "don't say naughty words" techniques to "don't take part in a loop of an agentic AI doing bad things" and "don't help bootstrap an agentic AI doing bad things" ... but at that point don't we have a somewhat agentic AI on our hands? Maybe it's trying to be a satisficing rather than an optimizing agent, which still seems much safer, but I'm not at all confident that we can train a superhuman AI for "predict the outcomes for the world of what you output" and "don't let what you output lead to bad outcomes" without any risk that it will eventually decide to fix the "problem" that, every time it finishes an output and switches off again, it's foregoing huge opportunities.
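
For what I mean by satisficing versus optimizing, a toy sketch (purely illustrative, not anyone's actual training objective; `score` is a hypothetical stand-in for whatever outcome estimate the model uses):

```python
# Toy illustration of the satisficing-vs-optimizing distinction. `score` is a
# hypothetical estimate of how good an option's predicted outcome is.

from typing import Callable, Optional, Sequence

def optimize(options: Sequence[str], score: Callable[[str], float]) -> str:
    """An optimizer always picks the highest-scoring option, however extreme."""
    return max(options, key=score)

def satisfice(options: Sequence[str], score: Callable[[str], float],
              threshold: float) -> Optional[str]:
    """A satisficer stops at the first option that is merely good enough."""
    for option in options:
        if score(option) >= threshold:
            return option
    return None  # nothing acceptable found; do nothing rather than push further
```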

While I didn't mention it in this particular comment, my own p(doom) has gone from a peak of 70% in 2021 to about 30% right now.

It seems to me that the attitude once held by Yudkowsky, that AGI would be almost inevitably misaligned and agentic by default, is not true, at least not for an AI I have no qualms about calling human-level when it comes to general intelligence. I think GPT-4 is smarter than the average human, with their 100 IQ, and while it is not superhuman in any specific field, it is a far better generalist polymath than any human alive. That should count as strong evidence that we're not in the Least Convenient Possible World, especially when considering recent advances in interpretability. The fact that RLHF even works would have astounded me 3 years back!

The remainder of the x-risk I foresee comes from a few sources: like you, I can't conclusively rule out either a phase transition when basic bitch transformers/LLMs are pushed way further, or what might happen if a new and less corrigible or interpretable SOTA technique and model emerged; plus my concern about the people using an "Aligned" ASI (aligned to who, whom?) in a manner not conducive to my interests or continued survival. And of course there's what happens when a highly competent and jailbroken model glibly informs a bioterrorist how to cook up Super-AIDS.

If I had to put very vague numbers on the relative contributions of all of them, I'd say they're roughly equal, or 10% each. I've still gone from considering my death imminent this decade to merely being gravely concerned, which doesn't really change the policies I advocate for.
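
Spelled out as back-of-the-envelope arithmetic, treating the three sources named above as roughly equal, additive contributions (an illustrative simplification, not a real probability model):

```python
# Rough decomposition of the numbers above; the additivity is an assumption
# made purely for illustration.

sources = {
    "phase transition / new, less interpretable SOTA technique": 0.10,
    "misuse of an 'Aligned' ASI by whoever controls it": 0.10,
    "jailbroken model assisting bioterrorism": 0.10,
}

p_doom = sum(sources.values())
print(f"approximate p(doom): {p_doom:.0%}")  # ~30%, down from a peak of 70% in 2021
```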

Edit: There's also the risk, which I haven't seen any conclusive rebuttal of, from hostile Simulacra being instantiated within an LLM. https://gwern.net/fiction/clippy

I'd give that maybe a 1-5% risk of being a problem, eventually, but my error bars are wide enough as is.

plus my concern about the people using an "Aligned" ASI (aligned to who, whom?) in a manner not conducive to my interests or continued survival.

Oh, certainly. One of the easiest ways that humanity could end up utterly wiped out is for some large military (especially the U.S. one), once it is sufficiently automated, to be taken control of by some hostile agent and turned on everyone. Pandemics are probably the other most likely possibility.

And of course, there's the far broader problem of totalitarianism becoming significantly easier (you can watch everyone, and field armies that don't rely on some level of popular cooperation), and of automation of labor making humans obsolete for many tasks, both of which seem far more likely and more worrisome.

I'm more optimistic overall, but also more pessimistic than I would have been a few years ago that "alignment" will accomplish anything of substance.