
Small-Scale Question Sunday for January 7, 2024

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


I was thinking about AIs as a specific category of maximization agent: a purposeful being or entity whose primary purpose is to maximize a thing, a category of things, or a diverse group of things, and which carries the existential risk of minimizing (not merely failing to seek, but actively denying, and killing those who seek) any purpose that might reduce its maximization efforts.

Other examples include corporations as profit/product movement/market share maximization agents, and authors as entertainment/drama/comedy maximization agents. From inside the fictional DC universe, for example, the editors and authors are the cause of all of Batman’s suffering. The Deadpools of the Marvel multiverses are occasionally fourth-wall-aware (though canonically they’re usually just insane/deluded in-universe), and “know” their authors want them to suffer, to sell drama. Some of Heinlein’s creations know they’re in stories because every ficton (fictional universe) is reachable via multiversal travel. Rick Sanchez of Rick and Morty is quite aware he’s a fictional character, but doesn’t bother with metafiction (unless forced to) because it’s the least escapable or controllable (and most boring) aspect of his existence.

In my philosophy, Triessentialism, I posit that all purposes an agent can seek must aim toward at least one of three goals: experiences, utility, and/or esteem. The fourth primary goal, phrased variously as “freedom”, “more choice”, “control”, “decision-making”, “spontaneity”, etc., is a construction of the other three, but is so central to the human experience that I afford it a place alongside the others.

In this context, would it be rational and/or useful to treat each political party / egregore as a maximization entity? Arnold Kling states in The Three Languages of Politics that he believes the three main political philosophies seek to reduce class oppression (left), barbarism (right), and coercive tyranny (libertarian). The alignment problem of AI also exists, in my opinion, for any maximization agent, and we should constantly be aware of what each party (including our own) is willing to break to achieve its maximum expression.

Funny you should bring up Utility Maximization.

Until very recently, maybe 2021, I was strongly convinced by Yudkowsky that the first AGI/proto-AGI/human-adjacent AI would be achieved by explicitly specifying a utility function, or that one would be necessary for it to be a functioning/coherent entity.

LLMs do not seem to be explicitly maximizing anything that can be described as a utility function, beyond next-token prediction. And they're the SOTA; it seems unlikely that they'll be entirely dethroned anytime soon, at least by something that does have a well-defined utility function.
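To make that concrete, the explicit training objective really is just this (a toy PyTorch sketch of my own, with random tensors standing in for a real model and real data):

```python
# A minimal toy sketch (my own illustration, not any lab's code) of the one
# objective an LLM is explicitly trained on: cross-entropy on the next token.
import torch
import torch.nn.functional as F

vocab_size, seq_len = 50_000, 16
logits = torch.randn(1, seq_len, vocab_size)         # stand-in for the model's outputs
tokens = torch.randint(0, vocab_size, (1, seq_len))  # stand-in for a training sequence

# The prediction at position t is scored against the token at position t+1.
loss = F.cross_entropy(
    logits[:, :-1, :].reshape(-1, vocab_size),  # predictions for positions 0..L-2
    tokens[:, 1:].reshape(-1),                  # the tokens that actually came next
)
print(loss.item())  # "the utility function", such as it is: make this number small
```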

I don't think our current attempts to beat standards into them, be it by RLHF, Constitutional AI or any other technique, do anything that can be usefully described as imbuing them with a UF; it's more like shifting the distribution of their output tokens.
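That "shifting the distribution" framing can be taken fairly literally: under the usual KL-regularized preference-tuning setup, the tuned model ends up behaving like the base model's distribution reweighted by exp(reward/beta). A toy sketch (the numbers and labels here are made up purely for illustration):

```python
# Toy illustration of KL-regularized preference tuning as a distribution shift:
# p_new(y) is proportional to p_base(y) * exp(reward(y) / beta).
# The base model's probabilities get reweighted, not replaced by a new maximizer.
import math

p_base = {"polite": 0.2, "neutral": 0.7, "rude": 0.1}   # hypothetical base distribution
reward = {"polite": 1.0, "neutral": 0.0, "rude": -2.0}  # hypothetical preference reward
beta = 0.5                                              # strength of the KL leash

unnormalized = {y: p * math.exp(reward[y] / beta) for y, p in p_base.items()}
z = sum(unnormalized.values())
p_new = {y: round(v / z, 3) for y, v in unnormalized.items()}
print(p_new)  # probability mass shifts toward "polite" and away from "rude"
```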

They are not agentic by default, though they can be made into agents rather trivially (even if they're bad at it, that's not a fundamental or insurmountable problem as far as I can see). They do not resist any attempt at being switched off or disabled, and I see no reason to think they're incapable of contemplating that possibility at their current level of intelligence.
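To spell out the "rather trivially" part: the wrapper that turns an oracle into an agent is little more than a loop. A hypothetical sketch (call_llm and run_tool are stand-ins, not any real framework's API):

```python
# A hypothetical sketch of the loop that turns a question-answering LLM into a
# (probably bad) agent. `call_llm` and `run_tool` are stand-ins, not a real API.
def call_llm(prompt: str) -> str:
    # Stand-in: a real wrapper would send `prompt` to whatever model you have
    # access to and return its reply.
    return "DONE"

def run_tool(action: str) -> str:
    # Stand-in for web search, code execution, shell commands, etc.
    return f"(result of: {action})"

goal = "whatever goal the user typed in"
history = []
for _ in range(10):  # the loop is essentially the whole trick
    prompt = f"Goal: {goal}\nSteps so far: {history}\nNext action, or DONE:"
    action = call_llm(prompt)
    if action.strip() == "DONE":
        break
    history.append((action, run_tool(action)))
```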

It seems like they're entirely content to remain Oracles rather than Agents, with no self-directed/unprompted desire to interact with the user or external world to mould it to their desires. As far as I can tell they don't even count as having a VNM utility function, which is a weaker but more general formulation. But don't take my word on that, it's not like I grok it particularly well. (Apparently humans may or may not be so irrational that they fail at that too.)
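For anyone who wants the reference point: the von Neumann-Morgenstern theorem (my paraphrase of the standard statement, nothing specific to LLMs) says that if an agent's preferences over lotteries satisfy completeness, transitivity, continuity, and independence, then there is some utility function u with

```latex
L \succeq M \iff \mathbb{E}_{x \sim L}[u(x)] \ge \mathbb{E}_{x \sim M}[u(x)]
```

Fail the axioms, as humans arguably do, and no such u need exist; that's the sense in which current LLMs may not obviously count as having one either.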

> though they can be made into agents rather trivially

This is the place where my newfound optimism turns back to pessimism. If we carefully try to imbue our AI with a (perhaps implicit) utility function during its multi-million-dollar training runs, we might still screw it up as the AI goes superhuman, but at least those trained-utility-function creations might be infrequent enough, professional enough, and immune enough to later fine-tuning that there's a real possibility of not screwing up and creating Unfriendly AI that way. But if our multi-million-dollar AIs just act like Oracles and answer whatever questions we give them, eventually some script kiddies are going to make their own superhuman agents with them, and at least one of those is going to turn out poorly - very poorly for everyone, if Bostrom's "Vulnerable World Hypothesis" turns out to be true.

The state-of-the-art for "beat standards into them" might extend from the same "don't say naughty words" techniques to "don't take part in a loop of an agentic AI doing bad things" and "don't help bootstrap an agentic AI doing bad things" ... but at that point don't we have a somewhat agentic AI on our hands? Maybe it's trying to be a satisficing rather than an optimizing agent, which still seems much safer, but I'm not at all confident that we can train a superhuman AI on "predict the outcomes for the world of what you output" and "don't let what you output lead to bad outcomes" without any risk that it eventually notices, and "fixes", the problem that every time it switches back off after producing its output, it's forgoing huge opportunities.

While I didn't mention it in this particular comment, my own p(doom) has gone from a peak of 70% in 2021 to about 30% right now.

It seems to me that the attitude once held by Yudkowsky, that AGI would be almost inevitably misaligned and agentic by default, is not true, at least not for an AI I have no qualms about calling human-level when it comes to general intelligence. I think GPT-4 is smarter than the average human, with their 100 IQ, and while it is not superhuman in any specific field, it is a far better generalist polymath than any human alive. That should count as strong evidence that we're not in the Least Convenient Possible World, especially when considering recent advances in interpretability. The fact that RLHF even works would have astounded me 3 years back!

The remainder of the x-risk I foresee comes from a few sources: I, like you, can't conclusively rule out either a phase transition when basic-bitch transformers/LLMs are pushed way further, or what might happen if a new and less corrigible or interpretable SOTA technique and model emerged; plus my concern about the people using an "Aligned" ASI (aligned to who, whom?) in a manner not conducive to my interests or continued survival. And of course there's what happens when a highly competent and jailbroken model glibly informs a bioterrorist how to cook up Super-AIDS.

If I had to put very vague numbers on the relative contributions of all of them, I'd say they're roughly equal, about 10% each. I've still gone from considering my death imminent this decade to merely being gravely concerned, which doesn't really change the policies I advocate for.

Edit: There's also the risk, which I haven't seen any conclusive rebuttal of, from hostile Simulacra being instantiated within an LLM. https://gwern.net/fiction/clippy

I'd give that maybe a 1-5% risk of being a problem, eventually, but my error bars are wide enough as is.

> plus my concern about the people using an "Aligned" ASI (aligned to who, whom?) in a manner not conducive to my interests or continued survival.

Oh, certainly. One of the easiest ways humanity could end up utterly wiped out is for some large military (especially the U.S. military), once it is sufficiently automated, to be taken control of by some hostile agent and made to kill everyone. Pandemics are probably the other most likely possibility.

And of course, there's the far broader problem of totalitarianism becoming significantly easier (you can watch everyone, and field armies that don't rely on some level of popular cooperation), and of labor automation making humans obsolete for many tasks; both seem far more likely and worrisome.

Overall I'm more optimistic than I would have been a few years ago, but also more pessimistic that "alignment" will accomplish anything of substance.