Small-Scale Question Sunday for January 7, 2024

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


I was thinking about AIs as a specific category of maximization agent: a purposeful being or entity whose primary purpose is to maximize a thing, a category of things, or a diverse group of things, with the existential risk that it minimizes (does not seek, actively denies, or kills those who seek) any purpose which might reduce its maximization efforts.

Other examples include corporations as profit/product movement/market share maximization agents, and authors as entertainment/drama/comedy maximization agents. From inside the fictional DC universe, for example, the editors and authors are the cause of all of Batman’s suffering. The Deadpools of the Marvel multiverses are occasionally fourth-wall-aware (though canonically they’re usually just insane/deluded in-universe), and “know” their authors want them to suffer, to sell drama. Some of Heinlein’s creations know they’re in stories because every ficton (fictional universe) is reachable via multiversal travel. Rick Sanchez of Rick and Morty is quite aware he’s a fictional character, but doesn’t bother with metafiction (unless forced to) because it’s the least escapable or controllable (and most boring) aspect of his existence.

In my philosophy, Triessentialism, I posit that all purposes an agent can seek must aim toward at least one of three goals: experiences, utility, and/or esteem. The fourth primary goal, phrased variously as “freedom”, “more choice”, “control”, “decision-making”, “spontaneity”, etc., is a construction of the other three, but is so central to the human experience that I afford it a place alongside the others.

In this context, would it be rational and/or useful to treat each political party / egregore as a maximization entity? Arnold Kling states in The Three Languages of Politics that he believes the three main political philosophies seek to reduce class oppression (left), barbarism (right), and coercive tyranny (libertarian). The alignment problem of AI also exists, in my opinion, for any maximization agent, and we should constantly be aware of what each party (including our own) is willing to break to achieve its maximum expression.

Wait, if there are four unique essences, why do you call it triessentialism?

Anyway. I believe there are categorical differences among “not seeking”, “actively denying”, and “killing those who seek” something. Those are the meaningful groups, not umbrella terms like “maximizer” or even “agent.”

Worrying that (insert political party) will go too far in service of its goals…that’s got to be one of the oldest arguments in politics.

Why is the book called The Three Musketeers if there are four of them?

It’s been a while, but isn’t one of them the narrator?

IIRC, for most of the book D'Artagnan is a guard under the command of des Essarts, rather than a musketeer under the command of de Tréville (like Athos, Porthos, and Aramis), but he does eventually get a transfer.

That is correct. There are only three musketeers, until the very end.