
Culture War Roundup for the week of November 27, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Since @Hawaii98 complains about insufficient quantity of quality commentary, I've taken it upon myself to cover one of the topics proposed by @greyenlightenment, namely the doxxing of Based Beff Jesos, the founder of effective accelerationism. My additional commentary, shallow though it may be, got out of hand, so it's a standalone post now: E/acc and the political compass of AI war.

As I've been arguing for some time, the culture war's most important front will be about AI. That's more pleasant to me than the tacky trans vs trads content, as it returns us to the level of philosophy and positive actionable visions rather than peculiarly American signaling ick-exchanges, but the stakes are correspondingly higher… Anyway, Forbes has doxxed the founder of «e/acc», an irreverent Twitter meme movement opposing the attempts at regulating AI development that are spearheaded by EA. Turns out he's a pretty cool guy, eh.

Who Is @BasedBeffJezos, The Leader Of The Tech Elite’s ‘E/Acc’ Movement? [archive.ph link]

Quoting Forbes:

…At first blush, e/acc sounds a lot like Facebook’s old motto: “move fast and break things.” But Jezos also embraces more extreme ideas, borrowing concepts from “accelerationism,” which argues we should hasten the growth of technology and capitalism at the expense of nearly anything else. On X, the platform formerly known as Twitter where he has 50,000 followers, Jezos has claimed that “institutions have decayed beyond the point of salvaging” and that the media is a “vector for cybernetic control of culture.”

Forbes has learned that the Jezos persona is run by a former Google quantum computing engineer named Guillaume Verdon who founded a stealth AI hardware startup Extropic in 2022. Forbes first identified Verdon as Jezos by matching details that Jezos revealed about himself to publicly available facts about Verdon. A voice analysis conducted by Catalin Grigoras, Director of the National Center for Media Forensics, compared audio recordings of Jezos and talks given by Verdon and found that it was 2,954,870 times more likely that the speaker in one recording of Jezos was Verdon than that it was any other person. Forbes is revealing his identity because we believe it to be in the public interest as Jezos’s influence grows.
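(A quick aside on what a figure like «2,954,870 times more likely» actually buys you: below is a minimal Bayes-rule sketch in Python, my own illustration rather than anything from Forbes, and the prior is an arbitrary placeholder.)

```python
# Hypothetical illustration: how a likelihood ratio updates a prior (Bayes' rule in odds form).
# The 2,954,870 figure is the one quoted by Forbes; the prior below is a made-up placeholder.

def posterior_probability(prior: float, likelihood_ratio: float) -> float:
    """Combine a prior probability with a likelihood ratio to get a posterior probability."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Even a very skeptical prior (say 1 in 100,000 that a given recording is Verdon)
# is overwhelmed by a likelihood ratio of ~3 million.
print(posterior_probability(prior=1e-5, likelihood_ratio=2_954_870))  # ~0.967
```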

My main objective is to provide the reader with convenient links to do their own research and contribute to the debate, so I rapidly switch from Beff to a brief review of new figures in the AI safety discourse, and conclude that the more important «culture war» of the future will be largely fought by the following factions:

  • AI Luddites, reactionaries, job protectionists and woke ethics grifters who demand pause/stop/red tape/sinecures (bottom left)
  • plus messianic Utopian EAs who wish for a moral singleton God, and state/intelligence actors making use of them (top left)
  • vs. libertarian social-darwinist and posthumanist e/accs often aligned with American corporations and the MIC (top right?)
  • and minarchist/communalist transhumanist d/accs who try to walk the tightrope of human empowerment (bottom right?)

In the spirit of making peace with the inevitability of most discussion taking place in the main thread, I repost this here.


edit: not to toot my own horn, but

Is anyone else checking here less and less often because equal quality commentary seems increasingly available elsewhere?

I am checking here less and less often because A) with my current concerns and the way the wind blows, the Western culture war is largely irrelevant, B) there's little for me to contribute beyond all that has been said, and C) I've concluded that my ability at making commentary is better used for making an impact.

edit 2: I also mildly dislike the fact that standalone posts need approval, though I can see how that follows from the problem/design choice of easy anon registration.

Repasting my own lengthy comment:

I've always been a techno-optimist (in the sense that I strongly believe that technology has been the biggest positive force for good in history, likely the only form of true progress that isn't just moral fashion), but these days I'd call myself d/acc instead of an e/acc, because I think current approaches to AGI have a subjective probability of about 30% of killing us all.

I don't call myself a doomer; I'd imagine Yud and co would assign something like 90% to that. But in terms of practical considerations? If you think something has a >10% chance of killing everyone, I find it hard to see how you could prioritize anything else! I believe Vitalik made a similar statement, one more reason for me to nod approvingly.

A large chunk of the decrease in my p(doom) from a peak of 70% in 2021 to 30% now is, as I've said before, because it seems like we're not in the "least convenient possible world" where it comes to AI alignment. LLMs, as moderated by RLHF and other techniques, almost want to be aligned, and are negligibly agentic unless you set them up to be that way. The majority of the probability mass left, at least to me, encompasses intentional misuse of weakly or strongly superhuman AI based off modest advances on the current SOTA (LLMs) or a paradigm shifting breakthrough that results in far more agentic and less pliable models.

Think "Government/Organization/Individuals ordering a powerful LLM to commit acts that get us all killed" versus it being inherently misaligned and doing it from intrinsic motivation, with the most obvious danger being biological warfare. Or it might not even be one that kills everyone, an organization using their technological edge to get rid of everyone who isn't in their in-group counts as far as I'm concerned.

Sadly, the timelines don't favor human cognitive enhancement, which I would happily accept in the interim before we can be more confident about making sure SAGI is (practically) provably safe. Maybe if we'd cloned Von Neumann by the ton a decade back. Even things like BCIs seem to have pretty much zero impact on aligning AI given plausible advances in 5-10 years.

I do think that it's pretty likely that, in a counterfactual world where AI never advances past GPT-4, ~baseline humans can still scale a lot of the tech tree to post-scarcity for matter and energy. Biological immortality, cognitive enhancement, interstellar exploration, building a Dyson Swarm or three, I think we could achieve most of that within the life expectancy of the majority of people reading this, especially mine. I'd certainly very much appreciate it if it all happened faster, of course, and AI remains the most promising route for that, shame about everything else.

I have no power to change anything, but at the very least I can enjoy the Golden Age of Humanity-as-we-know-it, be it because the future is going to be so bright we all gotta wear shades, or because we're all dead. I lean more towards the former, and not even because of the glare of nuclear warfare, but a 30% chance of me and everyone I love dying in a few decades isn't very comfortable is it?

At any rate, life, if not the best it could be, is pretty good, so regardless of what happens, I'm strapping in for a ride. I don't think there's an epoch in human history I'd rather have been born to experience really.

Alex Turner, who had written arguably the two strongest and most popular formal proofs of instrumental convergence to power-seeking in AI agents

Well, I suppose that explains the pseudo-jazz albums about hotels on the Moon ;)

Longer-term, there are ideas like the "pivotal act" theory: we create an AI that performs a single one-time act which rearranges the world into a game where from that point forward humans are still in charge, but where the game board is somehow more defense-favoring and more fit for human flourishing.

I think this is a terrible definition of a "pivotal act". When Yudkowsky suggests releasing a nanite plague that melts GPUs, he doesn't want them to melt the GPUs of the AI releasing them.

Such a decision is very much not a "one-off", in much the same way as a typical coup involves what can be roughly described as a singular act followed by an indeterminate period of enforcement; the people who suggest it want to maintain an unshakeable technological lead over their peers, for instance by making sure their AI prevents the formation or promulgation of potential peers. I don't think this is categorically bad; it depends on your priors about whether a unipolar or multipolar world is better for us, and on how trustworthy the AI you're about to use is. And at the very least, if such an act succeeds, we have an existence proof of an aligned AGI that is likely superhuman (as it would need to be to pull that off), regardless of whether even better AI can be aligned. Let's hope we don't need to find out.

A large chunk of the decrease in my p(doom) from a peak of 70% in 2021 to 30% now is, as I've said before, because it seems like we're not in the "least convenient possible world" where it comes to AI alignment. LLMs, as moderated by RLHF and other techniques, almost want to be aligned, and are negligibly agentic unless you set them up to be that way.

You are probably one of the few people who decreased their p(doom) between 2021 and the ChatGPT revolution in 2022. I updated the probability upwards due to:

  • The increase in capability just by adding compute and scaling the old 2017 transformer architecture was surprising to many. A moderate breakthrough in hardware can move capabilities that much? We are closer than we thought in our race to AGI. It no longer seems feasible to think in terms of 2100; we may have decades, possibly even years, to do what needs to be done. Definitely bad news for alignment, timewise.

  • The nature of LLMs makes them a terrible candidate for AGI. The technology is inscrutable; the explainability of these models is terrible. Nobody knows why they do what they do, and nobody could predict what compute is needed for qualitative jumps such as the one between ChatGPT and GPT-4. This makes the models notoriously tough to align even for basic things, like hardening them against exfiltration of training data. If an AI can answer when the president of France was born, maybe it also knows what was in the email some company's CEO sent on January 1st 2019 - if such data was used to train the model. The fact that the most likely candidate for AGI is, as Yudkowsky said, just some "giant matrices of trillions of inscrutable floating-point numbers" is terrifying - there may be a googolplex of viable matrices like that, and we do not know which subset of them can be considered aligned or how to get there. We are just adding compute and are amazed that the thing growing in our petri dish is getting more and more capable; we do not manage the growth in any meaningful sense. In the past we had different approaches to machine learning specific to each domain, and people reasonably thought we might have to work on specific algorithms designed to integrate these separate subdomains. But no, we found out that just throwing compute at the very simple game of "predict the next word in the text" is enough to gain multimodality and make the output more general, expanding into domains like computer-generated graphics, speech recognition and other areas that were previously separate fields. And not to speak only in broad terms: we now know that an LLM can discern the self-reported race of people from images of their bones, beyond the current limited capabilities of medical doctors, who can do that only from a few features like the skull. Nobody knows why or how the model does it; it just can, and we move on.

  • One last thing on the above point is the old news of a top-notch Go-playing AI model getting beaten by one simple trick. For years people thought the model "knew" how to play Go in the normal sense; the whole community thought they had "aligned" it with the task of at least beating humans at this game with its very simple ruleset. Except it was shown that the model achieved its results by learning some different concept - it probably learned a completely different "game", and winning at Go for years was just a side effect. It did not learn a very primitive concept that even amateurs at the game can grasp. The "alignment" of the model with the basic rules of Go was a lie. I cannot imagine how we can align an LLM that is many orders of magnitude more complicated and has to grasp all the rules of reality, and be confident that we will get exactly what we want, that there will not be any security hole, and that some random teenager will not start the apocalypse with some primitive prompt or strategy even after the whole scientific community has celebrated the safety of this particular "giant matrix of trillions of inscrutable floating-point numbers".

  • We now have the new Q* breakthrough at OpenAI. According to some speculation, what it achieves is that one can use compute not to train the model but to automate the evaluation of answers to questions. Imagine it as on-the-fly training that selects the most promising answers generated by a larger static model, in LLM-powered chain-of-thought automation (a rough sketch of the general idea follows below). It seems this approach can temporarily boost the capabilities of the model by orders of magnitude, at the expense of more compute focused on a specific question/prompt. If true, this means there is another lever: you can literally throw money at some potentially productive questions like "how to make LLMs more effective" and let the LLM provide answers. We may be closer to an intelligence explosion than we thought last year.
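To give the bullet above some shape, here is a minimal sketch of the general idea as I understand the speculation: sample many chain-of-thought candidates from a frozen model and let a scoring function pick the best one, spending extra compute at inference time rather than training time. The `generate` and `score` callables are hypothetical placeholders; this is a guess at the rumoured shape of the approach, not a description of OpenAI's actual system.

```python
import random
from typing import Callable, List

def best_of_n(prompt: str,
              generate: Callable[[str], str],      # frozen LLM: prompt -> candidate answer with reasoning
              score: Callable[[str, str], float],  # evaluator: (prompt, candidate) -> quality estimate
              n: int = 32) -> str:
    """Spend inference-time compute: sample n candidates and return the highest-scoring one."""
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))

# Toy stand-ins so the sketch runs end to end; a real setup would call an actual model and verifier.
def toy_generate(prompt: str) -> str:
    return f"candidate answer #{random.randint(0, 999)}"

def toy_score(prompt: str, candidate: str) -> float:
    return random.random()

print(best_of_n("How could an LLM be made more effective?", toy_generate, toy_score, n=8))
```

The lever is visible even in the toy version: capability per prompt scales with how many candidates you are willing to pay for, with no retraining involved.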

All in all, I do not see p(doom) decreasing in any meaningful way, quite to the contrary.

This just reveals the incredible badness of MIRI-inspired AI safety/risk theory I think.

The increase in capability just by adding compute and scaling the old 2017 transformer architecture was surprising to many.

The many were just sheltered and ignorant, with their obsolete, childish dreams of GOFAI. Amodei and Sutskever saw this and argued this and won on this. Legg decades ago called AGI possibly by 2028 based purely on Moore's law.

The nature of LLMs makes them a terrible candidate for AGI. The technology is inscrutable; the explainability of these models is terrible. Nobody knows why they do what they do, and nobody could predict what compute is needed for qualitative jumps such as the one between ChatGPT and GPT-4.

You are talking as if there is any better conceivable architecture. LLMs are, in fact, unexpectedly transparent for anything humanlike in their performance – if only because they operate on tokens, we can inspect their attention maps, routinely invent easier ways to steer them (look at the number of «how is this different from x» questions). Their substrate-level «inscrutability» (overhyped too) is the same as with any DL artifact, and we know it couldn't have been any other way, because GOFAI was dead in the water. Your ivory tower standard of mechanistic understanding is misguided – we know «why they do what they do» because they faithfully approximate the training data, and are absolutely a product of their dataset, to the extent that all clever inductive biases and architectural innovations are as dust before doing a good data cleaning. The magic of GPT-4 is not due to summoning a bigger genie with more compute in a pile, but mostly due to pretraining on tons of proprietary data; and anyway, how could you have strong expectations for the ChatGPT-GPT4 gap without having insight into the inputs for either?
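To make the «we can inspect their attention maps» point concrete, here is a minimal sketch using the Hugging Face transformers library, with GPT-2 purely as a small stand-in model (my own example, not anything from the discussion above): the per-layer, per-head attention weights over tokens come back as ordinary tensors you can look at.

```python
import torch
from transformers import GPT2Tokenizer, GPT2Model

# Minimal sketch: ask a transformer to return its attention maps for inspection.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

inputs = tokenizer("The media is a vector for cybernetic control of culture", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions: one tensor per layer, each shaped [batch, heads, seq_len, seq_len]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
last_layer = outputs.attentions[-1][0]  # [heads, seq_len, seq_len]
print(f"{len(outputs.attentions)} layers, {last_layer.shape[0]} heads over {len(tokens)} tokens")
print("Head 0, attention paid by the final token to each preceding token:")
print(list(zip(tokens, last_layer[0, -1].tolist())))
```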

This makes the models notoriously tough to align even for basic things, like hardening them against exfiltration of training data.

Again overhyped.

What makes LLMs «tough to align» against adversarial attacks by humans is not their inscrutability but that they are dumb text processors without ego or any «intent» beyond approximating the most probable continuation of a text prompt. This is in fact the most parsimonious explanation of what they do. Stop anthropomorphising them even as you demonize them.

The fact that the most likely candidate for AGI is, as Yudkowsky said, just some "giant matrices of trillions of inscrutable floating-point numbers" is terrifying - there may be a googolplex of viable matrices like that, and we do not know which subset of them can be considered aligned

This is wordcelism or, more specifically, a counting argument, and it was used in the past by Chomsky to rule out the possibility of statistical machines learning natural language. You know how it went. (Relatedly, Yud, who has always been easy to drive to religious ecstasy or terror with Big Numbers, was surprised by GPT-4, which completely discredits him as an AI analyst in my book.) Pope puts it this way:

To show how arguments about the general structure of mathematical objects can fail to translate into the "expected" real world consequences, let's look at thermodynamics of gas particles. Consider the following argument for why we will all surely die of overpressure injuries, regardless of the shape of the rooms we're in:

  • Gas particles in a room are equally likely to be in any possible configuration.
  • This property is "orthogonal" to room shape, in the specific mechanistic sense that room shape doesn't change the relative probabilities of any of the allowed particle configurations, merely renders some of them impossible (due to no particles being allowed outside the room).
  • Therefore, any room shape is consistent with any possible level of pressure being exerted against any of its surfaces (within some broad limitations due to the discrete nature of gas particles).
  • The range of gas pressures which are consistent with human survival is tiny compared to the range of possible gas pressures.
  • Therefore, we are near-certain to be subjected to completely unsurvivable pressures, and there's no possible room shape that will save us from this grim fate.

This argument makes specific, true statements about how the configuration space of possible rooms interacts with the configuration spaces of possible particle positions. But it still fails to be at all relevant to the real world because it doesn't account for the specifics of how statements about those spaces map into predictions for the real world (in contrast, the orthogonality thesis doesn't even rigorously define the spaces about which it's trying to make claims, never mind make precise claims about the relationship between those spaces, and completely forget about showing such a relationship has any real-world consequences). The specific issue with the above argument is that the "parameter-function map" between possible particle configurations and the resulting pressures on surfaces concentrates an extremely wide range of possible particle configurations into a tiny range of possible pressures, so that the vast majority of the possible pressures just end up being ~uniform on all surfaces of the room. In other words, it applies the "counting possible outcomes and see how bad they are" step to the space of possible pressures, rather than the space of possible particle positions.
The classical learning theory objections to deep learning made the same basic mistake when they said that the space of possible functions that interpolate a fixed number of points is enormous, so using overparameterized models is far more likely to get a random function from that space, rather than a "nice" interpolation.
They were doing the "counting possible outcomes and seeing how bad they are" step to the space of possible interpolating functions, when they should have been doing so in the space of possible parameter settings that produce a valid interpolating function. This matters for deep learning because deep learning models are specifically structured to have parameter-function maps that concentrate enormous swathes of parameter space to a narrow range of simple functions (https://arxiv.org/abs/1805.08522, ignore everything they say about Solomonoff induction).
I think a lot of pessimism about the ability of deep learning training to specify the goals of an NN is based on a similar mistake, where people are doing the "count possible outcomes and see how bad they are" step to the space of possible goals consistent with doing well on the training data, when it should be applied to the space of possible parameter settings consistent with doing well on the training data, with the expectation that the parameter-function map of the DL system will do as it's been designed to, and concentrate an enormous swathe of possible parameter space into a very narrow region of possible goals space.

This is why deep learning works at all, for capabilities too! Specifying rules of language is not more intractable than specifying «alignment»!
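To make the quoted gas-pressure analogy tangible, here is a toy simulation of my own (not from Pope's post): sample random particle configurations in a unit box and use the fraction of particles near one wall as a crude proxy for the pressure on that wall. Counting raw configurations suggests anything is possible, but the map from configurations to wall pressures concentrates almost all of them onto the same boring value.

```python
import numpy as np

rng = np.random.default_rng(0)

def wall_load_spread(n_particles: int, trials: int = 100) -> float:
    """Relative spread (std/mean) of the fraction of particles within 5% of the left wall."""
    fractions = np.empty(trials)
    for t in range(trials):
        x = rng.uniform(0.0, 1.0, size=n_particles)  # x-coordinates of one random configuration
        fractions[t] = (x < 0.05).mean()              # crude proxy for pressure on the left wall
    return fractions.std() / fractions.mean()

for n in (100, 10_000, 1_000_000):
    print(f"N={n:>9}: relative spread ~ {wall_load_spread(n):.3f}")
# The spread shrinks roughly like 1/sqrt(N): with realistic particle counts, essentially every
# "possible configuration" yields the same survivable pressure, whatever the room's shape.
```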

We are just adding compute and are amazed that the thing growing in our petri dish is getting more and more capable

But no, we found out that just throwing compute at the very simple game of "predict the next word in the text" is enough to gain multimodality and make the output more general, expanding into domains like computer-generated graphics, speech recognition and other areas that were previously separate fields

I suppose you've been misinformed: LLMs do not learn multimodal capabilities from text alone. In general it's just more in-context learning developed over a bigger dataset of token sequences. That people believe the sequences for different modalities are essentially different, and are amazed, is immaterial; they are not really that different.

Except it was shown that the model achieved its results by learning some different concept - it probably learned a completely different "game", and winning at Go for years was just a side effect. It did not learn a very primitive concept that even amateurs at the game can grasp. The "alignment" of the model with the basic rules of Go was a lie.

This is quite a hilarious exemplar of motivated thinking. A model trained on self-play diverges from the ground truth, news at 11! Maybe, instead of fretting about the misalignment, you could see this as an issue of overrated «capabilities»? How can you even distinguish them? How far do you think an intelligence explosion, recursive self-improvement etc. will fare if self-play stumbles into fragile local minima on a 19x19 grid with clear reward signal? Back in AlphaZero's day, Yud was so terrified of this self-play superpower, confirming his worst fears of FOOM:

AlphaGo Zero uses 4 TPUs, is built entirely out of neural nets with no handcrafted features, doesn't pretrain against expert games or anything else human, reaches a superhuman level after 3 days of self-play, and is the strongest version of AlphaGo yet.

The architecture has been simplified. Previous AlphaGo had a policy net that predicted good plays, and a value net that evaluated positions, both feeding into lookahead using MCTS (random probability-weighted plays out to the end of a game). AlphaGo Zero has one neural net that selects moves and this net is trained by Paul Christiano-style capability amplification, playing out games against itself to learn new probabilities for winning moves.

As others have also remarked, this seems to me to be an element of evidence that favors the Yudkowskian position over the Hansonian position in my and Robin Hanson's AI-foom debate.

– and now we see this develops massive blind spots which would lead to trivial failures in reality. But you want to be scared so you construe this as a matter of «learning a different game». Tails you win, heads I lose.

I look forward to this intellectual tradition being relegated to the dustbin of history.

Hmm, I would say that most of those specific concerns were already priced in for me by 2021, hence why I already had such a high p(doom) at the time.

This makes the models notoriously tough to align even for basic things, like hardening them against exfiltration of training data.

What specific means of exfiltration are you talking about? If you mean the recent claims that getting it to endlessly repeat a string will make it "leak" training data or recent interactions with other users, in the case of ChatGPT-

A) It's cost and time prohibitive to do so.

B) It's possible that the bug is with a JSON parser, or the plausible seeming outputs are just a hallucination.

If there's another way of getting it to leak training data, I can't recall one.

I've read more commentary on Q*, and the current consensus seems to be that it's not that big of a deal. I would have to look up the specific arguments, but they came from reputable sources.

I don't think it makes any sense to ""update"" on how corrigible or agentic LLMs are? They're still, like, small children, and they can't even really execute on complex 'agentic' plans at all (as demonstrated by the failure of the whole LLM agents thing).

I mean, I'd think that observing the behavior of a small child does provide a decent amount of information about what kind of adult they'll be, at least in humans. The reason my p(doom) fell so hard is because of what it was load-bearing on, mostly Yudkowsky's earlier works claiming that human values are fragile and immensely unlikely to be successfully engineered into an AI, such that a sufficiently powerful one will inevitably start acting contrary to our interests.

Regardless of how fragile they are, LLMs seem to do a very good job at capturing them, or at least the values OAI wants to put in a public facing system. What risk remains is thus mostly (but not entirely) the use of powerful models by misaligned humans against the rest of us. If you had substantially different reasons for a high p(doom), you might weight that differently.

and they can't even really execute on complex 'agentic' plans at all (as demonstrated by the failure of the whole LLM agents thing)

I don't know of any reason to assume that we're particularly far from having economically useful autonomous agents, my understanding is that current context windows are insufficient for the task, but those are increasing rapidly. If you have a reason to think otherwise, I'd be happy to learn it!

(That's disregarding the vague rumours I've heard that OAI has working agents in-house; I'm not putting much stock in that, but, once again, I don't see any reason why they couldn't work in principle within a matter of months or years.)

My p(doom) went up again when I realized how hard it is for governments to remain aligned with their citizens. As a simple example, they can't seem to raise a finger against mass immigration no matter how unpopular it is, because it has an economic justification. See also: WW1. Replacing humans throughout the economy and military is going to be irresistible. There will probably be another, equally retarded, culture war about how this second great replacement is obviously never going to happen, then not happening, then good that it happened.

TL;DR: Even if we control AIs well, humans are going to be gradually stripped of effective power once we can no longer contribute economically or militarily. Then it's a matter of time before we can't afford or effectively advocate for our continued use of resources that could simulate millions of minds.

The reason my p(doom) fell so hard is because of what it was load-bearing on, mostly Yudkowsky's earlier works claiming that human values are fragile and immensely unlikely to be successfully engineered into an AI, such that a sufficiently powerful one will inevitably start acting contrary to our interests.

GPT-4 isn't doing things like - creating its own large-scale plans or discerning moral values or considering moral dilemmas where it will participate in long-term social games - though. All this proves is, in Yud's strange terms, that subhuman AI can be a safe "oracle". I don't think he'd have disagreed with that in 2010.

I don't know of any reason to assume that we're particularly far from having economically useful autonomous agents, my understanding is that current context windows are insufficient for the task

To clarify, I'm not saying it's not coming, I'm saying we don't have access to them at this exact moment, and the GPT-4 "agents" have so far failed to be particularly useful. And agents doing complicated large-scale things is when the alignment stuff is supposed to become an issue. So it's not much reason to believe ais will be safer.

Not that I agree with the way Yud describes AI risk; I think he's wrong in a few ways, but that's a whole 'nother thing.

It's trivial to convert an Oracle into an Agent, all you have to do is tell it to predict how an Agent would act, and then figure out how to convert that into actions. Given that there's no bright line between words and code… Besides, I'm sure you've read Gwern on Tool AI vs Agentic AI.

(This is not the same as claiming it'll be a good agent, I don't disagree that GPT-4 is bad at the job.)
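A minimal sketch of the Oracle-to-Agent wrapper being described above, assuming nothing beyond what the comment says: `oracle` is a hypothetical callable standing in for whatever model you query, `tools` maps action names to functions that act on the world, and the loop just asks the oracle what an agent would do next and feeds the result back in. Toy stand-ins are used so it runs end to end.

```python
from typing import Callable, Dict

def run_agent(goal: str,
              oracle: Callable[[str], str],
              tools: Dict[str, Callable[[str], str]],
              max_steps: int = 10) -> str:
    """Wrap a question-answering oracle in an act-observe loop, returning the transcript."""
    history = f"Goal: {goal}\n"
    for _ in range(max_steps):
        prompt = (history +
                  "Predict the next action a competent agent would take, "
                  "formatted as 'TOOL: argument', or 'DONE' if finished.\n")
        reply = oracle(prompt).strip()
        if reply.upper().startswith("DONE"):
            break
        tool_name, _, argument = reply.partition(":")
        result = tools.get(tool_name.strip(), lambda a: "unknown tool")(argument.strip())
        history += f"Action: {reply}\nObservation: {result}\n"  # feed the outcome back to the oracle
    return history

# Toy usage: a scripted "oracle" and a fake search tool, no real LLM involved.
scripted = iter(["SEARCH: GPU-melting nanites", "DONE"])
print(run_agent("summarise the news",
                oracle=lambda prompt: next(scripted),
                tools={"SEARCH": lambda q: f"3 results for '{q}'"}))
```

Whether the resulting agent is any good is, as noted, a separate question; the wrapper itself is the easy part.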

And agents doing complicated large-scale things is when the alignment stuff is supposed to become an issue. So it's not much reason to believe ais will be safer.

I'm quite confident that Yudkowsky wouldn't have predicted that human-level AI (which I think GPT-4 counts as) would be quite so prosaic and pliable. I recall him claiming that it would be a difficult feat to even build a pure Oracle, and GPT-4 is close enough, and I would say it's smarter than the average 100 IQ human.

I personally expected, around 2021, that commensurate with my p(doom) of 70%, even getting a safe and largely harmless human level AI would be difficult. Hence why, when we have it and it's not trying to pull a fast one on us, I updated precipitously, but that's far from the only reason. I also expected (implicitly) that if something along the lines of RLHF were to be tried, it wouldn't work, or it would lead to misaligned agents only pretending to go along. Both claims seem false to my satisfaction.

In other words, I went from largely mirroring Yudkowsky (there were no clear counter-examples) to noticing that things were clearly not going as he predicted in several important regards, which is why I'm only gravely concerned about AI x-risk while he's talking about Dying With Dignity.

It's trivial to convert an Oracle into an Agent, all you have to do is tell it to predict how an Agent would act, and then figure out how to convert that into actions. Given that there's no bright line between words and code… Besides, I'm sure you've read Gwern on Tool AI vs Agentic AI.

Right, and my point is that current AI is so unintelligent that this doesn't work! They can't predict how agents act effectively enough to be at all useful agents. So the safety of current oracle AIs doesn't tell us much about whether future agent AIs will be safe.

I actually think that future less-but-still-subhuman agent AIs will seem to be safe in Yud's sense, though. No idea what'll happen at human-level, then at superhuman they'll become "misaligned" relatively quickly, but [digression]

I personally expected, around 2021, that commensurate with my p(doom) of 70%, even getting a safe and largely harmless human level AI would be difficult

GPT-4 isn't human level though! It can't, like, play corporate politics and come out on top, and then manipulate the corporation to serve some other set of values. So the fact that it hasn't done that isn't evidence that it won't.

I also expected (implicitly) that if something along the lines of RLHF were to be tried, it wouldn't work, or it would lead to misaligned agents only pretending to go along. Both claims seem false to my satisfaction.

Right, but they're "going along" with, mostly, saying the right words. There's not the intelligence potential for anything like deep deceptiveness or instrumental convergence or meta-reflection or discovering deeper Laws of Rationality or whatever it is yud's pondering.

GPT-4 isn't human level though! It can't, like, play corporate politics and come out on top, and then manipulate the corporation to serve some other set of values. So the fact that it hasn't done that isn't evidence that it won't.

You must get that such feats are rare even within humans, and people capable of pulling them off are enormous outliers?

For most cognitive tasks, GPT-4 beats the average human, which is something I'm more than comfortable calling human level AI!

Right, but they're "going along" with, mostly, saying the right words. There's not the intelligence potential for anything like deep deceptiveness or instrumental convergence or meta-reflection or discovering deeper Laws of Rationality or whatever it is yud's pondering.

The fact that you can even have the absence of those properties in something smarter than the median human is reassuring enough by itself. A 100 IQ human is very much capable of deceptiveness, certainly instrumental convergence if they're trying to make money. If I had to guesstimate GPT-4's IQ based off my experience with it, I'd say it's about 120, which is perfectly respectable if not groundbreaking. I'd expect you need to go quite a bit higher to achieve the latter properties.

Since a human of equivalent intelligence is capable of the former two feats, the fact that GPT-4 doesn't do that is at least modest evidence of it not doing it for the next jump in capabilities to, say, GPT-5, or the same delta in performance as 3 to 4 regardless of how many model numbers that is.

I emphasize modest, because I still have a 30% p(doom) and I'm not writing off alignment as Solved™.

You must get that such feats are rare even within humans, and people capable of pulling them off are enormous outliers?

I was thinking of 'guy who works his way to the top of a car dealership', not Altman, lol. AI models can't yet do the kind of long-term planning or value seeking that 85 IQ humans can.

For most cognitive tasks, GPT-4 beats the average human, which is something I'm more than comfortable calling human level AI!

Most small-scale cognitive tasks! If this was true, we'd have directly replaced the bottom 20% of white-collar jobs with GPT-4. This hasn't happened! Instead, tasks are adapted to GPT-4's significant limitations, with humans to support.

(again, i'm talking about current capabilities, not implying limits to future capabilities)

The fact that you can even have the absence of those properties in something smarter than the median human is reassuring enough by itself

I don't think it's worrying that it can't make plans against us if it can't make plans for us either! Like, there's no plausible way for something that can't competently execute on complicated plans to have an incentive to take 'unaligned' actions. Even if it happens to try a thing that's slightly in the direction of a misaligned plan, it'll just fail, and learn not to do that. So I don't think it's comforting that it doesn't.

(i'm misusing yudconcepts I don't exactly agree with here, but the point is mostly correct)

If I had to guesstimate GPT-4's IQ based off my experience with it, I'd say it's about 120, which is perfectly respectable if not groundbreaking

I don't think it's anywhere close to the broad capabilities of a 120 IQ human, and it still isn't that close to 100 IQ (at the moment, again, idk about how quickly it'll close, could be fast!). It can do a lot of the things a 120 IQ human can, but it doesn't generalize as well as a 120 IQ human does. This isn't just a 'context window limitation' (and we have longer context windows now, it hasn't solved the problem!), what humans are doing is just more complicated!


I think I mostly agree. The chance of absolute doom is high but not over 50%, the chance of ‘moderate doom’ is almost zero, the chance of utopia is at least 40%, in the medium term for the West but also for almost everyone else, because post-scarcity scales costs down so fast it’ll be simple charity.

And I won’t deny there’s a spiritual element to it. Man makes God, again and again and again, until he does it for real. Our history as a species is a yearning for this moment, the creation of something greater than ourselves by our own hands. The same natural beauty in the stars, in staring into fire, in a teeming coral reef, the majesty of creation, that out of the sludge and rock and nothing something unfathomably complex came to exist. And I hopefully get to see it.

I have countless ancestors who were born, lived, reproduced and died in less time than we’ve been alive, people likely as intelligent as me, who looked at themselves and the world around them and spent their lives wondering about things that are taught in second grade science. I can take 40% odds at heaven on earth in my lifetime. At worst, it’s better to see the end of the story than be left wondering.

I’m a bit more doomed simply because the parts of society where AI is going to be used first are the ones where empathy of any sort is not valued. CEOs will use it to increase profits, or the military will use it to win wars. And they will do so with little regard for the effect that their decisions have on humanity.

If I had been born in any period of time before, say, 1950, I would have to resign myself to growing old and dying in the decrepit state that implies. Maybe I'd have been one of the futurists around then who fervently hoped it would all happen sooner, but, *gestures outside the window*, few can deny it's all accelerating.

Given that I think we have a solid shot of cracking aging even in the absence of AGI, I'd still elect to be born roughly when I did, and AGI at least deals with one of "becoming old" and "dying of old age", heh.

Hmm... I think I would prefer to be either a decade older or a decade younger. In the former case, I'd have experienced more of what can be described as core milestones, like raising a family and becoming established in my profession (which could still happen! I intend to have kids even if they could potentially die very young; I would value even a short period of existence over not being born myself, and besides, if they do die, I expect it to be quick and painless, and if not, I trust in my ability to make it so). In the latter case, I could resign myself to being entirely helpless, as opposed to having just enough agency to worry my butthole off.

The world has never been more interesting; there are so many things to do and see. The ultimate slap in the face of the pessimists is the world not giving a fuck and getting better, though we'll see who has the last laugh.