
Culture War Roundup for the week of November 27, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Since @Hawaii98 complains about insufficient quantity of quality commentary, I've taken it upon myself to cover one of the topics proposed by @greyenlightenment, namely the doxxing of Based Beff Jesos, the founder of effective accelerationism. My additional commentary, shallow though it may be, got out of hand, so it's a standalone post now: E/acc and the political compass of AI war.

As I've been arguing for some time, the culture war's most important front will be about AI. That's more pleasant to me than the tacky trans-vs-trads content, as it returns us to the level of philosophy and positive, actionable visions rather than peculiarly American signaling ick-exchanges, but the stakes are correspondingly higher… Anyway, Forbes has doxxed the founder of «e/acc», an irreverent Twitter meme movement opposing the attempts at regulating AI development that are spearheaded by EA. Turns out he's a pretty cool guy, eh.

Who Is @BasedBeffJezos, The Leader Of The Tech Elite’s ‘E/Acc’ Movement? [archive.ph link]

Quoting Forbes:

…At first blush, e/acc sounds a lot like Facebook’s old motto: “move fast and break things.” But Jezos also embraces more extreme ideas, borrowing concepts from “accelerationism,” which argues we should hasten the growth of technology and capitalism at the expense of nearly anything else. On X, the platform formerly known as Twitter where he has 50,000 followers, Jezos has claimed that “institutions have decayed beyond the point of salvaging” and that the media is a “vector for cybernetic control of culture.”

Forbes has learned that the Jezos persona is run by a former Google quantum computing engineer named Guillaume Verdon who founded a stealth AI hardware startup Extropic in 2022. Forbes first identified Verdon as Jezos by matching details that Jezos revealed about himself to publicly available facts about Verdon. A voice analysis conducted by Catalin Grigoras, Director of the National Center for Media Forensics, compared audio recordings of Jezos and talks given by Verdon and found that it was 2,954,870 times more likely that the speaker in one recording of Jezos was Verdon than that it was any other person. Forbes is revealing his identity because we believe it to be in the public interest as Jezos’s influence grows.
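
An aside on that likelihood-ratio figure: in Bayesian terms it simply multiplies whatever prior odds you bring to the question. A minimal sketch in Python; the priors below are made-up for illustration, not anything from the article:

    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    def posterior_probability(prior: float, likelihood_ratio: float) -> float:
        prior_odds = prior / (1 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1 + posterior_odds)

    LR = 2_954_870  # the ratio reported by the National Center for Media Forensics

    # Even a fairly skeptical prior is overwhelmed by a ratio this large:
    for prior in (1e-6, 1e-4, 0.01):
        print(f"prior={prior:g} -> posterior={posterior_probability(prior, LR):.4f}")

With a one-in-a-million prior the posterior is already about 0.75, and at one-in-ten-thousand it exceeds 0.99, which is presumably why Forbes treated the voice match as conclusive.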

My main objective is to provide the reader with convenient links to do their own research and contribute to the debate, so I switch rapidly from Beff to a brief review of new figures in the AI safety discourse, and conclude that the more important «culture war» of the future will be fought largely by the following factions:

  • AI Luddites, reactionaries, job protectionists and woke ethics grifters who demand pause/stop/red tape/sinecures (bottom left)
  • plus messianic Utopian EAs who wish for a moral singleton God, and state/intelligence actors making use of them (top left)
  • vs. libertarian social-darwinist and posthumanist e/accs often aligned with American corporations and the MIC (top right?)
  • and minarchist/communalist transhumanist d/accs who try to walk the tightrope of human empowerment (bottom right?)

In the spirit of making peace with the inevitability of most discussion taking place in the main thread, I'm reposting this here.


edit: not to toot my own horn, but

Is anyone else checking here less and less often because equal quality commentary seems increasingly available elsewhere?

I am checking here less and less often because A) given my current concerns and the way the wind blows, the Western culture war is largely irrelevant, B) there's little for me to contribute beyond all that has been said, and C) I've concluded that my ability at commentary is better used for making an impact.

edit 2: I also mildly dislike the fact that standalone posts need approval, though I can see how that follows from the problem/design choice of easy anon registration.

Repasting my own lengthy comment:

I've always been a techno-optimist (in the sense that I strongly believe that technology has been the biggest positive force for good in history, likely the only form of true progress that isn't just moral fashion), but these days I'd call myself a d/acc instead of an e/acc, because I think current approaches to AGI have a subjective probability of about 30% of killing us all.

I don't call myself a doomer; I'd imagine Yud and co would assign something like 90% to that. But in terms of practical considerations? If you think something has a >10% chance of killing everyone, I find it hard to see how you could prioritize anything else! I believe Vitalik made a similar statement, one more reason for me to nod approvingly.

A large chunk of the decrease in my p(doom) from a peak of 70% in 2021 to 30% now is, as I've said before, because it seems like we're not in the "least convenient possible world" when it comes to AI alignment. LLMs, as moderated by RLHF and other techniques, almost want to be aligned, and are negligibly agentic unless you set them up to be that way. The majority of the probability mass left, at least to me, encompasses intentional misuse of weakly or strongly superhuman AI based off modest advances on the current SOTA (LLMs), or a paradigm-shifting breakthrough that results in far more agentic and less pliable models.

Think "Government/Organization/Individuals ordering a powerful LLM to commit acts that get us all killed" versus it being inherently misaligned and doing it from intrinsic motivation, with the most obvious danger being biological warfare. Or it might not even be one that kills everyone, an organization using their technological edge to get rid of everyone who isn't in their in-group counts as far as I'm concerned.

Sadly, the timelines don't favor human cognitive enhancement, which I would happily accept in the interim before we can be more confident about making sure SAGI is (practically) provably safe. Maybe if we'd cloned Von Neumann by the ton a decade back. Even things like BCIs seem to have pretty much zero impact on aligning AI given plausible advances in 5-10 years.

I do think that it's pretty likely that, in a counterfactual world where AI never advances past GPT-4, ~baseline humans can still scale a lot of the tech tree to post-scarcity for matter and energy. Biological immortality, cognitive enhancement, interstellar exploration, building a Dyson Swarm or three, I think we could achieve most of that within the life expectancy of the majority of people reading this, especially mine. I'd certainly very much appreciate it if it all happened faster, of course, and AI remains the most promising route for that, shame about everything else.

I have no power to change anything, but at the very least I can enjoy the Golden Age of Humanity-as-we-know-it, be it because the future is going to be so bright we all gotta wear shades, or because we're all dead. I lean more towards the former, and not even because of the glare of nuclear warfare, but a 30% chance of me and everyone I love dying in a few decades isn't very comfortable, is it?

At any rate, life, if not the best it could be, is pretty good, so regardless of what happens, I'm strapping in for a ride. I don't think there's an epoch in human history I'd rather have been born to experience really.

Alex Turner, who has written arguably the two strongest and most popular formal proofs of instrumental convergence to power-seeking in AI agents

Well, I suppose that explains the pseudo-jazz albums about hotels on the Moon ;)

Longer-term, there are ideas like the "pivotal act" theory: we create an AI that performs a single one-time act which rearranges the world into a game where from that point forward humans are still in charge, but where the game board is somehow more defense-favoring and more fit for human flourishing.

I think this is a terrible definition of a "pivotal act". When Yudkowsky suggests releasing a nanite plague that melts GPUs, he doesn't want them to melt the GPUs of the AI releasing them.

Such a decision is very much not a "one-off," in much the same way that a typical coup involves what can roughly be described as a singular act followed by an indeterminate period of enforcement; the people who suggest it want to maintain an unshakeable technological lead over their peers, for instance by making sure their AI prevents the formation or promulgation of potential rivals. I don't think this is categorically bad; it depends on your priors about whether a unipolar or multipolar world is better for us, and on how trustworthy the AI you're about to use is. And at the very least, if such an act succeeds, we have an existence proof of an aligned AGI, one that is likely superhuman, as it needs to be to pull that off, regardless of whether even better AI can be aligned. Let's hope we don't need to find out.

I don't think it makes any sense to ""update"" on how corrigible or agentic LLMs are? They're still, like, small children, and they can't even really execute on complex 'agentic' plans at all (as demonstrated by the failure of the whole LLM agents thing).

I mean, I'd think that observing the behavior of a small child does provide a decent amount of information about what kind of adult they'll be, at least in humans. The reason my p(doom) fell so hard is because of what it was resting on: mostly Yudkowsky's earlier works claiming that human values are fragile and immensely unlikely to be successfully engineered into an AI, such that a sufficiently powerful one will inevitably start acting contrary to our interests.

Regardless of how fragile they are, LLMs seem to do a very good job of capturing them, or at least the values OAI wants to put in a public-facing system. What risk remains is thus mostly (but not entirely) the use of powerful models by misaligned humans against the rest of us. If you had substantially different reasons for a high p(doom), you might weigh that differently.

and they can't even really execute on complex 'agentic' plans at all (as demonstrated by the failure of the whole LLM agents thing)

I don't know of any reason to assume that we're particularly far from having economically useful autonomous agents; my understanding is that current context windows are insufficient for the task, but those are increasing rapidly. If you have a reason to think otherwise, I'd be happy to learn it!

(That's disregarding the vague rumours I've heard that OAI has working agents in-house. I'm not putting much stock in that, but once again, I don't see any reason why they couldn't work in principle within a matter of months or years.)

The reason my p(doom) fell so hard is because of what it was resting on: mostly Yudkowsky's earlier works claiming that human values are fragile and immensely unlikely to be successfully engineered into an AI, such that a sufficiently powerful one will inevitably start acting contrary to our interests.

GPT-4 isn't doing things like creating its own large-scale plans, discerning moral values, or considering moral dilemmas where it participates in long-term social games, though. All this proves is, in Yud's strange terms, that subhuman AI can be a safe "oracle". I don't think he'd have disagreed with that in 2010.

I don't know of any reason to assume that we're particularly far from having economically useful autonomous agents; my understanding is that current context windows are insufficient for the task

To clarify, I'm not saying it's not coming; I'm saying we don't have access to them at this exact moment, and the GPT-4 "agents" have so far failed to be particularly useful. And agents doing complicated large-scale things is when the alignment stuff is supposed to become an issue. So it's not much reason to believe AIs will be safer.

Not that I agree with the way Yud describes AI risk; I think he's wrong in a few ways, but that's a whole 'nother thing.

It's trivial to convert an Oracle into an Agent: all you have to do is tell it to predict how an Agent would act, and then figure out how to convert that prediction into actions, given that there's no bright line between words and code. Besides, I'm sure you've read Gwern on Tool AI vs Agentic AI.

(This is not the same as claiming it'll be a good agent, I don't disagree that GPT-4 is bad at the job.)
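
To make "trivial" concrete, here's a minimal sketch of such a wrapper. Both oracle and execute are hypothetical stand-ins (not real APIs), and, per the caveat above, nothing here claims the result is a good agent:

    def oracle(prompt: str) -> str:
        """Stand-in for any text predictor; swap in a real LLM completion call."""
        return "DONE"  # stub so the sketch runs as-is

    def execute(action: str) -> str:
        """Hypothetical effector: shell command, API call, browser step, etc."""
        return "stub result"

    def run_agent(goal: str, max_steps: int = 10) -> str:
        history = f"Goal: {goal}\n"
        for _ in range(max_steps):
            # Ask the Oracle to predict what an agent pursuing the goal does next...
            action = oracle(history + "Predict the agent's next action:")
            if action.strip() == "DONE":
                break
            # ...then convert those predicted words into an actual effect.
            history += f"Action: {action}\nResult: {execute(action)}\n"
        return history

    run_agent("book a flight")

This is roughly the scaffolding behind the GPT-4 "agents" mentioned upthread; the point is how thin the layer between prediction and action is, not that it currently works well.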

And agents doing complicated large-scale things is when the alignment stuff is supposed to become an issue. So it's not much reason to believe AIs will be safer.

I'm quite confident that Yudkowsky wouldn't have predicted that human-level AI (which I think GPT-4 counts as) would be quite so prosaic and pliable. I recall him claiming that it would be a difficult feat to even build a pure Oracle; GPT-4 is close enough, and I would say it's smarter than the average 100 IQ human.

I personally expected, around 2021, commensurate with my p(doom) of 70%, that even getting a safe and largely harmless human-level AI would be difficult. Hence why, when we have it and it's not trying to pull a fast one on us, I updated precipitously, but that's far from the only reason. I also expected (implicitly) that if something along the lines of RLHF were to be tried, it wouldn't work, or it would lead to misaligned agents only pretending to go along. Both claims seem false to my satisfaction.

In other words, I went from largely mirroring Yudkowsky (there were no clear counter-examples) to noticing that things were clearly not going as he predicted in several important regards, which is why I'm only gravely concerned about AI x-risk while he's talking about Dying With Dignity.

It's trivial to convert an Oracle into an Agent: all you have to do is tell it to predict how an Agent would act, and then figure out how to convert that prediction into actions, given that there's no bright line between words and code. Besides, I'm sure you've read Gwern on Tool AI vs Agentic AI.

Right, and my point is that current AI is so unintelligent that this doesn't work! They can't predict how agents act effectively enough to be at all useful as agents. So the safety of current oracle AIs doesn't tell us much about whether future agent AIs will be safe.

I actually think that future less-but-still-subhuman agent AIs will seem to be safe in Yud's sense, though. No idea what'll happen at human level; then at superhuman they'll become "misaligned" relatively quickly, but [digression]

I personally expected, around 2021, commensurate with my p(doom) of 70%, that even getting a safe and largely harmless human-level AI would be difficult

GPT-4 isn't human level though! It can't, like, play corporate politics and come out on top, and then manipulate the corporation to serve some other set of values. So the fact that it hasn't done that isn't evidence that it won't.

I also expected (implicitly) that if something along the lines of RLHF were to be tried, it wouldn't work, or it would lead to misaligned agents only pretending to go along. Both claims seem false to my satisfaction.

Right, but their "going along" mostly amounts to saying the right words. There's not the intelligence potential for anything like deep deceptiveness or instrumental convergence or meta-reflection or discovering deeper Laws of Rationality or whatever it is Yud's pondering.

GPT-4 isn't human level though! It can't, like, play corporate politics and come out on top, and then manipulate the corporation to serve some other set of values. So the fact that it hasn't done that isn't evidence that it won't.

You must get that such feats are rare even among humans, and that people capable of pulling them off are enormous outliers?

For most cognitive tasks, GPT-4 beats the average human, which is something I'm more than comfortable calling human level AI!

Right, but their "going along" mostly amounts to saying the right words. There's not the intelligence potential for anything like deep deceptiveness or instrumental convergence or meta-reflection or discovering deeper Laws of Rationality or whatever it is Yud's pondering.

The fact that you can even have the absence of those properties in something smarter than the median human is reassuring enough by itself. A 100 IQ human is very much capable of deceptiveness, and certainly of instrumental convergence if they're trying to make money. If I had to guesstimate GPT-4's IQ based off my experience with it, I'd say it's about 120, which is perfectly respectable if not groundbreaking. I'd expect you'd need to go quite a bit higher to achieve the latter properties.

Since a human of equivalent intelligence is capable of the former two feats, the fact that GPT-4 doesn't do them is at least modest evidence that it won't at the next jump in capabilities either, say GPT-5, or whatever carries the same delta in performance as 3 to 4, regardless of how many model numbers that takes.

I emphasize modest, because I still have a 30% p(doom) and I'm not writing off alignment as Solved™.

You must get that such feats are rare even among humans, and that people capable of pulling them off are enormous outliers?

I was thinking of 'guy who works his way to the top of a car dealership', not Altman, lol. AI models can't yet do the kind of long-term planning or value seeking that 85 IQ humans can.

For most cognitive tasks, GPT-4 beats the average human, which is something I'm more than comfortable calling human level AI!

Most small-scale cognitive tasks! If this were true, we'd have directly replaced the bottom 20% of white-collar jobs with GPT-4. This hasn't happened! Instead, tasks are adapted to GPT-4's significant limitations, with humans there to support them.

(again, I'm talking about current capabilities, not implying limits to future capabilities)

The fact that you can even have the absence of those properties in something smarter than the median human is reassuring enough by itself

I don't think it's reassuring that it can't make plans against us if it can't make plans for us either! Like, there's no plausible way for something that can't competently execute on complicated plans to have an incentive to take 'unaligned' actions. Even if it happens to try a thing that's slightly in the direction of a misaligned plan, it'll just fail, and learn not to do that. So I don't think it's comforting that it doesn't.

(I'm misusing Yud-concepts that I don't exactly agree with here, but the point is mostly correct)

If I had to guesstimate GPT-4's IQ based off my experience with it, I'd say it's about 120, which is perfectly respectable if not groundbreaking

I don't think it's anywhere close to the broad capabilities of a 120 IQ human, and it still isn't that close to a 100 IQ one (at the moment, again; I don't know how quickly the gap will close, could be fast!). It can do a lot of the things a 120 IQ human can, but it doesn't generalize as well as a 120 IQ human does. This isn't just a 'context window limitation' (and we have longer context windows now; they haven't solved the problem!); what humans are doing is just more complicated!
