
E/acc and the political compass of AI war

As I've been arguing for some time, the culture war's most important front will be about AI; that's more pleasant to me than the tacky trans vs trads content, as it returns us to the level of philosophy and positive actionable visions rather than peculiarly American signaling ick-changes, but the stakes are correspondingly higher… Anyway, Forbes has doxxed the founder of «e/acc», an irreverent Twitter meme movement opposing the attempts at regulating AI development that are spearheaded by EA. Turns out he's a pretty cool guy, eh.

Who Is @BasedBeffJezos, The Leader Of The Tech Elite’s ‘E/Acc’ Movement?

…At first blush, e/acc sounds a lot like Facebook’s old motto: “move fast and break things.” But Jezos also embraces more extreme ideas, borrowing concepts from “accelerationism,” which argues we should hasten the growth of technology and capitalism at the expense of nearly anything else. On X, the platform formerly known as Twitter where he has 50,000 followers, Jezos has claimed that “institutions have decayed beyond the point of salvaging” and that the media is a “vector for cybernetic control of culture.”

Alarmed by this extremist messaging, «the media» proceeds to… harness the power of an institution associated with the Department of Justice to deanonymize him, with the explicit aim of steering the cultural evolution around the topic:

Forbes has learned that the Jezos persona is run by a former Google quantum computing engineer named Guillaume Verdon who founded a stealth AI hardware startup Extropic in 2022. Forbes first identified Verdon as Jezos by matching details that Jezos revealed about himself to publicly available facts about Verdon. A voice analysis conducted by Catalin Grigoras, Director of the National Center for Media Forensics, compared audio recordings of Jezos and talks given by Verdon and found that it was 2,954,870 times more likely that the speaker in one recording of Jezos was Verdon than that it was any other person. Forbes is revealing his identity because we believe it to be in the public interest as Jezos’s influence grows.

That's not bad because Journalists, as observed by @TracingWoodgrains, are inherently Good:

(Revealing the name behind an anonymous account of public note is not “doxxing,” which is an often-gendered form of online harassment that reveals private information — like an address or phone number — about a person without consent and with malicious intent.)

(That's one creative approach to encouraging gender transition, I guess).

Now to be fair, this is almost certainly a parallel-construction narrative – many people in SV knew Beff's real identity, and as of late he's been very loose with opsec, funding a party, selling merch and so on. Also, the forced reveal will probably help him a great deal – it's harder to dismiss the guy as some LARPing shitposter or a corporate shill pandering to VCs (or as @Tomato said, running «an incredibly boring b2b productivity software startup») when you know he's, well, this. And this too.

The Forbes article itself doesn't go very hard on Beff, presenting him as a somewhat pretentious supply-side YIMBY, an ally to Marc Andreessen, Garry Tan and such, which is more true of Beff's followers than of the man himself. The more potentially damaging (to his ability to draw investment) parts are casually invoking the spirit of Nick Land and his spooky brand of accelerationism (not unwarranted – «e/acc has no particular allegiance to the biological substrate for intelligence and life, in contrast to transhumanism; in order to spread to the stars, the light of consciousness/intelligence will have to be transduced to non-biological substrates», Beff says in his manifesto), and citing some professors of «communications» and «critical theory» who are just not very impressed with the whole technocapital thing. At the same time, it reminds the reader of EA's greatest moment (no, not the bed nets).

Online, Beff confirms being Verdon:

I started this account as a means to spread hope, optimism, and a will to build the future, and as an outlet to share my thoughts despite the secretive nature of my work… Around the same time as founding e/acc, I founded @Extropic_AI. A deep tech startup where we are building the ultimate substrate for Generative AI in the physical world by harnessing thermodynamic physics. Ideas simmering while inventing this paradigm of computing definitely influenced the initial e/acc writings. I very much look forward to sharing more about our vision for the technology we are building soon. In terms of my background, as you've now learned, my main identity is @GillVerd. I used to work on special projects at the intersection of physics and AI at Alphabet, X and Google. Before this, I was a theoretical physicist working on information theory and black hole physics. Currently working on our AI Manhattan project to bring fundamentally new computing to the world with an amazing team of physics and AI geniuses, including my former TensorFlow Quantum co-founder @trevormccrt1 as CTO. Grateful every day to get to build this technology I have been dreaming of for over 8 years now with an amazing team.

And Verdon confirms the belief in Beffian doctrine:

Civilization desperately needs novel cultural and computing paradigms for us to achieve grander scope & scale and a prosperous future. I strongly believe thermodynamic physics and AI hold many of the answers we seek. As such, 18 months ago, I set out to build such cultural and computational paradigms.

I am fairly pessimistic about Extropic for reasons that should be obvious enough to people who've been monitoring the situation with DL compute startups and bottlenecks, so it may be that Beff's cultural engineering will make a greater impact than Verdon's physical one. Ironic, for one so contemptuous of wordcels.


Maturation of e/acc from a meme to a real force, if it happens (and as feared on the Alignment Forum, in the wake of the OpenAI coup-countercoup debacle), will be part of a larger trend, where the quasi-Masonic NGO networks of AI safetyists embed themselves in legacy institutions to procure the power of law and privileged platforms, while the broader organic culture and industry develops increasingly potent contrarian antibodies to their centralizing drive. Shortly before the doxx, two other clusters in the AI debate were announced.

The first one I'd mention is d/acc, courtesy of Vitalik Buterin; it's the closest to an acceptable compromise that I've seen. It does not have many adherents yet, but I expect it to become formidable because Vitalik is.

Across the board, I see far too many plans to save the world that involve giving a small group of people extreme and opaque power and hoping that they use it wisely. And so I find myself drawn to a different philosophy, one that has detailed ideas for how to deal with risks, but which seeks to create and maintain a more democratic world and tries to avoid centralization as the go-to solution to our problems. This philosophy also goes quite a bit broader than AI, and I would argue that it applies well even in worlds where AI risk concerns turn out to be largely unfounded. I will refer to this philosophy by the name of d/acc.

The "d" here can stand for many things; particularly, defensedecentralizationdemocracy and differential. First, think of it about defense, and then we can see how this ties into the other interpretations.

[…] The default path forward suggested by many of those who worry about AI essentially leads to a minimal AI world government. Near-term versions of this include a proposal for a "multinational AGI consortium" ("MAGIC"). Such a consortium, if it gets established and succeeds at its goals of creating superintelligent AI, would have a natural path to becoming a de-facto minimal world government. Longer-term, there are ideas like the "pivotal act" theory: we create an AI that performs a single one-time act which rearranges the world into a game where from that point forward humans are still in charge, but where the game board is somehow more defense-favoring and more fit for human flourishing.

The main practical issue that I see with this so far is that people don't seem to actually trust any specific governance mechanism with the power to build such a thing. This fact becomes stark when you look at the results to my recent Twitter polls, asking if people would prefer to see AI monopolized by a single entity with a decade head-start, or AI delayed by a decade for everyone… The size of each poll is small, but the polls make up for it in the uniformity of their result across a wide diversity of sources and options. In nine out of nine cases, the majority of people would rather see highly advanced AI delayed by a decade outright than be monopolized by a single group, whether it's a corporation, government or multinational body. In seven out of nine cases, delay won by at least two to one. This seems like an important fact to understand for anyone pursuing AI regulation.

[…] my experience trying to ensure "polytheism" within the Ethereum ecosystem does make me worry that this is an inherently unstable equilibrium. In Ethereum, we have intentionally tried to ensure decentralization of many parts of the stack: ensuring that there's no single codebase that controls more than half of the proof of stake network, trying to counteract the dominance of large staking pools, improving geographic decentralization, and so on. Essentially, Ethereum is actually attempting to execute on the old libertarian dream of a market-based society that uses social pressure, rather than government, as the antitrust regulator. To some extent, this has worked: the Prysm client's dominance has dropped from above 70% to under 45%. But this is not some automatic market process: it's the result of human intention and coordinated action.

[…] if we want to extrapolate this idea of human-AI cooperation further, we get to more radical conclusions. Unless we create a world government powerful enough to detect and stop every small group of people hacking on individual GPUs with laptops, someone is going to create a superintelligent AI eventually - one that can think a thousand times faster than we can - and no combination of humans using tools with their hands is going to be able to hold its own against that. And so we need to take this idea of human-computer cooperation much deeper and further. A first natural step is brain-computer interfaces…

etc. I mostly agree with his points. By focusing on the denial of winner-takes-all dynamics, it becomes a natural big-tent proposal, and it's already having an effect on the similarly big-tent doomer coalition, pulling anxious transhumanists away from the less efficacious luddites and discredited AI deniers.

The second one is «AI optimism», represented chiefly by Nora Belrose from Eleuther and Quintin Pope (whose essays contra Yud 1 and contra the appeal to evolution as an intuition pump 2 I've been citing and signal-boosting for close to a year now; he's pretty good on Twitter too). Belrose is in agreement with d/acc; and in principle, I think this one is not so much a faction or a movement as the endgame to the long arc of AI doomerism initiated by Eliezer Yudkowsky, the ultimate progenitor of this community, born of the crisis of faith in Yud's and Bostrom's first-principles conjectures, and in «rationality» itself, in light of empirical evidence. Many have tried to attack the AI doom doctrine from the outside (eg George Hotz), but only those willing to engage in the exegesis of Lesswrongian scriptures can sway educated doomers. Other actors in, or close to this group:

Optimists claim:

The last decade has shown that AI is much easier to control than many had feared. Today’s brain-inspired neural networks inherit human common sense, and their behavior can be molded to our preferences with simple, powerful algorithms. It’s no longer a question of how to control AI at all, but rather who will control it.

As optimists, we believe that AI is a tool for human empowerment, and that most people are fundamentally good. We strive for a future in which AI is distributed broadly and equitably, where each person is empowered by AIs working for them, under their own control. To this end, we support the open-source AI community, and we oppose attempts to centralize AI research in the hands of a small number of corporations in the name of “safety.” Centralization is likely to increase economic inequality and harm civil liberties, while doing little to prevent determined wrongdoers. By developing AI in the open, we’ll be able to better understand the ways in which AI can be misused and develop effective defense mechanisms.

So in terms of a political compass:

  • AI Luddites, reactionaries, job protectionists and woke ethics grifters who demand pause/stop/red tape/sinecures (bottom left)
  • plus messianic Utopian EAs who wish for a moral singleton God, and state/intelligence actors making use of them (top left)
  • vs. libertarian social-darwinist and posthumanist e/accs often aligned with American corporations and the MIC (top right?)
  • and minarchist/communalist transhumanist d/accs who try to walk the tightrope of human empowerment (bottom right?)

(Not covered: Schmidhuber, Sutton & probably Carmack as radically «misaligned» AGI successor-species builders, Suleyman the statist, LeCun the Panglossian, Bengio & Hinton the naive socialists, Hassabis the vague, Legg the prophet, Tegmark the hysterical, Marcus the pooh-pooher and many others).

This compass will be more important than the default one as time goes on. Where are you on it?


As an aside: I recommend two open LLMs above all others. One is OpenHermes 2.5-7B, the other is DeepSeek-67B (33b-coder is OK too). Try them. They're not OpenAI, but they're getting closer, and you don't need to depend on Altman's or Larry Summers' good graces to use them. With a laptop, you can have AI – at times approaching human level – anywhere. This is irreversible.
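If you want to see what that looks like in practice, here's a minimal sketch using the Hugging Face transformers stack – the model ID and the 4-bit loading are just illustrative assumptions on my part, not the only (or best) way to run these models:

```python
# Minimal local-inference sketch. Assumptions: the teknium/OpenHermes-2.5-Mistral-7B
# checkpoint on Hugging Face, plus `pip install transformers accelerate bitsandbytes`.
# 4-bit loading keeps a 7B model within consumer-GPU memory; CPU-only also works, slowly.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "teknium/OpenHermes-2.5-Mistral-7B"  # swap in a DeepSeek chat checkpoint if preferred

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place layers on whatever GPU/CPU is available
    load_in_4bit=True,   # roughly 4 GB instead of ~14 GB in fp16
)

messages = [{"role": "user", "content": "Summarize the e/acc vs. EA dispute in three sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same loop works for other open chat checkpoints; on a laptop without a discrete GPU, a llama.cpp-style GGUF quantization is the more common route.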


No, Dase, simply finding another Indian nerd with such similar personality traits is far from sufficient for me to consider him isomorphic to myself. I do not care that he loves his mother; I love mine. Certainly, from your perspective you might well be indifferent between us, but I am merely me.

Like, is he closer to me than almost everyone else? Certainly, just as all humans are practically negligibly different from one another in the space of All Possible Minds. That doesn't make them me.

Is that sufficient for him to be considered me? Not at all.

Leaving aside that what entities one identifies with is inherently subjective, I've proposed a reasonably robust criterion for determining that, at least to my satisfaction. You blackbox both of us, and assess response to a wide variety of relevant stimuli. If the variability between us is within acceptable parameters, such as being less than the variability seen in the biological me after a nap or when I took the test 2 years ago, then that system is close enough to count as including a copy of "me".

This accounts for even mind uploads, hence the blackboxing; I don't particularly privilege my biological form, though you could do a DNA test and an MRI if you really prefer to.

It might well count an emulation of me within a wider system, say a Superintelligence, but that's a feature and not a bug. That component, if it can be isolated, counts as me, leaving aside more practical concerns like what degree of power it has in the ensemble.
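A rough sketch of the shape of that test, with the probing and scoring functions left as hypothetical stand-ins (this is an illustration of the structure, not a concrete protocol I'm committing to):

```python
# Illustrative sketch of the blackbox criterion: the candidate counts as "me"
# if its responses drift from the reference no more than the reference drifts
# from its own baseline (e.g. post-nap, or the same battery taken 2 years apart).
# `reference`, `candidate`, `baseline` and `distance` are hypothetical stand-ins.
from statistics import mean
from typing import Callable, Sequence

def passes_identity_test(
    reference: Callable[[str], str],        # the blackboxed original, now
    candidate: Callable[[str], str],        # the blackboxed claimed copy
    baseline: Callable[[str], str],         # the original in another time/state
    stimuli: Sequence[str],                 # wide battery of relevant prompts
    distance: Callable[[str, str], float],  # how different two responses are
) -> bool:
    candidate_drift = mean(distance(reference(s), candidate(s)) for s in stimuli)
    allowed_drift = mean(distance(reference(s), baseline(s)) for s in stimuli)
    return candidate_drift <= allowed_drift
```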

Moreover, if you get brain damage or dementia, your hardware and computational divergences will skyrocket, but you will insist on being a continuous (if diminished) person, and me and him will agree!

I am tolerant of minor performance fluctuations, but a sufficient amount of brain damage or dementia? Then I consider myself gone, in most of the aspects I care about, even if the system is physically and temporally continuous.

The primary reason I might still value further existence is:

  1. Hopes that the damage can be mitigated with future advances, even if not losslessly.

  2. Until the damage gets really bad, that poor soul is still closer to me than anyone else.

But if it gets bad enough, I assure you I consider the core construct to be dead.

I am, incidentally, immune to this issue because I do not believe in computationalism or substrate independence.

I have always found this a peculiar view, and certainly I haven't seen any particular reason to assume a difference in internal qualia because of a difference in substrate, as long as the algorithms deriving it are interchangeable in terms of inputs and outputs.

Is it possible? I can't rule it out. But the bulk of my probability mass is against it.

If you have a convincing argument otherwise, I'm curious to hear it.

I will reject this conclusion until the time I have an opportunity to get much smarter and reexamine the topic, or perhaps design some fix for this pervasive mental defect.

My current approach to modeling myself has enough practical ramifications that I will accept it on an operational basis. Certainly I would love to re-examine it in more detail when it becomes more relevant, such as if I'm contemplating a mind upload and am either smarter myself or have an ASI to answer my questions.

But it reduces to normality for almost every situation I can expect to encounter today, so it's hardly the most pressing matter.

You are performing more or less identical calculations, on very similar hardware, to a near-identical result, and if you one day woke up, Zhuangzi style, to be him, your own life story a mere what-if distribution shift quickly fading into the morning air – I bet you would have only felt a tiny pinprick of nostalgia before going on with his business, not some profound identity crisis.

Believe it or not, I have often imagined, idly, having my consciousness magically transferred into the shell of someone I envy. The conclusion I have drawn is that there are some aspects of my life I would happily discard: if he's an accomplished banker (and I retain his skills and memories), I would happily not attempt to pursue medicine. But I would still prefer my original parents or kin, and attempt to convey my conundrum to them, likely by divulging privileged information only known to the original me.

If my "original" is still around? Inform him and work with him. I might be suitably disposed to help the "replaced" persons family and friends, but largely because they're predisposed to help me, assuming they don't know the truth.

After all, I expect and wish to continue preferring the consciousnesses descended from my own current kin even after we've all become post-biological; a mere swap of DNA carrier, while extremely queer and not entirely desirable, represents no major impediment.*

*The primary reason I am attached to my genes is that they code for people and personalities similar to mine. I couldn't care less about most phenotypic traits.

I think your problem is typical for Indians (and most other non-WEIRDs and non-Japanese, to be fair, including my people… but worse so in Indians): you have no taste, not even the notion of "taste"; to you it's probably an arbitrary set of markers of one's social milieu rather than some relatively lawful intuition. So you settle for mediocre half-baked ideas easily as long as they seem "cool" or "practical", and – physics of consciousness being currently impractical – coolness is a much simpler function than tastefulness. I am not sure how or why this works. Maybe @2rafa can explain better; maybe she'll opine I'm wrong and it is in fact purely about social markers. (Also interested in the input of @Southkraut and @ArjinFerman). In any case, it's exasperating to debate such uncertain grounds without recourse to "this is just ugly" when it patently is.

I've proposed a reasonably robust criterion for determining that, at least to my satisfaction. You blackbox both of us, and assess response to a wide variety of relevant stimuli. If the variability between us is within acceptable parameters, such as being less than the variability seen in the biological me after a nap or when I took the test 2 years ago, then that system is close enough to count as including a copy of "me".

Oh yeah? So which is it, a nap or a 2-year time span? Are you sure you can, really, practically can, define a rubric such that no other person I find comes closer to the first data point in the latter case? Sure you can do this without including password-recovery-tier questions, the answers to which are entirely value-free, RNG-produced token sequences, in no way corresponding to actually unique specifics of your inner conscious computation?

It's only reasonably robust from the viewpoint of a time-constrained clerk – or an archetypal redditor. As stated, I claim that you might well fail this test under realistic and legitimate conditions of dropping cheat items; and then, if I decide, in this contrived scenario, that the non-self-made-human is to be sent to the garbage compressor, you will very loudly (and rightfully) complain, not showing any "satisfaction" whatsoever. The only reason you propose it is your confidence that this does not matter in actuality – which it admittedly does not. And in any case, you do not need to optimize for a le scientific, robust, replicable, third-person-convincing etc. identity test. Rather, you need to think about what it is you are trying to achieve by clinging to the idea that a cluster of behavioral correlates an observer can identify will carry on your mind – just gotta make it dense enough that in practice you won't be confused for another naturally occurring person.

certainly I haven't seen any particular reason to assume a difference in internal qualia because of a difference in substrate, as long as the algorithms deriving it are interchangeable in terms of inputs and outputs.

Fair enough.

But I would still prefer my original parents or kin, and attempt to convey my conundrum to them, likely by divulging privileged information only known to the original me.

I'll trust you on this even though I strongly suspect this would depend on the intensity of original memories vs. the recovered set.

(Also interested in the input of Southkraut and ArjinFerman).

I've always had very rigid opinions on the subject, and the only way I can "steelman" his idea is by assuming I must be misunderstanding what he means by the copy being him. Like I said, the only way it makes the slightest bit of sense to me is if he's looking at it the same way one might look at being survived by their children. Bringing up another Indian nerd isn't even that far off the mark. When people can't have children, they're prone to becoming mentors, so a part of them can live on through the impact they've made on others. From there, I suppose I can understand building some sort of Pinocchio that will share your memories and quirks of personality.

The problem is that when I look at @self_made_human's actual words, the above seems like blatant sane-washing. He seems to believe any such copy will actually be him in some non-symbolic sense, which seems rather absurd. Maybe you can argue it from some cosmic-nihilist bird's eye view, but, as you pointed out, it's hard to defend from a "help! you've put the wrong one in the garbage compressor!" perspective.

He seems to believe any such copy will actually be him in some non-symbolic sense, which seems rather absurd.

I do believe it will be "me" to my desired level of satisfaction, and far better at the job than the currently available options of having kids, promulgating one's cultural or ethical values, or even a biological clone. To the point that if such a being appeared before me, it can have half my money, no strings attached.

As for it being absurd to you? That's simply irrelevant to me. I don't think you agree with Dase about our other transhumanist predilections, so I don't see it mattering for the purposes of the debate.

I do believe it will be "me" to my desired level of satisfaction, and far better at the job than the currently available options of having kids, promulgating one's cultural or ethical values, or even a biological clone.

So just to be sure I understand you: you don't actually think it will be *you*? We are simply discussing your potential descendants. Far better, by your estimation, than any other descendant we can currently come up with, but still just a descendant?

To the point that if such a being appeared before me, it can have half my money no strings attached.

You gotta move to the US. Someone might actually be tempted to train an LLM on your output, to get a chunk of an American doctor's salary.

you don't actually think it will be you?

"you" is, to put it gently, a vague word, and certainly well out of its depth when talking about such hypothetical advances. But I would have no qualms about talking about it as "me", in much the same manner I can happily do so when talking about "me" 10 years ago or in another 50.

I wouldn't call a hypothetical monozygotic twin "me" today, or a fresh clone of myself.

Far better, by your estimation, than any other descendant we can currently come up with, but still just a descendant?

Far better in terms of replication of consciousness? Certainly. I'm sure that if you were being sufficiently imaginative in terms of "what we can currently come up with", you might find something or someone I like more than myself, but that's not the point in contention. I might end up valuing my child's life more than my own, as is common for parents. Ask me when I have one I guess.

But a child is not me, even if I'd potentially give them half my money if necessary, or more, and at least half my name.

You gotta move to the US. Someone might actually be tempted to train an LLM on your output, to get a chunk of an American doctor's salary.

If you go through the comment chain, I have far stricter standards than merely being swayed by a fine-tuned LLM. I do not expect that to be remotely sufficient.

"you" is, to put it gently, a vague word, and certainly well out of its depth when talking about such hypothetical advances.

I disagree vehemently. In my opinion, no matter the advances, *I* will always immediately be able to tell if *I* ended up in the trash compactor, rather than a copy.

This is, again, where I have no choice but to accuse myself of sane-washing your ideas. I could sort of see where you're coming from if we're only discussing descendants; I can even understand, even as I disagree with it, putting a premium value on the artificial copies of yourself over the currently available descendants. But claiming there is any vagueness in the concept of *me* is coco bananas.

Far better in terms of replication of consciousness? Certainly.

Yeah, but I don't get why I should care about replicating consciousness. It's still just a copy.

If you go through the comment chain, I have far stricter standards than merely being swayed by a fine-tuned LLM. I do not expect that to be remotely sufficient.

I thought I did see your criteria; I was going by this:

I've proposed a reasonably robust criterion for determining that, at least to my satisfaction. You blackbox both of us, and assess response to a wide variety of relevant stimuli. If the variability between us is within acceptable parameters, such as being less than the variability seen in the biological me after a nap or when I took the test 2 years ago, then that system is close enough to count as including a copy of "me".

I don't see anything here that wouldn't be fundamentally replicable by an LLM.

I don't think a text interface suffices; as @2rafa once suggested, it might be possible to finetune an LLM on all the text I or anyone else has ever written, such that someone who is only interacting with us via text could be fooled indefinitely.

There you go.

At the very least my hypothetical test of replication of consciousness would require things that no LLM today, no matter how multimodal, could pull off.

I don't think a text interface suffices; as @2rafa once suggested, it might be possible to finetune an LLM on all the text I or anyone else has ever written, such that someone who is only interacting with us via text could be fooled indefinitely.

My point is that I expect both of us to have the same/indistinguishable internal qualia, so for a sufficiently high fidelity copy, I'm agnostic as to who gets trashed, not that I want that for either of us if it can be helped.

From the inside, we can't tell the difference.

It's still just a copy.

"Just". I think we have to agree to disagree on what the ramifications of a copy of either of us existing would entail.

At the very least my hypothetical test of replication of consciousness would require things that no LLM today, no matter how multimodal, could pull off.

Am I going crazy? I swear I've seen people upload screenshots and photos to ChatGPT and have it respond to their content. Is it that any AI that handles more than text input is technically no longer just a "language model"? That sounds like playing semantics.

From the inside, we can't tell the difference.

This might be the fundamental disagreement. From the outside, we may not be able to tell the difference. From the inside, the difference is immediately obvious.

"Just". I think we have to agree to disagree on what the ramifications of a copy of either of us existing would entail.

Yeah. Here's hoping Roko's Basilisk tries to threaten me instead of you.
