
E/acc and the political compass of AI war

As I've been arguing for some time, the culture war's most important front will be about AI; that's more pleasant to me than the tacky trans vs trads content, as it returns us to the level of philosophy and positive actionable visions rather than peculiarly American signaling ick-changes, but the stakes are correspondingly higher… Anyway, Forbes has doxxed the founder of «e/acc», an irreverent Twitter meme movement opposing the attempts at AI regulation spearheaded by EA. Turns out he's a pretty cool guy, eh.

Who Is @BasedBeffJezos, The Leader Of The Tech Elite’s ‘E/Acc’ Movement?

…At first blush, e/acc sounds a lot like Facebook’s old motto: “move fast and break things.” But Jezos also embraces more extreme ideas, borrowing concepts from “accelerationism,” which argues we should hasten the growth of technology and capitalism at the expense of nearly anything else. On X, the platform formerly known as Twitter where he has 50,000 followers, Jezos has claimed that “institutions have decayed beyond the point of salvaging” and that the media is a “vector for cybernetic control of culture.”

Alarmed by this extremist messaging, «the media» proceeds to… harness the power of an institution associated with the Department of Justice to deanonymize him, with the explicit aim to steer the cultural evolution around the topic:

Forbes has learned that the Jezos persona is run by a former Google quantum computing engineer named Guillaume Verdon who founded a stealth AI hardware startup Extropic in 2022. Forbes first identified Verdon as Jezos by matching details that Jezos revealed about himself to publicly available facts about Verdon. A voice analysis conducted by Catalin Grigoras, Director of the National Center for Media Forensics, compared audio recordings of Jezos and talks given by Verdon and found that it was 2,954,870 times more likely that the speaker in one recording of Jezos was Verdon than that it was any other person. Forbes is revealing his identity because we believe it to be in the public interest as Jezos’s influence grows.

That's not bad because Journalists, as observed by @TracingWoodgrains, are inherently Good:

(Revealing the name behind an anonymous account of public note is not “doxxing,” which is an often-gendered form of online harassment that reveals private information — like an address or phone number — about a person without consent and with malicious intent.)

(That's one creative approach to encouraging gender transition, I guess).

Now to be fair, this is almost certainly a parallel-construction narrative – many people in SV knew Beff's real persona, and of late he's been very loose with opsec, funding a party, selling merch and so on. Also, the forced reveal will probably help him a great deal – it's harder to dismiss the guy as some LARPing shitposter or a corporate shill pandering to VCs (or as @Tomato said, running «an incredibly boring b2b productivity software startup») when you know he's, well, this. And this too.

The Forbes article itself doesn't go very hard on Beff, presenting him as a somewhat pretentious supply-side YIMBY, an ally to Marc Andreessen, Garry Tan and such; which is more true of Beff's followers than of the man himself. The more potentially damaging (to his ability to draw investment) parts are the casual invocation of the spirit of Nick Land and his spooky brand of accelerationism (not unwarranted – «e/acc has no particular allegiance to the biological substrate for intelligence and life, in contrast to transhumanism; in order to spread to the stars, the light of consciousness/intelligence will have to be transduced to non-biological substrates», Beff says in his manifesto), and the citation of some professors of «communications» and «critical theory» who are just not very impressed with the whole technocapital thing. At the same time, it reminds the reader of EA's greatest moment (no, not the bed nets).

Online, Beff confirms being Verdon:

I started this account as a means to spread hope, optimism, and a will to build the future, and as an outlet to share my thoughts despite the secretive nature of my work… Around the same time as founding e/acc, I founded @Extropic_AI. A deep tech startup where we are building the ultimate substrate for Generative AI in the physical world by harnessing thermodynamic physics. Ideas simmering while inventing this paradigm of computing definitely influenced the initial e/acc writings. I very much look forward to sharing more about our vision for the technology we are building soon. In terms of my background, as you've now learned, my main identity is @GillVerd. I used to work on special projects at the intersection of physics and AI at Alphabet, X and Google. Before this, I was a theoretical physicist working on information theory and black hole physics. Currently working on our AI Manhattan project to bring fundamentally new computing to the world with an amazing team of physics and AI geniuses, including my former TensorFlow Quantum co-founder @trevormccrt1 as CTO. Grateful every day to get to build this technology I have been dreaming of for over 8 years now with an amazing team.

And Verdon confirms the belief in Beffian doctrine:

Civilization desperately needs novel cultural and computing paradigms for us to achieve grander scope & scale and a prosperous future. I strongly believe thermodynamic physics and AI hold many of the answers we seek. As such, 18 months ago, I set out to build such cultural and computational paradigms.

I am fairly pessimistic about Extropic for reasons that should be obvious enough to people who've been monitoring the situation with DL compute startups and bottlenecks, so it may be that Beff's cultural engineering will make a greater impact than Verdon's physical one. Ironic, for one so contemptuous of wordcels.


Maturation of e/acc from a meme to a real force, if it happens (and as feared on the Alignment Forum in the wake of the OpenAI coup-countercoup debacle), will be part of a larger trend, where the quasi-Masonic NGO networks of AI safetyists embed themselves in legacy institutions to procure the power of law and privileged platforms, while the broader organic culture and industry develops increasingly potent contrarian antibodies to their centralizing drive. Shortly before the doxx, two other clusters in the AI debate were announced.

First one I'd mention is d/acc, courtesy of Vitalik Buterin; it's the closest to acceptable compromise that I've seen. It does not have many adherents yet but I expect it to become formidable because Vitalik is.

Across the board, I see far too many plans to save the world that involve giving a small group of people extreme and opaque power and hoping that they use it wisely. And so I find myself drawn to a different philosophy, one that has detailed ideas for how to deal with risks, but which seeks to create and maintain a more democratic world and tries to avoid centralization as the go-to solution to our problems. This philosophy also goes quite a bit broader than AI, and I would argue that it applies well even in worlds where AI risk concerns turn out to be largely unfounded. I will refer to this philosophy by the name of d/acc.

The "d" here can stand for many things; particularly, defense, decentralization, democracy and differential. First, think of it as defense, and then we can see how this ties into the other interpretations.

[…] The default path forward suggested by many of those who worry about AI essentially leads to a minimal AI world government. Near-term versions of this include a proposal for a "multinational AGI consortium" ("MAGIC"). Such a consortium, if it gets established and succeeds at its goals of creating superintelligent AI, would have a natural path to becoming a de-facto minimal world government. Longer-term, there are ideas like the "pivotal act" theory: we create an AI that performs a single one-time act which rearranges the world into a game where from that point forward humans are still in charge, but where the game board is somehow more defense-favoring and more fit for human flourishing.

The main practical issue that I see with this so far is that people don't seem to actually trust any specific governance mechanism with the power to build such a thing. This fact becomes stark when you look at the results to my recent Twitter polls, asking if people would prefer to see AI monopolized by a single entity with a decade head-start, or AI delayed by a decade for everyone… The size of each poll is small, but the polls make up for it in the uniformity of their result across a wide diversity of sources and options. In nine out of nine cases, the majority of people would rather see highly advanced AI delayed by a decade outright than be monopolized by a single group, whether it's a corporation, government or multinational body. In seven out of nine cases, delay won by at least two to one. This seems like an important fact to understand for anyone pursuing AI regulation.

[…] my experience trying to ensure "polytheism" within the Ethereum ecosystem does make me worry that this is an inherently unstable equilibrium. In Ethereum, we have intentionally tried to ensure decentralization of many parts of the stack: ensuring that there's no single codebase that controls more than half of the proof of stake network, trying to counteract the dominance of large staking pools, improving geographic decentralization, and so on. Essentially, Ethereum is actually attempting to execute on the old libertarian dream of a market-based society that uses social pressure, rather than government, as the antitrust regulator. To some extent, this has worked: the Prysm client's dominance has dropped from above 70% to under 45%. But this is not some automatic market process: it's the result of human intention and coordinated action.

[…] if we want to extrapolate this idea of human-AI cooperation further, we get to more radical conclusions. Unless we create a world government powerful enough to detect and stop every small group of people hacking on individual GPUs with laptops, someone is going to create a superintelligent AI eventually - one that can think a thousand times faster than we can - and no combination of humans using tools with their hands is going to be able to hold its own against that. And so we need to take this idea of human-computer cooperation much deeper and further. A first natural step is brain-computer interfaces…

etc. I mostly agree with his points. By focusing on the denial of winner-takes-all dynamics, it becomes a natural big-tent proposal, and it's already having an effect on the similarly big-tent doomer coalition, pulling anxious transhumanists away from the less efficacious luddites and discredited AI deniers.

The second one is «AI optimism», represented chiefly by Nora Belrose from Eleuther and Quintin Pope (whose essays contra Yud and contra the appeal to evolution as an intuition pump I've been citing and signal-boosting for next to a year now; he's pretty good on Twitter too). Belrose is in agreement with d/acc; and in principle, I think this one is not so much a faction or a movement as the endgame to the long arc of AI doomerism initiated by Eliezer Yudkowsky, the ultimate progenitor of this community, born of the crisis of faith in Yud's and Bostrom's first-principles conjectures and entire «rationality» in light of empirical evidence. Many have tried to attack the AI doom doctrine from the outside (eg George Hotz), but only those willing to engage in the exegesis of Lesswrongian scriptures can sway educated doomers. Several other actors are in, or close to, this group.

Optimists claim:

The last decade has shown that AI is much easier to control than many had feared. Today’s brain-inspired neural networks inherit human common sense, and their behavior can be molded to our preferences with simple, powerful algorithms. It’s no longer a question of how to control AI at all, but rather who will control it.

As optimists, we believe that AI is a tool for human empowerment, and that most people are fundamentally good. We strive for a future in which AI is distributed broadly and equitably, where each person is empowered by AIs working for them, under their own control. To this end, we support the open-source AI community, and we oppose attempts to centralize AI research in the hands of a small number of corporations in the name of “safety.” Centralization is likely to increase economic inequality and harm civil liberties, while doing little to prevent determined wrongdoers. By developing AI in the open, we’ll be able to better understand the ways in which AI can be misused and develop effective defense mechanisms.

So in terms of a political compass:

  • AI Luddites, reactionaries, job protectionists and woke ethics grifters who demand pause/stop/red tape/sinecures (bottom left)
  • plus messianic Utopian EAs who wish for a moral singleton God, and state/intelligence actors making use of them (top left)
  • vs. libertarian social-darwinist and posthumanist e/accs often aligned with American corporations and the MIC (top right?)
  • and minarchist/communalist transhumanist d/accs who try to walk the tightrope of human empowerment (bottom right?)

(Not covered: Schmidhuber, Sutton & probably Carmack as radically «misaligned» AGI successor-species builders, Suleyman the statist, LeCun the Panglossian, Bengio & Hinton the naive socialists, Hassabis the vague, Legg the prophet, Tegmark the hysterical, Marcus the pooh-pooher and many others).

This compass will be more important than the default one as time goes on. Where are you on it?


As an aside: I recommend two open LLMs above all others. One is OpenHermes 2.5-7B, the other is DeepSeek-67B (33b-coder is OK too). Try them. It's not OpenAI, but it's getting closer and you don't need to depend on Altman's or Larry Summers' good graces to use them. With a laptop, you can have AI – at times approaching human level – anywhere. This is irreversible.
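The laptop claim checks out with back-of-the-envelope arithmetic: at 4-bit quantization a weight costs half a byte, so a model's memory footprint scales linearly with parameter count. A minimal sketch (the 1.2× overhead factor for KV cache and runtime buffers is my rough assumption, not a measured figure):

```python
def est_memory_gb(n_params_billion: float, bits_per_weight: int,
                  overhead: float = 1.2) -> float:
    """Rough weights-plus-overhead memory estimate for a quantized LLM.

    `overhead` is an assumed fudge factor for the KV cache and runtime
    buffers; real usage varies with context length and backend.
    """
    bytes_per_weight = bits_per_weight / 8
    return n_params_billion * 1e9 * bytes_per_weight * overhead / 2**30

# A 7B model at 4-bit fits comfortably in laptop RAM...
print(round(est_memory_gb(7, 4), 1))   # ≈ 3.9 GB
# ...while 67B at 4-bit wants a workstation, a Mac with unified memory,
# or aggressive offloading to disk.
print(round(est_memory_gb(67, 4), 1))  # ≈ 37.4 GB
```

This is why the 7B recommendation is the truly portable one, and why the coder variant at 33B sits in between.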


One is OpenHermes 2.5-7B, the other is DeepSeek-67B (33b-coder is OK too). Try them.

Serious question. What do I do with them?

Most of the AI stuff I see people talking about is pretty much playing with toys. Oh, I got it to produce art. Oh, I asked it to write an essay.

What practical use is this AI for me, as a clerical assistant in a small office? I'm not interested in making art, and I don't work in the fancy coding environments where "this is a useful tool for me to check my code" and so forth. I'm an ordinary, lower middle-class person. I'm not writing college essays, grant funding applications for research labs, or code. What makes this of any use or interest to me?

This is a quite separate question from "AI is going to impact your life because business and government are adopting it". That's not something I have any control over. What I want to know is this: right now, there are very fancy toys there. What can I do with them, that is not playing with the toy?

You've previously said you've refrained from even trying ChatGPT, where the 3.5 model is available for the low, low cost of signing up, and even GPT-4 is freely available through Microsoft Bing.

I think the set of people who do not gain even the minimal amount of utility necessary to justify trying a free service that provides you with a human-adjacent intelligence on tap is zero.

I certainly find it incredibly useful even for medical tasks.

What practical use is this AI for me, as a clerical assistant in a small office? I'm not interested in making art, and I don't work in the fancy coding environments where "this is a useful tool for me to check my code" and so forth. I'm an ordinary, lower middle-class person. I'm not writing college essays, grant funding applications for research labs, or code. What makes this of any use or interest to me?

Your job presumably involves a great deal of writing and correspondence. You can automate almost all of that away, give it a TLDR of what you want to write, and it'll give you a polished product, and if you bother to add one sentence to nudge it away from bland GPT-speak we've come to associate with it, nobody will even notice.

At the bare minimum, the versions with internet search enabled, like paid ChatGPT-4 or free Bing, provide all the benefits of a search engine while also letting you converse with it and follow up in a manner that will leave the dumb systems behind plain old Google scratching their head.

At the very least, give me concrete examples of what a routine day looks like to you, including anything that involves writing more than a few sentences of text on a computer, and I'm confident I can find a use case, as I would if a subsistence farmer was asking me the same question.

I think the set of people who do not gain even the minimal amount of utility necessary to justify trying a free service that provides you with a human-adjacent intelligence on tap is zero.

Well, what'll it do for me? To take a work example, I had to manually enter data from a year's worth of hard copy receipt books to check against records kept on a spreadsheet, then check those receipts against the records online by the credit card payment processor. Can ChatGPT do anything about that for me? I'd love to shove off that tedious work onto an AI but as yet I don't think there's anything there. I'm reading the written slips with my human eyes, typing the information into a spreadsheet, and breaking it down by date on the receipt book versus month on the spreadsheet. ChatGPT can do everything with prompts on a screen, but it's not yet, so far as I know, able to directly scan in "this is written text" and turn it into "okay, I need to sort this by date, name, amount of money, paid by credit card or cash, and match it up with the month, then get a total of money received, check that against the credit card payment records, and find any discrepancies, then match all those against what is on the bank statements as money lodged to our account and find any discrepancies there". I still have to give it all that data by typing it in.

Your job presumably involves a great deal of writing and correspondence. You can automate almost all of that away, give it a TLDR of what you want to write, and it'll give you a polished product

By the time I enter the prompt and copy'n'paste the answer into the email, I'd have written the reply myself. And that's the majority of my correspondence: I keep getting emailed every ten minutes by the boss or colleagues about "hey, have you got the balance of the savings account?" and the like.

Again, there's a very damn tedious, detailed form – and they now want even more details in the returns – that I have to complete every year for our funding agreement. If the AI can read that, fill in the blanks, and extract the data from the various sources where it's located and put it into the template spreadsheet and Word document we have to return, again I'd love to shove that task off on it. But if it gets anything wrong or omits data or, worse, hallucinates information, I will be getting multiple emails from the government agency for which we provide social services, my boss, and Uncle Tom Cobley and all.

At the bare minimum, the versions with internet search enabled, like paid ChatGPT-4 or free Bing, provide all the benefits of a search engine while also letting you converse with it and follow up in a manner that will leave the dumb systems behind plain old Google scratching their head.

That at least sounds useful, if I can tell it "I need to find the newest tax regulations for this particular statutory reporting that will be coming onstream in 2024" and get an accurate answer. But again, I don't see a huge improvement over plain old Google for that. The AI won't, for instance, be able to apply that information to running payroll and the new, separate, reporting requirements starting in January.

The day that AI can do all that, I'm out of a job, because if it can take "okay here are the hours worked by staff in the week, but change this, add in extra for that, Jane rang in ten minutes ago to say she'll be out sick today, Annie wants her savings, Sally is to be paid expenses and we'll send on the receipts later" and go through it all and upload the file to the bank and all the associated record keeping, I'm not needed. I'm happy about that much, apart from the "I need a job to earn money to live on".

EDIT: Okay, I can see the "enter the prompt" thing being useful for the kind of unctuous thank-you letters to donors I have to send, but those are irregular and I have a fair idea of how to knock one out in five minutes myself anyway.

EDIT EDIT: I can see the potential for automation, and I get that Microsoft is trying to do this for business, but at the moment it's at way too technical a level above anything I do. We don't touch anything remotely like analytics because that is not impinging on what we do (the upstream returns to government bodies will feed into that, but that's nothing to do with us apart from providing the raw data). The rest of it is just jargon and buzzwords:

Lufthansa Technik AG has been running Azure SQL to support its application platform and data estate, leveraging fully managed capabilities to empower teams across functions.

That's nice, now what the fuck does that mean in plain English?

Right now, I don't see any benefit to AI for me. Once they get down to the coalface level I'm working at, sure, but at the moment it's all "nimbly infusing content generation capabilities to transform all kinds of apps into intuitive, contextual experiences". We don't do apps, we deal with kids with special and additional needs.

ChatGPT can do everything with prompts on a screen, but it's not yet, so far as I know, able to directly scan in "this is written text" and turn it into "okay, I need to sort this by date, name, amount of money, paid by credit card or cash, and match it up with the month, then get a total of money received, check that against the credit card payment records, and find any discrepancies, then match all those against what is on the bank statements as money lodged to our account and find any discrepancies there"

Actually GPT-4V can, with a decent system prompt. In fact this kind of labor-intensive data processing is exactly what I had in mind to recommend you. Small text-only models can parse unstructured text into structured JSONs. Frontier models can recognize images and arbitrarily process symbols extracted from them – this is just a toy example. I'm not sure if it'll be up to your standards, but presumably checking will be easier than typing from scratch.
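To illustrate the shape of that workflow: once a vision model has returned the receipts as structured JSON, the sorting, totalling and matching against processor records is ordinary deterministic code, so the model only has to get the transcription right. A toy sketch (every name, date and amount below is invented for illustration, as is the field layout):

```python
import json

# Hypothetical JSON a vision model might return after being shown a page
# of handwritten receipts. The schema is my assumption, not a standard.
model_output = """
[
  {"date": "2023-03-04", "name": "J. Murphy", "amount": 120.00, "method": "card"},
  {"date": "2023-03-04", "name": "A. Byrne",  "amount": 45.50,  "method": "cash"},
  {"date": "2023-03-11", "name": "S. Walsh",  "amount": 45.50,  "method": "card"}
]
"""

# Toy records pulled from the card payment processor for the same period.
processor_records = [
    {"date": "2023-03-04", "amount": 120.00},
]

receipts = json.loads(model_output)

# Total the card receipts and flag any with no matching processor record -
# the discrepancy-hunting step done by plain code, not the model.
card_receipts = [r for r in receipts if r["method"] == "card"]
card_total = sum(r["amount"] for r in card_receipts)
unmatched = [
    r for r in card_receipts
    if not any(p["date"] == r["date"] and p["amount"] == r["amount"]
               for p in processor_records)
]

print(card_total)      # 165.5
print(len(unmatched))  # 1 (S. Walsh's card payment has no processor match)
```

The human's job shrinks to spot-checking the transcribed JSON against the paper slips, which is faster than typing them in from scratch.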

Do you happen to know if Bing Chat uses GPT-4V for its image recognition or something else entirely? It debuted with the feature while OAI had it locked away, even if the base model had it from the start.

Yes, according to Parakhin. Bing is basically a GPT wrapper now. Bing also debuted with GPT-4 in the first place.