
Culture War Roundup for the week of November 27, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Since @Hawaii98 complains about insufficient quantity of quality commentary, I've taken it upon myself to cover one of the topics proposed by @greyenlightenment, namely the doxxing of Based Beff Jesos, the founder of effective accelerationism. My additional commentary, shallow though it may be, got out of hand, so it's a standalone post now: E/acc and the political compass of AI war.

As I've been arguing for some time, the culture war's most important front will be about AI. That's more pleasant to me than the tacky trans-vs-trads content, as it returns us to the level of philosophy and positive, actionable visions rather than peculiarly American signaling ick-changes, but the stakes are correspondingly higher… Anyway, Forbes has doxxed the founder of «e/acc», an irreverent Twitter meme movement opposing the attempts at regulating AI development that are spearheaded by EA. Turns out he's a pretty cool guy, eh.

Who Is @BasedBeffJezos, The Leader Of The Tech Elite’s ‘E/Acc’ Movement? [archive.ph link]

Quoting Forbes:

…At first blush, e/acc sounds a lot like Facebook’s old motto: “move fast and break things.” But Jezos also embraces more extreme ideas, borrowing concepts from “accelerationism,” which argues we should hasten the growth of technology and capitalism at the expense of nearly anything else. On X, the platform formerly known as Twitter where he has 50,000 followers, Jezos has claimed that “institutions have decayed beyond the point of salvaging” and that the media is a “vector for cybernetic control of culture.”

Forbes has learned that the Jezos persona is run by a former Google quantum computing engineer named Guillaume Verdon, who founded a stealth AI hardware startup, Extropic, in 2022. Forbes first identified Verdon as Jezos by matching details that Jezos revealed about himself to publicly available facts about Verdon. A voice analysis conducted by Catalin Grigoras, Director of the National Center for Media Forensics, compared audio recordings of Jezos and talks given by Verdon and found that it was 2,954,870 times more likely that the speaker in one recording of Jezos was Verdon than that it was any other person. Forbes is revealing his identity because we believe it to be in the public interest as Jezos’s influence grows.
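An aside on that "2,954,870 times more likely" figure: it is a likelihood ratio, and how much certainty it buys you still depends on your prior. A minimal sketch of Bayes' rule in odds form (the prior used below is purely illustrative, not from the article):

```python
# Bayes' rule in odds form: posterior odds = prior odds × likelihood ratio.
# The Forbes figure is a likelihood ratio (LR) from the voice comparison;
# it multiplies whatever odds you assigned before seeing the evidence.

def posterior_probability(prior: float, likelihood_ratio: float) -> float:
    """Convert a prior probability and a likelihood ratio into a posterior probability."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

lr = 2_954_870  # the ratio reported by the forensic analysis

# Illustrative prior: suppose you initially gave a one-in-a-million chance
# that Verdon specifically was behind the account.
print(posterior_probability(1e-6, lr))  # ≈ 0.747
```

So even a likelihood ratio of nearly three million only gets you to roughly 75% from a one-in-a-million prior; combined with the biographical detail-matching, though, the identification becomes overwhelming.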

My main objective is to provide the reader with convenient links to do their own research and contribute to the debate, so I rapidly switch from Beff to a brief review of new figures in AI safety discourse, and conclude that the more important «culture war» of the future will be largely fought by the following factions:

  • AI Luddites, reactionaries, job protectionists and woke ethics grifters who demand pause/stop/red tape/sinecures (bottom left)
  • plus messianic Utopian EAs who wish for a moral singleton God, and state/intelligence actors making use of them (top left)
  • vs. libertarian social-darwinist and posthumanist e/accs often aligned with American corporations and the MIC (top right?)
  • and minarchist/communalist transhumanist d/accs who try to walk the tightrope of human empowerment (bottom right?)

In the spirit of making peace with the inevitability of most discussion taking place in the main thread, I repost this here.


edit: not to toot my own horn, but

Is anyone else checking here less and less often because equal quality commentary seems increasingly available elsewhere?

I am checking here less and less often because A) given my current concerns and the way the wind blows, the Western culture war is largely irrelevant, B) there's little for me to contribute in addition to all that has been said, and C) I've concluded that my ability at making commentary is better used for making an impact.

edit 2: I also mildly dislike the fact that standalone posts need approval, though I can see how that follows from the problem/design choice of easy anon registration.

I've concluded that my ability at making commentary is better used for making an impact

What impact?

I think the interesting question is who is going to have more impact on the discourse?

  1. People who have been talking about AI for years but who have no cultural or political power?
  2. People who have tons of power but who only got on the AI hype train last/this year?

It seems manifestly obvious to me that the answer will be 2. Google engineers are often very smart people, but in the end Silicon Valley has always bowed down to Washington, and to some extent to Wall Street.

It's like, imagine some absurd new war breaks out in some corner of the world nobody cares about and that nobody expected. Who suddenly has power? Is it the one analyst at a dusty CIA desk or the two guys in some obscure think tank in DC who were the only people who cared before this incident happened? Probably not, it's everyone powerful who jumps in on the gravy train now that something interesting has happened.


  1. AI Luddites, reactionaries, job protectionists and woke ethics grifters who demand pause/stop/red tape/sinecures (bottom left)
  2. messianic Utopian EAs who wish for a moral singleton God, and state/intelligence actors making use of them (top left)
  3. libertarian social-darwinist and posthumanist e/accs often aligned with American corporations and the MIC (top right?)
  4. minarchist/communalist transhumanist d/accs who try to walk the tightrope of human empowerment (bottom right?)

I don't think so. For example, I think 'true UBI' will never happen. Which is not to say that I expect the Manna scenario (and indeed I've argued here before that it makes little sense for elites to pursue). It's to say that stratification by resource distribution is key to all human hierarchies, and it's hard to see this system being abandoned any time soon. Therefore UBI will be distributed according to how closely some individual or group fulfils the role the 'system' considers prosocial in that context. Social credit, belonging to the right group, participating in a certain way - all this varies, but the core structure will be similar: UBI, if.

I also think you'll see huge cultural shifts, as the enormous amount of ambitious young (particularly young male) energy that has been devoted to pursuing economic self-improvement is suddenly redirected into some other avenue. It could easily be video games or weightlifting (some would say it already is), but it could be something else.

I have also become more and more sceptical that mass automation heralds some new age of leisure in general. We already live (as both you and I have argued) in a substantially 'automated' society. Even if FALC is technically impossible, in the richest countries a high standard of living could likely be maintained with only 20% or even less of the population in full-time employment, with the rest working bullshit jobs as per (slightly modified) Graeber. I now consider it substantially possible that in fifty years' time the majority of the working-age population will still engage in some form of 'employment'. You really can legislate Luddism; after all, New Jersey kept gas pump operators employed sixty years after they ceased to exist elsewhere.


I do think e/acc is compelling, and there's no inherent reason why huge social problems can't be brute forced by creating a machine god. The problem, as ever, will be that the solutions the machine god comes up with won't be amenable to a large proportion of the population, including many e/acc types.

It seems manifestly obvious to me that the answer will be 2. Google engineers are often very smart people, but in the end Silicon Valley has always bowed down to Washington, and to some extent to Wall Street.

This is obviously correct to me too. If there's one thing I agree with Yarvin on 100%, it's that Big Tech has no power at all in the grand scheme of things. People who think Altman or someone like him has a reasonable shot at harnessing the power of the emerging technology for political gain are deluded. I am not sure what you're imagining here – that I am trying to build our way out of Mot's grasp, one commit at a time?

However, there exists a certain wiggle room. Engineers can accelerate the proliferation of specific technologies that will make at least some politically cheaper forms of surveillance and restriction unfeasible; this is but a toy example. Businessmen can lobby for lenience, and their lobbyists need talking points; it's a bit surprising how low the bar in this domain is. Big labs can invest in making their offerings so indispensable to laymen that political elites will falter in enforcing regulation early and hard; this is what I take to be Altman's gamble.

I am not very optimistic about the degree to which the final state of the game board before singularity can be influenced. But I am not a believer in superdeterminism.