
Culture War Roundup for the week of November 7, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


They are effective alright, and that's the problem.

There's a tremendous amount of really wild commentary on this story emerging on Twitter (1; 2; 3 etc). I expect most of the dirt currently descending on Sam and his roommates to be revealed as nonsense and flake off, but, on the other hand, I expect the worst parts never to surface at all. (After all, the guy who could stun Matt Levine with his savage cynicism, and invent this Madoff-tier bullshit, is bound to have more hidden depths.)

Emad's speculation is probably in the former class, but directionally it might be in the latter:

Thinking about FTX and why you'd blow up a business doing $1bn a year in revenue / $200m in profits. The answer may actually be AI alignment.

Short timelines equal pressure to deploy as much cash as possible to Anthropic and others before a rogue AI turns us into paperclips.

Expected utility.

[...] Altruistic evil

I agree with this bottom line. The red flags should have been enough: from Singer's flirtation with infanticide, to the weird sex stuff in group houses and cult patterns around MIRI/CFAR, to the Bostromian Vulnerable World, to hushed discussions of «pivotal act outside the Overton Window» and collapsing alternative chip supply chains to ease «global governance», to the general «policy wonk» regulatory hard-on: those people are not good, no matter how they present themselves in affiliated outlets and what nice words they say. This is how evil looks historically. Not the generic cruelty, callousness, petty narcissism, and even psychopathy we are used to, not mere weakness of will or intellect, but well-functioning people with actually hazardous moral convictions.

Sam is a consistent effective altruist, deserving of his poster-boy status, just as his do-gooder advisor William MacAskill (like another bean counter from philosophy, Toby Ord) is a poster boy for Utilitarian Intellectuals; and I do hope this causes people to downgrade their faith in that community in general.

Despite what Joshua Achiam says (and what you say too), the issue is exactly that they are rather effective while their means and goals are suspect: they are displaying the generic instrumental power-seeking behavior, and the nihilistic absence of scruples, typical of people with messiah delusions, like the Bolsheviks. The effect of Bankman's stunt doesn't end with burning the crypto ecosystem after directly financing EAs and some Dems. Consider that the Hacker News sheep, representative of the gainfully employed mid-career SV techie Outer Party zeitgeist, are bleating the expected lines:

This has been a wonderful social experiment, enabling people who didn’t live in the 19th century to see what happens when banking and investing is treated like the wild west.

Maybe something could have been different this time? In the end it wasn’t, and it proves the necessity of regulation and institutions.

This is in line with the arms race against the (speculative) Chinese AGI threat, and it leads us straight into the Singleton's maw. Buh-but it'll be a good Singleton, amirite comrade? Maybe, comrade; sit tight and watch. In its embryonic stage it's nothing more than a larva…


I have one additional thing to say. People like @TheDag and many others seem to be under the impression that EA is a vague grey-tribe moral movement that supports every tenet of the essentially Yudkowskian and Extropian school of LessWrong thought. This is not so. They have arrived at a coherent and convenient philosophy with peculiar, alien priorities, allegedly through shutting up and calculating. E.g., MacAskill is very cold towards cryonics and focuses on perpetuating replicator dynamics, just on a cosmic scale, but without regard for individual kin lines (because utility is utility). And his advisee Bankman-Fried has the following to say:

COWEN: As I understand your views, you’re a fairly pure Benthamite utilitarian. Is that correct?

BANKMAN-FRIED: That’s correct.

COWEN: Given that that’s the case, as I see it, the replacement costs of human life are pretty low, so you could spend a modest amount of money and get people to have more kids. So why then should we ever spend a whole lot of money on life extension since we can just replace people pretty cheaply? We can grow utils more easily than save them, is another way to put it.

BANKMAN-FRIED: Yes, I agree. […] Speaking for myself here, I will say that I find that I’m not very compelled by life extension research for the exact reason you’ve said. I think that it is really cool, really f-cking cool, but I’m not sure it’s the most pressing problem for the world. As you said, we’ve been getting on okay without it. There are real human costs to it. It would be great to have, but I don’t think it’s necessary for the flourishing of the world.

Benthamism is incompatible with my Russian Cosmist moral imperatives. I trust Mark Zuckerberg or Peter Thiel a million times more than I trust those people. And speaking of Thiel, he has just recently delivered an excellent and very brave, if rambling, speech:

I found another article from Nick Bostrom, who's sort of an Oxford academic, and, you know, most of these people are sort of… There's somehow… They're interesting because they have nothing to say; they're interesting because they're just mouthpieces, like the mouth of Sauron. Just sort of cogs in machines, but they're useful because they tell us exactly where the zeitgeist is, in some ways. And this was from 2019, pre-COVID: «The Vulnerable World Hypothesis». And that goes through a whole litany of these different ways where science and technology are creating all these dangers for the world, and what do we do about them.

And it's the precautionary principle, whatever that means. But then he has a four-part program for achieving stabilization, and I will just read off the four things you need to do to make our world less vulnerable and achieve stabilization. We have this exponentiating technology where maybe it's not progressing that quickly, but it's still progressing quickly enough that there are a lot of dangerous corner cases.

You only need to do these four things to stabilize the world. Number one: restrict technological development. Number two: ensure that there does not exist a large population of actors representing a wide and recognizably human distribution of motives. So that sounds somewhat incompatible with DEI, at least in the diversity-of-ideas form of diversity. Number three: establish extremely effective preventive policing. And number four: establish effective global governance, since you can't let… you know, even if there's like one little island somewhere where this doesn't apply, it's no good.

And so, basically, this is, you know, this is the zeitgeist on the other side. It is the precautionary principle. It is: we're not going to make it for another century on this planet, and therefore we need to embrace a one-world totalitarian state right now.

And so, yeah… The first counterargument is: science is great, it's [unclear]. Counterargument: no, it's not. Third main counterargument: well, science is too dangerous, we have to slow it down, so it's good that it's not so great; we're slowing it down, we just slow it down even more. And then the counter-counterargument, where we return to classical liberalism, is that however dangerous science and technology are, it seems to me that totalitarianism is far more dangerous. And, you know, whatever the dangers are in the future, we need to never underestimate the danger of a one-world totalitarian state. Once you get that, it's hard to see how it ends.

There's always the frame where… I think it's in First Thessalonians, chapter 5, verse 3. The political slogan of the Antichrist is: peace and safety. What I want to suggest is that you get it when you have a homogenized one-world totalitarian state. And what I want to suggest in closing is that perhaps we would do well to be a little bit more scared of the Antichrist and a little bit less scared of Armageddon. Thank you very much.

Amen to that. @TheDag, do not ask for a Messiah. You'll get a false one.

to hushed discussions of «pivotal act outside the Overton Window»

What is this referring to?

This is in line with the arms race against the (speculative) Chinese AGI threat, and it leads us straight into the Singleton's maw. Buh-but it'll be a good Singleton, amirite comrade?

Does anyone involved actually believe this? The whole point of the idea of burning all the GPUs is that we're currently facing a smorgasbord of bad singletons, and the only thing we can do is sabotage the slot machine so we can keep spinning it until we figure out which option gives us a payout, rather than the current expected outcome, which is that a hand with a knife comes out and shivs us in the gut.

Who actually thinks current AGI projects lead to aligned superintelligence? Name names, so that Yud can go yell at them some more.

(My own pet theory is that we'll get a good singleton by prompt engineering GPT-4. I believe this primarily because it will be hilarious and deeply, deeply embarrassing for the species.)

edit: The sense I get as a singularitarian is that they don't disagree that a one-party/one-world totalitarian state is the most dangerous thing imaginable; rather, one-world totalitarianism via singleton is a black hole that we're falling into at astonishing speed. If we win, it will be by choosing some sort of trajectory where the place we fall in is, for some reason, the one place humans can survive; we have no idea how to do that, and most of the engines on our spaceship, most of the incentive gradients, are pointed down into it at the moment. "Let's not do that" would be great if, you know, we could.

Who actually thinks current AGI projects lead to aligned superintelligence? Name names, so that Yud can go yell at them some more.

Sure. Off the top of my head, all of the following groups are explicitly building an AGI and believe that it's going to be aligned:

  1. https://openai.com/about/

  2. https://www.deepmind.com/blog/real-world-challenges-for-agi

  3. https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/

  4. https://ai.facebook.com/blog/yann-lecun-advances-in-ai-research/

  5. https://twitter.com/ID_AA_Carmack/status/1560728042959507457

  6. https://generallyintelligent.ai/about

According to Yuddites, #4 is EVIL EVIL TERRIBLE (because Facebook/Meta AI are less secretive and share a higher percentage of their models with the community), while #2 is supposedly trustworthy enough and cooperates with the AI safety guys, and #5 is a YC darling.

As a bonus, straight-up EA-associated teams:

(Regarding #6, Redwood: Eric Jang's haiku is on point:

Reward hacking bad!

Max likelihood not aligned!

*uses PPO*

In light of current events, all those haikus are surprisingly prescient, especially the "A virtuous life / As imagined by Jane St / Phew, just a bad dream!" one.)

The point is, though, I agree with those researchers, and with your GPT-4 take. Some of them (and, given time, all of them) will build a safe, prosaically-aligned-by-design AGI in the process of learning to smoothly scale up useful current-gen models and improving benchmark and human-preference performance. InstructGPT shows the way towards generalization for alignment and human-level understanding at once. Just asking the AI to be nice, plus a bit of tinkering with objectives, will work. I straight up do not believe that the hypothetical rogue AGI has much to do with the singleton threat. The premise of AI risk as formulated by Bostrom, Yud, and those newer, fancier, more «professional» LessWrong/EA bozos, the one about self-improving RL agents learning from first principles, is not in tune with the state of the art in the industry; it's just obsolete. Their predictions have been wrong, and the purported threat model is evolving ever more convoluted protective belts of special pleading, like any failing paradigm, while their research program has failed.
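To make "just asking the AI to be nice" concrete, here is a minimal sketch of what prosaic alignment looks like at the level a user can actually touch, using the 2022-era OpenAI completions client. The model name, preamble wording, and sample question are illustrative assumptions on my part, not anything these labs ship as an alignment method; the real alignment work happened upstream, in RLHF fine-tuning on human preferences.

```python
# Toy illustration of "prosaic alignment": the heavy lifting was done by
# RLHF fine-tuning; what remains is a plain-language instruction.
# Assumes the pre-1.0 openai client (e.g. a 2022-era release like 0.25).
import openai

openai.api_key = "sk-..."  # your key here

# The "be nice" objective, stated in natural language. An InstructGPT-class
# model has already been tuned from human feedback to follow instructions
# like this rather than merely continue the text.
PREAMBLE = (
    "You are a helpful, honest, and harmless assistant. "
    "If a request could cause harm, refuse and say so plainly.\n\n"
)

def ask(question: str) -> str:
    """Wrap a question in the niceness preamble and return the completion."""
    response = openai.Completion.create(
        model="text-davinci-002",  # illustrative InstructGPT-class model
        prompt=PREAMBLE + "Q: " + question + "\nA:",
        max_tokens=256,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()

if __name__ == "__main__":
    print(ask("Is it ever OK to gamble with customer deposits?"))
```

That's the whole trick: objectives expressed as text, generalization doing the rest, with no agent-foundations machinery anywhere in sight.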

The real Singleton, the one Thiel calls Antichrist, will be the same old (but now positively deathless) Hobbesian Sovereign, made of people: people currently close to the effective levers of power of the American empire, people addicted to control and safety. It doesn't matter much to me which tech stack, developed by which American company, they end up weaponizing before dismantling all of them on national-security and X-risk grounds.

The only realistic hedge against that is political and agentic multipolarity, proliferation of the technology in the spirit of Musk-era OpenAI and perhaps current Stability.AI, and that's exactly what AI alarmists with their «global governance» fetish are against. In fact, they justify a totalitarian one-world government as the lesser evil by speculating about the odds of a Clippy who'll simply eradicate humanity as such.

I acknowledge that if my model's wrong and a Clippy equivalent is probable, it will wipe out all sentient life, including untold trillions of descendants of my enemies, and the Universe will be functionally infertile, which is aesthetically very bad, marginally worse than a Universe dominated by said descendants. On the other hand, if AI alarmists fail to stifle competition and progress, and a decentralized, diverse set of strong AIs and their users emerges (and not just my enemies operating their singleton), the world will be aligned much better with my interests.

I am not a utilitarian and do not give much of a fuck about the happiness of people who do not give much of a fuck about me (I tried metta meditation, but it doesn't sit right with me). My current estimation is that it's better to err on the side of AI risk than on the side of human-totalitarianism risk. Therefore, same as Thiel, and the opposite of Gwern: Armageddon is preferable to Antichrist.

My expectation is still that early takeoff is so powerful (because the overhang is so large) that the multipolar scenario basically cannot happen. Whoever goes first implements the pivotal act anyway, and successfully. The only positive outcome comes from lucking into a compatible sovereign.

I presume this is our primary disagreement.

Reported for being a quality contribution.

Re: the tweets about how Someone's Dad knew Somebody Else: that both is and isn't sinister. It's not sinister because it's not some big planned-out conspiracy; it's just family using contacts and networking to help out their kids, like most people would do if they needed to. Johnny is looking for a summer job while he's in college, Uncle Bob knows a guy in the field, Uncle Bob is Mom's brother, so she asks him to ask his friend to give Johnny an internship.

It is sinister insofar as these are the kids of rich, influential, well-connected people. So they have an immediate advantage over anybody else: when your dad is an old college pal of the head of the SEC, you have an in that very few other people do, and you have access to the circles where there is a lot of money that people might be willing to send your way.

As for the rest of it, eh, Utilitarianism. I'm sticking with deontology 😁 There's such a thing as being so smart you're an idiot. I can admire sticking to a position even when the hard conclusion comes out and people won't like it, or you, but you can go so far along the path that you reach the reductio ad absurdum, and at that point it would be a lot better to say, "Okay, so far and no further."