
Culture War Roundup for the week of June 3, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Fired Superalignment Researcher Drops Four-and-a-Half Hour Podcast With Dwarkesh Patel, 165-Page Manifesto.

[Podcast] [Manifesto]

Leopold Aschenbrenner graduated valedictorian at Columbia in 2021 at the age of 19. He worked for the FTX Future Fund before the fiasco, then wound up at OpenAI on the Superalignment team. In April of this year, he was fired, ostensibly for "leaking". In Leopold's telling, he was fired for voicing security concerns (not to be confused with safety concerns) directly to the board. At post-coup OpenAI, being the kind of guy who would write a manifesto is a massive liability. Private interpretation of the Charter is forbidden.

Leopold's thesis is that AGI is coming soon, but that national security concerns, not alignment, are the main threat. A major theme is how easy it would be for the CCP to gain access to critical AI-related capabilities secrets via espionage given the current state of security at frontier AI labs. I was a bit confused at the time of the firing as to what Eliezer meant by calling Leopold a "political opponent", but it is very clear in retrospect. Leopold wants to accelerate AI progress in the name of Western dominance, making America the "compute cluster of democracy". He is very concerned that lax security or a failure to keep our eyes on the prize could cost us our lead in the AI arms race.

What comes through in the podcast in a way that doesn't from the manifesto is how intellectually formidable Leopold seems. He is thoughtful and sharp at every turn, on every question. Admittedly I may be biased. Leopold is thoroughly Gray Tribe ingroup. He has been on Richard Hanania's podcast, and mentions Tyler Cowen as one of his influences. It is tempting to simply nod along as the broad outline of the next 5 years is sketched out, as if the implications of approaching AGI are straightforward and incontrovertible.

The one thing that is notably missing is any are-we-the-baddies?-style self-reflection. The phrase "millions or billions of mosquito-sized drones" is uttered at one point. It makes sense in the military context of the conversation, but I really think more time should have been spent on the political, social, and ethical implications. He seems to think that we will still be using something like the US Constitution as the operating system of the post-AGI global order, which seems... unlikely. Maybe disillusionment with the political system is one of those things that can't be learned from a book, and can only come with age and experience.

The idea that GPT-4 has smart-high-schooler levels of intelligence is silly. You can give it very simple logic puzzles that are syntactically similar to other logic puzzles and it will give you the wrong answer. Ex:

I am trying to take a chicken, a fox, and some feed across a river however I can only take two of them at a time. If the fox is left alone with the feed the fox will eat the feed. If the chicken is left alone with the feed the feed will eat the chicken. How can I get all three across the river?

...

Here’s a step-by-step guide on how you can get the chicken, the fox, and the feed across the river:

Take the chicken across the river first. This leaves the fox and the feed on the original side.

Return alone to the original side and take the fox across the river.

Leave the fox on the other side, but take the chicken back to the original side.

Leave the chicken on the original side and take the feed across the river to the other side where the fox is.

Finally, return to the original side one last time to get the chicken and bring it to the other side.

Now, all three (the chicken, the fox, and the feed) are safely on the other side of the river! The key is to never leave the chicken and the feed alone together (since the chicken will eat the feed) or the fox and the chicken alone together (since the fox will eat the chicken).

Note that the very first step violates the rules! This is because GPT-4 is not answering the question I asked (which it does not understand); it is answering the very-syntactically-similar question from its training set.
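If anyone wants to reproduce this, here's a minimal sketch, assuming the official openai Python client (v1+) and an OPENAI_API_KEY in your environment; the model name is just illustrative:

```python
# Minimal sketch: send the modified river-crossing puzzle to the model and
# print its answer. Assumes the `openai` package and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

puzzle = (
    "I am trying to take a chicken, a fox, and some feed across a river "
    "however I can only take two of them at a time. If the fox is left alone "
    "with the feed the fox will eat the feed. If the chicken is left alone "
    "with the feed the feed will eat the chicken. How can I get all three "
    "across the river?"
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": puzzle}],
)
print(response.choices[0].message.content)
```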

Have you tested your modified puzzle on a smart-high-schooler? "The feed will eat the chicken" lol.

I think one difference is that, when you explain the mistake to the high schooler, he will get it, and he won't be fooled next time.

Whereas the AI will just keep being stupid. Even when it says it understands, it doesn't. You can walk it through the steps, and it will pretend to understand, then spit back the same wrong answer. I've seen it a million times. To get the AI to stop being moronic, you need to do an expensive training run with the trick questions in the training set.
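To make the "walk it through the steps" part concrete, here's roughly what that loop looks like, under the same assumptions as the snippet above (the openai client, an illustrative model name, and the `puzzle` string defined there):

```python
# Sketch only: continue the conversation by pointing out the rule violation
# and asking again. Assumes `client` and `puzzle` from the earlier snippet.
messages = [{"role": "user", "content": puzzle}]
first = client.chat.completions.create(model="gpt-4", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Explain the mistake and re-ask.
messages.append({
    "role": "user",
    "content": "Your first step leaves the fox alone with the feed, which the "
               "rules say is not allowed. Please try again.",
})
second = client.chat.completions.create(model="gpt-4", messages=messages)
print(second.choices[0].message.content)
```

In my experience the second answer acknowledges the correction and then makes the same kind of mistake anyway.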

You don't need a billion dollars in GPUs to train a high schooler.

I personally think AI will get there eventually, and maybe even soon, but actually using GPT-4 on a regular basis has made me more aware of its severe limitations.