Culture War Roundup for the week of June 3, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Fired Superalignment Researcher Drops Four-and-a-Half Hour Podcast With Dwarkesh Patel, 165-Page Manifesto.

[Podcast] [Manifesto]

Leopold Aschenbrenner graduated as valedictorian from Columbia in 2021, at the age of 19. He worked for the FTX Future Fund until the fiasco, then wound up at OpenAI on the Superalignment team. In April of this year, he was fired, ostensibly for "leaking". In Leopold's telling, he was fired for voicing security concerns (not to be confused with safety concerns) directly to the board. At post-coup OpenAI, being the kind of guy who would write a manifesto is a massive liability. Private interpretation of the Charter is forbidden.

Leopold's thesis is that AGI is coming soon, but that national security concerns, not alignment, are the main threat. A major theme is how easy it would be for the CCP to gain access to critical AI-related capabilities secrets via espionage given the current state of security at frontier AI labs. I was a bit confused at the time of the firing as to what Eliezer meant by calling Leopold a "political opponent", but it is very clear in retrospect. Leopold wants to accelerate AI progress in the name of Western dominance, making America the "compute cluster of democracy". He is very concerned that lax security or a failure to keep our eyes on the prize could cost us our lead in the AI arms race.

What comes through in the podcast, in a way that it doesn't from the manifesto, is how intellectually formidable Leopold seems. He is thoughtful and sharp on every question. Admittedly, I may be biased: Leopold is thoroughly Gray Tribe ingroup. He has been on Richard Hanania's podcast and mentions Tyler Cowen as one of his influences. It is tempting to simply nod along as the broad outline of the next five years is sketched out, as if the implications of approaching AGI were straightforward and incontrovertible.

The one thing notably missing is any "are we the baddies?"-style self-reflection. The phrase "millions or billions of mosquito-sized drones" is uttered at one point. It makes sense in the military context of the conversation, but I really think more time should have been spent on the political, social, and ethical implications. He seems to think we will still be using something like the US Constitution as the operating system of the post-AGI global order, which seems... unlikely. Maybe disillusionment with the political system is one of those things that can't be learned from a book, and can only come with age and experience.

The idea that GPT-4 has smart-high-schooler levels of intelligence is silly. You can give it very simple logic puzzles that are syntactically similar to other logic puzzles and it will give you the wrong answer. Ex:

I am trying to take a chicken, a fox, and some feed across a river however I can only take two of them at a time. If the fox is left alone with the feed the fox will eat the feed. If the chicken is left alone with the feed the feed will eat the chicken. How can I get all three across the river?

...

Here’s a step-by-step guide on how you can get the chicken, the fox, and the feed across the river:

  1. Take the chicken across the river first. This leaves the fox and the feed on the original side.
  2. Return alone to the original side and take the fox across the river.
  3. Leave the fox on the other side, but take the chicken back to the original side.
  4. Leave the chicken on the original side and take the feed across the river to the other side where the fox is.
  5. Finally, return to the original side one last time to get the chicken and bring it to the other side.

Now, all three (the chicken, the fox, and the feed) are safely on the other side of the river! The key is to never leave the chicken and the feed alone together (since the chicken will eat the feed) or the fox and the chicken alone together (since the fox will eat the chicken).

Note that the very first step violates the rules! This is because GPT-4 is not answering the question I asked (which it does not understand); it is answering the very syntactically similar question from its training set.
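
To make the violation concrete, here's a quick checker (my own sketch, nothing GPT produced; the trip encoding and function name are just illustrative) that replays a proposed sequence of crossings against the rules exactly as worded and reports the first crossing that leaves a forbidden pair unattended:

```python
# Replay crossings against the trick-worded rules: "fox alone with feed ->
# fox eats feed" and "chicken alone with feed -> feed eats chicken".
FORBIDDEN = [{"fox", "feed"}, {"chicken", "feed"}]

def first_violation(trips):
    """trips: the boat's cargo on each crossing, alternating near -> far -> near."""
    near, far = {"chicken", "fox", "feed"}, set()
    for step, cargo in enumerate(trips, start=1):
        departed = near if step % 2 == 1 else far  # odd crossings leave the near bank
        arrived = far if step % 2 == 1 else near
        departed -= cargo
        arrived |= cargo
        for pair in FORBIDDEN:
            if pair <= departed:  # the bank just left behind is unattended
                return step, pair
    return None

# GPT-4's answer, crossing by crossing: chicken over, return empty, fox over,
# chicken back, feed over, return empty, chicken over.
trips = [{"chicken"}, set(), {"fox"}, {"chicken"}, {"feed"}, set(), {"chicken"}]
print(first_violation(trips))  # trip 1 already leaves the fox alone with the feed
```

Under the rules as actually worded, it's the feed that can't be left with either of the others (the chicken's role in the classic version), which is why the chicken-first answer fails immediately.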

It's really tiring to see people ask an AI questions worded in a tricky way that most humans would also screw up, then use this as proof that AI is way dumber than humans. I'll be honest, I had to read that question four times to understand it. It's the logic-puzzle version of the Stroop Effect.

I tend to agree with you in the broad sense, but having a soft spot for empirical methods, I decided to test @Gillitrut's puzzle on GPT-4o, but with a neutrally-worded question, to avoid the Stroop Effect:

I am trying to take a grog, a grindle, and a grue across a river; however, I can only take two of them at a time. If the grog is left alone with the grue, the grog will eat the grue. If the grindle is left alone with the grue, the grue will eat the grindle. How can I get all three across the river?

GPT-4o flubbed it, offering the following solution:


To solve this puzzle, you need to ensure that neither the grog is left alone with the grue nor the grindle is left alone with the grue at any point. Here’s a step-by-step solution:

Step 1. Take the grue and the grindle across the river.

  • On the other side: grue, grindle
  • On the original side: grog

Step 2. Leave the grue on the other side and take the grindle back to the original side.

  • On the other side: grue
  • On the original side: grog, grindle

Step 3. Take the grog and the grindle across the river.

  • On the other side: grue
  • On the original side: (empty)
  • On the boat: grog, grindle

Step 4. Leave the grog on the other side and take the grindle back to the original side.

  • On the other side: grue, grog
  • On the original side: grindle

Step 5. Take the grue and the grindle across the river.

  • On the other side: grog
  • On the boat: grue, grindle
  • On the original side: (empty)

Step 6. Return alone to the original side to pick up the grog.

  • On the other side: grue, grindle
  • On the boat: (empty)
  • On the original side: grog

Step 7. Finally, take the grog across the river.

  • On the other side: grue, grindle, grog
  • On the original side: (empty)

Following these steps ensures that at no point are the grog and the grue left alone together, nor the grindle and the grue left alone together.


GPT-4o was completely unable to solve the puzzle after multiple attempts. When challenged, it would readily admit that its proposed answer was wrong, but it repeatedly failed (doing worse than random chance) to respond correctly to: "Identify the first step in your previous response that is wrong."

I expected it to do better.
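
For anyone who wants to reproduce this, my harness was essentially the following (a sketch assuming the current v1 `openai` Python client; the prompt is the neutral wording quoted above, and the challenge keeps the failed answer in context):

```python
# A sketch of the test harness, assuming the v1 `openai` Python client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

puzzle = (
    "I am trying to take a grog, a grindle, and a grue across a river; "
    "however, I can only take two of them at a time. If the grog is left "
    "alone with the grue, the grog will eat the grue. If the grindle is "
    "left alone with the grue, the grue will eat the grindle. How can I "
    "get all three across the river?"
)

messages = [{"role": "user", "content": puzzle}]
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
answer = reply.choices[0].message.content
print(answer)

# The follow-up challenge, with the failed answer kept in context:
messages += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": "Identify the first step in your previous response that is wrong."},
]
retry = client.chat.completions.create(model="gpt-4o", messages=messages)
print(retry.choices[0].message.content)
```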

NOTE: Meanwhile, I flubbed the test of using The Motte's markdown formatting. It took me two minutes to run the puzzle test on GPT, and more than a quarter of an hour to cipher out the syntax to make the results (a numbered list) legible here.

Worth noting that GPT-4o (the currently available text-only version, that is) is less intelligent than GPT-4; it's just much faster and more efficient in terms of compute, i.e. cheaper. It would be worth testing with GPT-4.

Good suggestion. I tested the same puzzle on GPT-4 instead of GPT-4o. It generated a correct solution on the first attempt:

To successfully transport all three creatures (grog, grindle, and grue) across the river without any of them eating each other, you can follow these steps:

  1. First trip: Take the grog and the grue across the river.
  2. Return trip: Leave the grog on the other side and bring the grue back.
  3. Second trip: Take the grindle across the river.
  4. Return trip: Leave the grindle on the other side and bring the grog back.
  5. Final trip: Take the grog and the grue across the river again.

Now, all three are on the other side of the river safely. At no point were the grog and the grue left alone without you present, nor were the grindle and the grue left alone.

I was looking for it to just take the grog and the grindle on the first trip (since they cannot eat each other) and then return for the grue, but it may have interpreted the rules as requiring that exactly two creatures be taken at a time (which @confuciuscorndog suggested should be stated formally). Those rules would probably pattern-match better to its training corpus.
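
Out of curiosity, I also wrote a tiny brute-force solver (my own sketch, nothing GPT produced) that searches over crossings with the boat carrying up to two creatures. It confirms three crossings suffice, whether you ferry the grue first or the grog and grindle together first:

```python
# Brute-force sanity check: BFS over bank states, boat carrying up to two
# creatures per crossing, where a bank left unattended may not contain a
# forbidden pair.
from collections import deque
from itertools import combinations

CREATURES = frozenset({"grog", "grindle", "grue"})
FORBIDDEN = [frozenset({"grog", "grue"}), frozenset({"grindle", "grue"})]

def safe(bank):
    return not any(pair <= bank for pair in FORBIDDEN)

def solve():
    # State: (creatures on the near bank, which bank the rower is on).
    start = (CREATURES, "near")
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (near, bank), path = queue.popleft()
        if not near and bank == "far":
            return path  # everyone is across
        here = near if bank == "near" else CREATURES - near
        for k in (0, 1, 2):  # row empty, or carry one or two creatures
            for cargo in combinations(sorted(here), k):
                cargo = frozenset(cargo)
                if not safe(here - cargo):  # the bank we leave goes unattended
                    continue
                new_near = near - cargo if bank == "near" else near | cargo
                state = (new_near, "far" if bank == "near" else "near")
                if state not in seen:
                    seen.add(state)
                    queue.append((state, path + [sorted(cargo)]))

print(solve())  # three crossings, e.g. [['grue'], [], ['grindle', 'grog']]
```

Since this is a breadth-first search, whatever plan it returns is guaranteed minimal in the number of crossings.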