This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Fired Superalignment Researcher Drops Four-and-a-Half Hour Podcast With Dwarkesh Patel, 165-Page Manifesto.
[Podcast] [Manifesto]
Leopold Aschenbrenner graduated as valedictorian from Columbia in 2021 at the age of 19. He worked for the FTX Future Fund until the fiasco, then wound up at OpenAI on the Superalignment team. In April of this year, he was fired, ostensibly for "leaking". In Leopold's telling, he was fired for voicing security concerns (not to be confused with safety concerns) directly to the board. At post-coup OpenAI, being the kind of guy who would write a manifesto is a massive liability. Private interpretation of the Charter is forbidden.
Leopold's thesis is that AGI is coming soon, but that national security concerns, not alignment, are the main threat. A major theme is how easy it would be for the CCP to gain access to critical AI-related capabilities secrets via espionage given the current state of security at frontier AI labs. I was a bit confused at the time of the firing as to what Eliezer meant by calling Leopold a "political opponent", but it is very clear in retrospect. Leopold wants to accelerate AI progress in the name of Western dominance, making America the "compute cluster of democracy". He is very concerned that lax security or a failure to keep our eyes on the prize could cost us our lead in the AI arms race.
What comes through in the podcast in a way that doesn't from the manifesto is how intellectually formidable Leopold seems. He is thoughtful and sharp at all times and for all questions. Admittedly I may be biased. Leopold is thoroughly Gray Tribe ingroup. He has been on Richard Hanania's podcast, and mentions Tyler Cowen as one of his influences. It is tempting to simply nod along as the broad outline of the next 5 years is sketched out, as if the implications of approaching AGI are straightforward and incontrovertible.
The one thing that is notably missing is are-we-the-baddies? style self-reflection. The phrase, "millions or billions of mosquito-sized drones", is uttered at one point. It makes sense in the military context of the conversation, but I really think more time should have been spent on the political, social, and ethical implications. He seems to think that we will still be using something like the US Constitution as the operating system of the post-AGI global order, which seems... unlikely. Maybe disillusionment with the political system is one of those things that can't be learned from a book, and can only come with age and experience.
Is there somewhere I can read more background on this guy? I'm seeing lots of memetic breathlessness on Twitter as if he is a Person That Matters, but... I don't get it. He appears to be gay-coded and to have extremely high verbal intelligence, but I don't see much substance in any of what he is saying. It reminds me a lot of various cryptocurrency gurus: the outfit, the coiffed hair, the speaking as if what he is saying is somehow extremely complex but he is Here to Explain It For You.
He seems like a skinny Charles Hoskinson.
Also, I want to register my extreme doubts about where AI is going, or whether "AGI" is coming. Current AI, as I use it daily, seems very good at information retrieval, which it then brute-forces into something that looks a lot like human speech, but there is no intelligence there. It's a trillion monkeys typing at a trillion typewriters all at the same time, and then the machine reads all of their outputs (quickly) and finds one that looks like it matches your input.
It's really cool, but getting more monkeys at more typewriters doesn't produce intelligence.
AI is a philosophical zombie: https://en.wikipedia.org/wiki/Philosophical_zombie
At some point people pretending that it is anything other than this just come across as stupid, especially when they try to dress up what they're saying.
I certainly hope current LLMs are philosophical zombies, or we're committing some pretty heinous moral crimes!
But why should a lack of qualia imply AI can't be a potent weapon? Conscious subjective experience, much as I enjoy it, is not the core element of human intelligence which allowed us to reach such incredible heights of lethality compared to our ancestors.