This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Fired Superalignment Researcher Drops Four-and-a-Half-Hour Podcast With Dwarkesh Patel, 165-Page Manifesto.
[Podcast] [Manifesto]
Leopold Aschenbrenner graduated valedictorian at Columbia in 2021 at the age of 19. He went on to work for the FTX Future Fund before the fiasco, then wound up at OpenAI on the Superalignment team. In April of this year, he was fired, ostensibly for "leaking". In Leopold's telling, he was fired for voicing security concerns (not to be confused with safety concerns) directly to the board. At post-coup OpenAI, being the kind of guy who would write a manifesto is a massive liability. Private interpretation of the Charter is forbidden.
Leopold's thesis is that AGI is coming soon, and that the central danger is national security, not alignment. A major theme is how easy it would be for the CCP to gain access to critical AI-related capabilities secrets via espionage, given the current state of security at frontier AI labs. I was a bit confused at the time of the firing as to what Eliezer meant by calling Leopold a "political opponent", but it is very clear in retrospect. Leopold wants to accelerate AI progress in the name of Western dominance, making America the "compute cluster of democracy". He is very concerned that lax security or a failure to keep our eyes on the prize could cost us our lead in the AI arms race.
What comes through in the podcast in a way that doesn't from the manifesto is how intellectually formidable Leopold seems. He is thoughtful and sharp throughout, on every question. Admittedly I may be biased: Leopold is thoroughly Gray Tribe ingroup. He has been on Richard Hanania's podcast, and mentions Tyler Cowen as one of his influences. It is tempting to simply nod along as the broad outline of the next five years is sketched out, as if the implications of approaching AGI were straightforward and incontrovertible.
The one thing notably missing is any are-we-the-baddies?-style self-reflection. The phrase "millions or billions of mosquito-sized drones" is uttered at one point. It makes sense in the military context of the conversation, but I really think more time should have been spent on the political, social, and ethical implications. He seems to think that we will still be using something like the US Constitution as the operating system of the post-AGI global order, which seems... unlikely. Maybe disillusionment with the political system is one of those things that can't be learned from a book, and can only come with age and experience.
The idea that GPT-4 has smart-high-schooler levels of intelligence is silly. You can give it very simple logic puzzles that are syntactically similar to other logic puzzles and it will give you the wrong answer. Ex:
...
Note that the very first step violates the rules! This is because GPT-4 is not answering the question I asked (which it does not understand); it is answering the very syntactically similar question in its training set.
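Since the original puzzle is elided above, here is a minimal sketch of the kind of probe being described, using a perturbed river-crossing puzzle as a hypothetical stand-in (the puzzle wording and the model name are my assumptions, not the example from the comment). It assumes the openai Python package (v1+) and an API key in the environment:

```python
# Sketch: probe a chat model with a logic puzzle that is syntactically
# close to a classic training-set puzzle but has a changed constraint.
# Hypothetical example, not the puzzle used in the parent comment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Classic wolf/goat/cabbage setup, perturbed: the boat now holds the
# farmer plus TWO items, so the puzzle becomes trivial (take goat and
# cabbage over, return, take the wolf).
puzzle = (
    "A farmer must ferry a wolf, a goat, and a cabbage across a river. "
    "The boat carries the farmer plus up to two items at once. Left "
    "unattended together, the wolf eats the goat and the goat eats the "
    "cabbage. Give the shortest sequence of crossings."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; any chat model works here
    messages=[{"role": "user", "content": puzzle}],
)
print(response.choices[0].message.content)
```

A model that pattern-matches on the classic one-item version may open by ferrying a single item, the kind of first-step rule violation described above.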
And if you asked some 110-IQ high schoolers this question, half would get it wrong; half get even the original question wrong, or can't supply an answer at all. There are plenty of examples of this even among very intelligent people: 75% of economics PhD students and professors got a simple question about opportunity costs wrong because it was worded strangely, despite the fact that it is formally an easy question to answer. It's easy to trick LLMs, just as it's easy to trick people. That doesn't mean either is stupid.
I'm not saying the current models do original, meaningful reasoning. If they could, the whole world would be turned upside down, and we wouldn't be debating whether they could.
I think GPT-20 will be able to do that kind of thing in 50 years, either because all we need is scaling, or because we will make some new advance in the underlying architecture.
My point is more that high schoolers don't do meaningful original reasoning either. Monkey see, monkey do. Most human innovation is just random search that is copied by others.
The fact that this machine is dumb isn't surprising; almost all things are dumb, and most humans are too. That it can do anything at all is an innovation that puts all the rest to shame.
It's like being mad that the first organism to evolve a proto-neuron or proto-central nervous system can't add 2+2 correctly.