This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I don't expect him to be 100% on the ball, but which of his major predictions have actually come true? In a vague sense, yes, AI is getting better, but I don't think anybody thought AI was never going to improve. There's a big gap between that and predicting that we'll invent AGI and that it will kill us all. His big predictions, in my book, are:
- We will invent AGI.
- It will be able to make major improvements to itself in a short span of time.
- It will have an IQ of 1000 (or whatever), and that will essentially give it superpowers of persuasion.
None of those have come true or look (to me) particularly likely to come true in the immediate future. It would be premature to give him credit for predicting something that hasn't happened.
Decent post with an overview of Yud's predictions: On Deference and Yudkowsky's AI Risk Estimates.
In general, Yud was always confident, believing himself to know General High-Level Reasons for things to go wrong absent intervention in the direction he advises. But his nontrivial ideas were erroneous, and his correct ideas were trivial, in the sense that many people in the know thought the same things; they just aren't niche nerd celebrities. E.g. Legg in 2009.
Hanson was sorta correct about data, compute, and human imitation.
Meanwhile, Yud called protein folding, but thought solving it would already require an agentic AGI, one that would develop it in order to mind-rape us.
Or how about this, Yud in 2021: 'I expect world GDP to tick along at roughly the current pace, unchanged in any visible way by the precursor tech to AGI; until, on the most probable outcome, everybody falls over dead in 3 seconds after diamondoid bacteria release botulinum into our blood.'
But Yud has clout, so people praise him for Big Picture Takes and hail him as a Genius Visionary.
Excerpts:
…in conclusion, I think I'm starting to understand another layer of Krylov's genius. He had this recurring theme in his fictional work, which I considered completely meta-humorous, that The Powers That Be inject particular notions into popular science fiction, to guide the development of civilization towards tyranny. Complete self-serving nonsense, right? But here we have a regular sci-fi fan donning the mantle of AI Safety Expert and forcing absolutely unoriginal, age-old sci-fi/jorno FUD into the mainstream, once technology does in fact get close to the promised capability and proves benign. Grey goo (to divest from actually promising nanotech), AI (to incite the insane mob to attempt a Butlerian Jihad, and have regulators intervene, crippling decentralized developments). Everything's been prepped in advance, starting with Samuel Butler himself.
Feels like watching Ronnie O'Sullivan in his prime.
He seems like a character out of a Kurt Vonnegut novel.
Us tinfoil hatters call it "negative priming".
I don't think you're giving him enough credit. Before he was known as the "doom" guy, he was known as the "short timelines" guy. We are now arguing about doom precisely because it is increasingly clear that timelines are in fact short. His conceptualization of intelligence as generalized reasoning power also seems to jibe with the observed rapid capability gains in GPT models. The fact that next-token prediction generalized to coding skill, among myriad other capabilities, would seem to be evidence in favor of this view.
2010, to be precise.
Eh. I gave him some respect back when he was simply arguing that timelines could be short and the consequences of being wrong could be disastrous, so we should be spending more resources on alignment. This was a correct if not particularly hard argument to make (note that he certainly was not the one who invented AI Safety, despite his hallucinatory claim in "List of Lethalities"), but he did a good job popularizing it.
Then he wrote his April Fool's post, and it's all been downhill from there. Now he's an utter embarrassment, and frankly I try my best not to talk about him, for the same reason I'd prefer that media outlets stop naming school shooters. The less exposure he gets, the better off we all are.
BTW, as for his "conceptualization of intelligence", it went beyond the tautological "generalized reasoning power", which is, um, kind of the definition. He strongly pushed the Orthogonality Thesis (one layer of the tower of assumptions his vision of the future is built on), which holds that the space of possible intelligences is vast and that AGIs are likely to be completely alien to us, with no hope of mutual understanding. That is at least a non-trivial claim, but it is not doing so hot in the age of LLMs.