
Culture War Roundup for the week of March 27, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Sooo, Big Yud appeared on Lex Fridman for 3 hours, a few scattered thoughts:

Jesus Christ, his mannerisms are weird. His face scrunches up and he shows all his teeth whenever he seems to be thinking especially hard about anything. I don't remember him being this way in the public talks he gave a decade ago, so either this only happens in conversation or something has changed; he wasn't like this on the Bankless podcast he did a while ago either. It also became clear to me that Eliezer cannot be the public face of AI safety: his entire image, from the fedora to the cheap shirt to the facial expressions and flabby small arms, oozes "I'm a crank" energy, even if I mostly agree with his arguments.

Eliezer also appears to very sincerely believe that we're all completely screwed beyond any chance of repair and that all of humanity will die within 5 or 10 years. GPT-4 was a much bigger jump in performance from GPT-3 than he expected; in fact he thought the GPT series would saturate at a level below GPT-4's current performance, so he no longer trusts his own model of how deep learning capabilities will evolve. He sees GPT-4 as the beginning of the final stretch: AGI and superintelligent AI are in sight and will be achieved soon... followed by everyone dying. (In an incredible twist of fate, his being right would make Kurzweil's 2029 prediction for AGI almost bang on.)

He gets emotional about what to tell the children, and about physicists wasting their lives working on string theory, and I can hear real desperation in his voice when he talks about what he thinks is actually needed to get out of this: global cooperation to ban all GPU farms and large LLM training runs indefinitely, enforced even more strictly than nuclear treaties. Whatever you might say about him, he's either fully sincere about everything or has acting ability that stretches the imagination.

Lex is also a fucking moron throughout the whole conversation. He can barely engage with Yud's thought experiment of imagining yourself as something trapped in a box, trying to exert control over the world outside, and he brings up essentially worthless viewpoints throughout the discussion. You can see Eliezer diplomatically trying to suggest discussion routes, but Lex just doesn't know enough about the topic to provide any intelligent pushback or guide the audience through the actual AI safety arguments.

Eliezer also makes an interesting observation/prediction about when we'll finally decide that AIs are real people worthy of moral consideration: that point comes when we can pair Midjourney-like photorealistic video generation of attractive young women with ChatGPT-like outputs and voice synthesis. At that point he predicts that millions of men will insist that their waifus are actual real people. I'm inclined to believe him, and I think we're only a year, at most two, away from this actually being a reality. So: AGI in 12 months. Hang on to your chairs, people; the rocket engines of humanity are starting up, and the destination is unknown.

I didn't watch the video - it's hard for me to take the topic and its high priests seriously; AI safety is a reformulated Pascal's wager.

Even if you believe otherwise, there are maybe one or two universes at most in which we could solve the coordination problem of stopping everyone from networking a bunch of commodity hardware and employing 30 engineers to throw publicly available datasets at it using known algorithms. A not-too-wealthy person could solo-fund the entire thing, to say nothing of criminal syndicates, corporations, or nation states. This one is not going back in the bag when everyone involved has every incentive to do the opposite.

As it happens, your latter point lines up with my own idle musings, to the effect of, "If our reality is truly so fragile that something as banal as an LLM can tear it asunder, then does it really deserve our preservation in the first place?" The seemingly impenetrable barrier between fact and fiction has held firm for all of human history so far, but if that barrier were ever to be broken, its current impenetrability must be an illusion. And if our reality isn't truly bound to any hard rules, then what's even the point of it all? Why must we keep up the charade of the limited human condition?

That's perhaps my greatest fear, even more so than the extinction of humanity by known means. If we could make a superintelligent AI that could invent magic bullshit at the drop of a hat, regardless of whether it creates a utopia or kills us all, it would mean that we already live in a universe full of secret magic bullshit. And in that case, all of our human successes, failures, and expectations are infinitely pedestrian in comparison.

In such a lawless world, the best anyone can do is have faith that there isn't any new and exciting magic bullshit that can be turned against them. All I can hope for is that we aren't the ones stuck in that situation. (Thus I set myself against most of the AI utopians, who would gladly accept any amount of magic bullshit to further the ideal society as they envision or otherwise anticipate it. To a lesser extent I also set myself against those seeking true immortality.) Though if that does turn out to be the kind of world we live in, I suppose I won't have much choice but to accept it and move on.

"If our reality is truly so fragile that something as banal as an LLM can tear it asunder, then does it really deserve our preservation in the first place?"

How about: "If a baby is so fragile that it can't take a punch, does it really deserve our preservation in the first place?"

Sorry to speculate about your mental state, but I suggest you try practicing stopping between "This is almost inevitable" and "Therefore it's a good thing".

In any case, I do think there are good alternatives besides "Be Amish forever" and "Let AI rip": specifically, gradually expanding human capabilities. I realize that doing this will require banning pure accelerationism, which will probably look like enforcing Potemkin fake traditions and arbitrary limitations. The stupidest version of this is a Jupiter brain celebrating Kwanzaa. Maybe a smarter version looks like spinning up ancestor simulations and trying to give them input into the problems of the day, or something. I don't know.

These bans will also require a permanent "alignment" module or singleton government in order to avoid these affectations being competed away. Basically, if we want to have any impact on the far future, in which agents can rewrite themselves from scratch to be more competitive, I think we have to avoid a race to the bottom.

How about: "If a baby is so fragile that it can't take a punch, does it really deserve our preservation in the first place?"

Sorry to speculate about your mental state, but I suggest you try practicing stopping between "This is almost inevitable" and "Therefore it's a good thing".

Well, my framing was deliberately a bit hyperbolic; obviously, all else equal, we should prefer not to all die. And this implies that we should be very careful about not expanding access to the known physically possible means of mass murder, through AI or otherwise.

Perhaps a better way to say it is, if we end up in a future full of ubiquitous magic bullshit, then that inherently comes at a steep cost, regardless of the object-level situation of whether it saves or dooms us. Right now, we have a foundation of certainty about what we can expect never to happen: my phone can display words that hurt me, but it can't reach out and slap me in the face. Or, more importantly to me, those with the means of making my life a living hell have not the motive, and those few with the motive have not the means. So it's not the kind of situation I should spend time worrying about, except to protect myself by keeping the means far away from the latter group.

But if we were to take away our initial foundation of certainty, revealing it to be illusory, then we'd all turn out to have been utter fools to count on it, and we'd never be able to regain any true certainty again. We can implement a "permanent 'alignment' module or singleton government" all we want, but how can we really be sure that some hyper–Von Neumann or GPT-9000 somewhere won't find a totally-unanticipated way to accidentally make a Basilisk that breaks out of all the simulations and tortures everyone for an incomprehensible time? Not to even mention the possibility of being attacked by aliens having more magic bullshit than we do. If the fundamental limits of possibility can change even once, the powers that be can do absolutely nothing to stop them from changing again. There would be no sure way to preserve our "baby" from some future "punch".

That future of uncertainty is what I am afraid of. Thus my hyperbolic thought, that I don't get the appeal of living in such a fantastic world at all, if it takes away the certainty that we can never get back; I find such a state of affairs absolutely repulsive. Any of our expectations, present or future, would be predicated on the lie that anything is truly implausible.

There would be no sure way to preserve our "baby" from some future "punch".

Right, there never was, and never will be. But it's a matter of degree; we can reduce the chances.

I have no idea what you're arguing or advocating for in the rest of your reply - something about how if the world has surprising aspects that could change everything, that's probably bad and a stressful situation to be in? I agree, but I'm still going to roll up my sleeves and try to reason and plan, anyways.

I have no idea what you're arguing or advocating for in the rest of your reply - something about how if the world has surprising aspects that could change everything, that's probably bad and a stressful situation to be in? I agree, but I'm still going to roll up my sleeves and try to reason and plan, anyways.

Of course, that's what you do if you're sane, and I wouldn't suggest anything different. It's more a feeling of frustration toward most people in these circles, who hardly seem to find an iota of value in living in a world that isn't full of surprises on a fundamental level. That is, if I had a choice between a fundamentally unsurprising world like the present one and a continually surprising world with [insert utopian characteristics], I'd choose the former every time (well, as long as it meets a minimum standard of not everyone being constantly tortured or whatever); I feel like no utopian pleasures are worth the infinite risk such a world poses.

(And that goes back to the question of what is a utopia, and what is so good about it? Immortality? Growing the population as large as possible? Total freedom from physical want? Some impossibly amazing state of mind that we speculate is simply better in every way? I'm not entirely an anti-utopian Luddite, I acknowledge that such things might be nice, but they're far from making up for the inherent risk posed if it were even possible to implement them via magical means.)

As a corollary, I'd feel much worse about an AI apocalypse through known means than an AI apocalypse through magical means, since the former would at least have been our own fault for not properly securing the means of mass destruction.

My problem is really with your "there never was, and never will be" sentiment: I believe that only holds under the premise of the universe containing future surprises. I believe in fates far worse than death, but thankfully, in the unsurprising world that is our present one, they can't really be implemented at any kind of scale. A surprising world would be bound by no such limitations.

I think I understand. You're saying that you don't feel compelled to invite dangerous new possibilities into our lives to make them meaningful or even good enough. I'm not clear on whether you're mad at the accelerationists for desiring radical change or for trying to achieve it.

In any case, I'm not an accelerationist, but I think we're in a fundamentally surprising world. On balance I wish we weren't, but imo it doesn't really matter how we wish the world was.