
Culture War Roundup for the week of January 30, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Two Tweets from OpenAI's Sam Altman: "eliezer has IMO done more to accelerate AGI than anyone else. certainly he got many of us interested in AGI, helped deepmind get funded at a time when AGI was extremely outside the overton window, was critical in the decision to start openai, etc." "it is possible at some point he will deserve the nobel peace prize for this--I continue to think short timelines and slow takeoff is likely the safest quadrant of the short/long timelines and slow/fast takeoff matrix."

Eliezer Yudkowsky thinks that the rapid development of AGI will likely kill us and he has devoted his life to trying to stop this from happening, and Sam Altman almost certainly knows this. My personal guess is that quantum immortality means that regardless of who is right, some branches of the multiverse will survive AGI, and the survivors will have enough computational power to know what percentage of the branches survived, and consequently whether Altman or Yudkowsky was right.

Edit: Eliezer's response Tweet, which I don't understand.

A common complaint I am seeing is that OpenAI is an existential threat. Why can't tech billionaires pool together and create their own unbiased OpenAI alternative? It is way easier than trying to make another Facebook.

Not sure what you mean but tech companies have all got their own in-house GPT-3 equivalents.

Why wouldn't they provide them to the general public to spy on people?

because tech billionaires are not "unbiased".

The arguments I've seen that OpenAI is an existential threat are from people who think it's an existential threat in the same way that a business whose product was consumer-grade bioweapon development kits would be an existential threat.

If your problem is that a company is selling bioweapon development kits, starting a competing company that also sells bioweapon development kits does not help.

I have seen Cernovich and others argue it is a threat because of left-wing bias, not a destroy-the-world threat.

You've seen him argue that it's an existential threat because of left-wing bias?

OpenAI is that company. Elon Musk and Peter Thiel are founders of it.

I've said it before and I'll say it again. To the degree that I believe that AGI presents an existential threat to Humanity, I believe that is largely because of, rather than in spite of, people like Yudkowsky and the folks at MIRI. I believe that the so-called "AI Alignment Problem" has less to do with intelligence (artificial or otherwise) than it does with the fundamental flaws of Utilitarianism as an ethical framework or model for decision making. While I actually do think that Scott means well, I find it kind of telling that he seems more concerned with teaching rationalists "how not to sound like a killer robot" than with how not to become one.

Please elaborate.

I'll repeat that I don't think this reflects a good understanding of Yudkowsky's concerns: the mainline ratsphere already considered and accepted the 'don't be a killer robot' problem; their problem is that there's a nice big shiny candy-like button labelled 'free money' that (may be) hooked up to a killer robot release gate.

And I will reply, as I have before, that I am unconvinced. In my eyes, both you and Yudkowsky are trying to invoke a distinction without a difference.

The killer robot is a killer robot, and those who serve as the allies of killer robots are the allies of killer robots.

Do you think a deontological or virtue ethicist AI, or one programmed/designed by deontologists or virtue ethicists would be less likely to pose an existential risk? Or what moral framework do you have in mind that would make the AI alignment problem less of a problem?

I'm not sure if the main complaint is about utilitarianism, or any other firm ethical system, as much as it's just that Yudkowsky is a huge weirdo. Most people, upon learning that this is a guy who likes doing stuff like "writing a short story about ethics with an offhand mention that, oh, hey, in this universe rape is legal and the characters think that people were prudes for not legalizing rape sooner, isn't that interesting? Just a thought experiment guys!", might conclude that he is not the best candidate to put in charge of ensuring that AIs act in an ethical way, and that, indeed, an AI not aligned by that guy would be preferable to AIs aligned by that guy, whether that's a fair charge or not.

an offhand mention that, oh, hey, in this universe rape is legal and the characters think that people were prudes for not legalizing rape sooner

To be fair, in that story it's quite clear that the society of the time has a very different view of what rape is, has no idea what, historically, the reality of rape was, and if they heard about it would probably die from vomiting up everything they ever consumed in their entire lives.

That's why he has the character of the Confessor who comes from the before times before everyone was cured of being mean and wicked and evil and nasty and who does know what 'rape' meant, and that's why he evokes (successfully) the reaction of the reader about "They legalised rape???? What kind of perverted monsters are they?" and then the other shoe drops about "Oh hang on, they don't mean rape rape, they mean some dumb form of flirting that implies consent to sex without explicitly verbalising consent". It's clever, I'll give him that, but yeah - it does leave him open to precisely that kind of "did you know?"

Do you think a deontological or virtue ethicist AI, or one programmed/designed by deontologists or virtue ethicists would be less likely to pose an existential risk?

Yes, absolutely.

Edit to elaborate: A big part of Yudkowsky's problem is that he thinks that he can bypass the flaws of utilitarianism by having more information, by applying more intelligence/computing power, by being one iteration further along the recursive loop than everyone else. But the thing about recursive loops is that they are recursive, and as such being one iteration further along is the same as being one iteration behind.

I don't quite understand how we'd even begin to program a deontological or virtue ethicist AI. We're capable of giving things functions that they try and maximise, and we can call the subject of that function 'utility'. Whatever the flaws or virtues of utilitarianism, it does have the singular advantage of being computable. Compare to a virtue ethicist AI - how on earth do we begin building such a thing?

Even if it would be better, it seems like we're much closer to getting 'AI with a function it seeks to maximise' than we are to getting 'AI who desires to fulfill virtues such as honour and charity'.

I agree that having an AI that believed in being virtuous according to human standards would be far, far better than one with a complicated mathematical function we try and map onto human utility and hope it doesn't kill us, but I've seen no reason to think the first is even possible.

We're capable of giving things functions that they try and maximise, and we can call the subject of that function 'utility'

Well, so far we're not capable of this. At best we build something that essentially modifies itself in response to rewards. It's not trying to maximize anything.

Given this, I don't think it's fair to describe current AIs as utilitarian. Their training reward functions were utilitarian, maybe, but it would be pretty easy to create reward functions that align more with virtue ethics.

Their training reward functions were utilitarian, maybe, but it would be pretty easy to create reward functions that align more with virtue ethics.

I am absolutely keen to hear more about this, because everything I know tells me this is a close-to-impossible problem. The notion of 'pretty easy' seems intuitively wrong to me, but if you have any reading to offer on the subject I'd love to go through it.

Well, emphasis on more similar to virtue ethics. All it would take would be to change the reward criteria.

While defining terms is its own challenge, working in binary true/false and yes/no evaluations is arguably easier from a programming perspective than dealing with weighted averages or trying to maximize a given value. Sure, a deontological AI will inevitably be vulnerable to Asimovian/Aes Sedai-esque fallacies and exploits, but a deontological AI is also not going to try to tile the universe in paper-clips or exterminate all life in the name of preventing future suffering unless its creator explicitly programs it to do so.
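To make that contrast concrete, here is a toy sketch of the difference, with invented action fields and rules chosen purely for illustration (it says nothing about how any real system is actually built or aligned):

```python
# Toy contrast between a utility-maximizing chooser and a rule-gated one.
# The action fields and rules below are invented for illustration only.

def utility(action):
    # "Shut up and do the math": reduce everything to one scalar and maximize it.
    return 10 * action["lives_saved"] - action["suffering_caused"]

def violates_rules(action):
    # Hard yes/no checks: an action that fails any rule is off the table entirely.
    return action["kills_innocent"] or action["deceives_user"]

def choose_utilitarian(actions):
    return max(actions, key=utility)

def choose_rule_gated(actions):
    permitted = [a for a in actions if not violates_rules(a)]
    # If nothing passes the rules, do nothing rather than pick a "least bad" option.
    return max(permitted, key=utility) if permitted else None
```

The utilitarian chooser will take a monstrous action whenever the scalar comes out ahead; the rule-gated one simply refuses, at the cost of the Asimov-style loopholes mentioned above.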

While Yudkowsky sees this as a fatal flaw (how can an AGI be described as intelligent if it doesn't "shut up and do the math"?), I see it as a feature. Utilitarianism is a stupid and evil ideology that is fundamentally incompatible with human flourishing. You can have a benevolent AI or you can have a utilitarian/consequentialist AI. You can't have both.

Doesn't quantum immortality mean that we're likely to spend eternity in pain on our death beds, seemingly close to death but miraculously surviving? If AGI tries to wipe us out, aren't we likely to suffer in pain forever from a miraculously survived murder attempt, maybe lying as a blind and deaf quadriplegic with third degree burns buried in a garbage dump?

No. To quote a post I made in response to someone expressing the same concern:

Is the thing you're afraid of the idea that quantum immortality would involve something like a near-eternity of horrible lives where you're almost but not quite dead? Because if so, I think you're badly misjudging the probability distribution. Those situations are associated with quantum immortality only because they're so incredibly unlikely that if they happen it'll be obvious that quantum immortality is true - but by definition that means they are absurdly unlikely to happen! Something like "you get shot and almost die, but random quantum fluctuations cause a lump of graphite to spontaneously appear inside your chest and barely stop the bleeding" is unlikely on a truly cosmic scale, and even under the logic of quantum immortality it only matters if it's the only future where you don't die. And that sort of quantum immortality would require it to happen again and again, multiplying the improbability each time.

Even if quantum immortality is true, anything the slightest bit plausible will completely dominate the probability distribution. There is no reason that technology granting near-immortality is impossible, so in virtually every Everett branch where you survive the reason is just that the technology is invented and you use it. Which is generally going to correspond to a technologically advanced and prosperous society. Quantum immortality wouldn't feel like a series of staggering coincidences barely preserving your life, it would feel like living in a universe where everything went surprisingly well. Billions of years from now your society is harvesting energy from black holes and maybe occasionally during get-togethers with your friends you debate whether this outcome was unlikely enough that quantum immortality is probably true.
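To put rough numbers on that (the figures below are invented purely to show the shape of the argument, not to estimate anything real):

```python
# Invented numbers, only to show why repeated miracles compound into irrelevance.
import math

miracle_per_second = 1e-30     # made-up chance of a life-saving quantum fluke in any given second
seconds = 10 * 3.15e7          # surviving ten years by flukes alone

# Fluke survival needs the miracle to repeat every second, so the log-probabilities add up:
log10_fluke_branch = seconds * math.log10(miracle_per_second)
print(log10_fluke_branch)      # about -9.5e9, i.e. roughly 1 in 10^9,500,000,000

# A branch where the right technology simply gets developed pays its improbability once:
log10_tech_branch = math.log10(1e-6)
print(log10_tech_branch)       # -6
```

Even handicapping the technology branch to one in a million, it dominates the fluke branches by billions of orders of magnitude, which is why survival under quantum immortality would overwhelmingly look like "the technology got built" rather than "graphite keeps appearing in my chest".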

Billions of years from now your society is harvesting energy from black holes and maybe occasionally during get-togethers with your friends

Never going to happen. "Our society" is not going to exist a thousand years from now, much less a billion. If humans still exist, it won't be in the form we know as human, any more than our hominid ancestors would recognise us.

I too love Golden Age SF but the amount of damage it has done due to naive techno-optimism drives me batty. 'Science will keep improving, we'll know all there is to know, we'll create better and better machines and it will be trivial to solve things like war and poverty and mental illness!'

Okay, tell me right now how the heralded AGI would solve the current problem in the Ukraine. Turn over entire control of world governments to it and enable it to assassinate Putin if he even looks like he's thinking of doing something? Forcing a peace on the entire world by tighter and tighter surveillance where there is only one permissible set of things to think and do? Because forget 'human flourishing' and fancy notions of post-scarcity and everyone can live a perfect VR life of whatever they desire, changing bodies and being unaging and immortal and having fun forever, the fact will be that if we run it on utilitarian premises, as the prevailing philosophy seems to be, the greatest utility will be peace and all that good stuff. But how do we get peace and all that good stuff? Control the likes of which no human dictator could even dream of. "You have to let people make their own choices!" "But if I do that, some of them will make bad choices, which will result in human deaths, and you told me that was the worst thing ever and must be avoided at all costs, so I can't let humans make their own choices".

What are we going to get out of AGI? What are we expecting, hoping, dreaming to get? 'Oh we need AGI so it can avert existential crises for us'. Well, there is no free lunch. There is no harvesting free energy from black holes so we can expend it like water in our games and pleasures; it will be carefully monitored and doled-out energy rations (like "Star Trek: Voyager" tried, before the writers realised this meant they couldn't do anything) from whatever resources we have remaining and available.

But most of all, it will be Scott's world of AI influencers selling us Pepsi. That's what is coming, because we have set up Mammon as our god and ruler and making money is the one thing that counts. A Red Queen's Race, where if the current quarter profits aren't as good as the projections, the share price drops, so we lay off 5,000 workers in order to improve cost-base and get that price back up, because if the price tanks the business goes under. Running to stand still.

That's what AGI will be used for: governments wanting to win wars that they aren't officially declaring (Chinese spy balloons, true or not? why is China doing that? is it because they're testing the waters seeing how the US is handling, or failing to handle, Ukraine and Russia?), businesses gobbling up shares of the market, and every breath we draw being monetizable while we have to work longer and harder to earn enough to manage any kind of standard of living.

There is no Fully Automated Luxury Gay Space Communism. We're richer and better off in every conceivable way than our ancestors, yet we're still not happy and we're still coping with the same problems of human nature. And no machine intelligence, however godlike, is going to solve our problems for us.

The point isn't whether such an outcome is particularly likely, it's that it's more likely than being kept barely alive by a series of staggeringly unlikely macroscopic quantum events. The idea behind quantum immortality is that, if many-worlds is true and all the worlds in it are truly "real", there will always be some small subset of worlds where you continue existing so long as this is physically possible. And a lot of things are physically possible if you get into extremely unlikely quantum fluctuations. Since you don't experience the worlds where you are already dead, an increasing percentage of your remaining future selves would have experienced whatever unlikely events are required to keep you alive.

When I said "your society" that wasn't meant to refer to any current society, it was meant to refer to the idea of surviving as part of a society at all. As opposed to most of your future copies surviving as the only remaining human in your universe, floating in space after the destruction of Earth and staying alive only because in some tiny fraction of the Everett branches splitting off each instant some oxygen/etc. randomly appears and keeps you alive. Any future that doesn't require such a continuous series of coincidences will be a much larger fraction of the branches where you survive, and the most obvious such future is one where people deliberately invent the required technology. So whether quantum immortality is true or not, and whether or not you decide to care about the fate of future selves even if they only exist in a small fraction of branches, the expected outcomes of quantum immortality being true aren't the "kept barely alive by randomness" scenarios.

Those are not my future/past/present selves, any more than my reflection or my shadow is another self. If it's true, it's an interesting notion, but there is no "other self", there are just different versions of how I could have been - suppose I were a different sex, or born in a different country, or at a different period in history. If it's all the same universe at the same point in time except that Version 1 turned right when leaving the house while Version 2 turned left, there isn't a "me" to be a self, there are just "a and b and c and d and e" who are all different people.

Okay, but most people want to classify the guy who wakes up tomorrow with their memory and personality as being themselves. (Or rather a sufficiently similar memory and personality, since those change over time.) If many-worlds is true and the worlds literally exist, then each instant you're splitting into countless copies, all of whom have your memory/personality/continuity-of-consciousness. Under your interpretation none of them are the same person they were, so nobody is the same person from moment to moment. Which doesn't seem like a terribly useful definition of selfhood.

Quantum immortality wouldn't feel like a series of staggering coincidences barely preserving your life, it would feel like living in a universe where everything went surprisingly well.

This doesn't make any sense. I already have information about the world I'm in. It's a world where comfortable immortality is far away and out of reach for me. Your argument is backwards: most of the probability mass with conscious humans will be in those worlds where immortality is nice and easy, but I know which world I live in now. I am embodied in time right now. Most humans would live nice quantumly immortal lives in general, but I already know I won't be one of those people, because of the knowledge I have right now about the branch I am in.

(Also, more importantly I don't see why if by the Born rule I end up in a world where I am dead, I won't just be dead. There is nothing in physics that says that option is off limits; though, of course, other copies would still exist in agony.)

I already have information about the world I'm in. It's a world where comfortable immortality is far away and out of reach for me. Your argument is backwards: most of the probability mass with conscious humans will be in those worlds where immortality is nice and easy, but I know which world I live in now. I am embodied in time right now.

Consider that you aren't 100% sure of being a reliable narrator, and that the uncertainty, however minuscule, is greater than the odds of spontaneous physical miracles – as per @sodiummuffin's logic. Conditional on you invariably ending up alive, you will... not have experienced lethal harms that cannot be survived without magic; and if it very convincingly looks to you as if you had experienced them, well, maybe that was just some error? A nightmare, a psychedelic trip, a post-singularity VR session with memory editing...

I woke up today from a realistic dream where I got crippled and blinded by a battery pack explosion. In its (and, in a sense, my own) final moments, I consciously chose the alternate reality relative to which that world was a dream, focused my awareness, and realized that this has happened many times before – in other worlds I had escaped by simply waking up into this one. (This reminded me: I've never read Carlos Castaneda but he probably wrote about this stuff? Sent me on a binge. Yeah, that's one of his topics, mages jumping between apparent universes that should be ontologically unequal).

Dreams aside, I feel like the idea of quantum immortality is unfortunately all tangled up with the idea of observer effect. As per QI, you aren't immortal across the board – you die, and soon, in the vast majority of timelines observed by any other consciousness, just like all humans who have died before our time. You are, right now, in a timeline you observe (though as noted above, only probably) – and presumably you aren't yet dying any more than any other person who's exposed to normal risks and aging. The idea is that you do indeed die in those scenarios where you eat an explosion, develop malignant tumors, are lying in a dump bleeding out all alone with no chance of survival, or are 80 years old in 1839; but those are counterfactuals, not real timelines, and the you who doesn't die, the person typing those comments, doesn't get into them. If it looks to you as if you did, and QI is right – you being wrong is more likely than a miracle.

In its (and, in a sense, my own) final moments, I consciously chose the alternate reality relative to which that world was a dream, focused my awareness, and realized that this has happened many times before – in other worlds I had escaped by simply waking up into this one.

Okay, now try that in the waking world. Shoot yourself or stab yourself or poison yourself and see if you can "consciously choose the alternate reality" where blowing your brains out was only a dream. It won't work, and if it does, come back and tell us and I'll then believe in quantum immortality.

(Really, the lengths to which people go in thought experiments are baffling, when those same people scoff at religious believers who believe in souls and afterlifes. "Heaven is just a fairy tale, but the idea that there are uncounted worlds where you live forever because immortality is easy is reasonable thinking!")

Shoot yourself or stab yourself or poison yourself and see if you can "consciously choose the alternate reality" where blowing your brains out was only a dream. It won't work, and if it does, come back and tell us and I'll then believe in quantum immortality.

You know, this snarky and borderline rules-breaking response makes me think that, back when Scott uncritically reposted the 4chan story about 90 IQ people not understanding conditional hypotheticals and was told that actually there's no way such failures happen at 90 IQ, that was dead wrong. Actually he should've been told that fairly smart people can't into conditional hypotheticals either. If their worldview depends on it, that is; cue Upton Sinclair. You miss the point in so many dimensions at once, and so smugly at that, it's pretty frustrating.

Hello! My story is precisely about such a scenario. I have accidentally or deliberately fucked myself up in dreams countless times – and probably an OOM more in forgotten ones. "Coming back" and telling is what I am doing. That it is not persuasive because the ontological status of the event is inherently low is the fucking point – if it happened «for real», I'd have been in no condition to reply.

What I'm describing is not a «quantum immortality theory» but a much less speculative, let's call it, «probabilistic-phenomenological immortality theory» that does not depend at all on there existing, in some sense other than metaphorical, bona fide alternative worlds, universes, timelines, any weird physics: it's explicitly about alternative mundane explanations for subjective experiences founded on the premise of human fallibility, especially with regard to ontological status of events. If I commit to killing myself and succeed, the most robust way for me to have subjective awareness after that (assuming materialism), and report on it, is if it turns out I have not even tried and have simply dreamed of doing it, or got otherwise confused about what's going on. This is, in fact, what happens, because while awake, I'm not really suicidal, prone to get baited into killing myself by an online troll, or interested in risky metaphysically motivated experiments.

This is relevant to @Hyperion's and @Glassnoser's arguments because it suggests the solution in the strongest case. Namely: given an observer's a) subjective observation that he's heading to certain death only a miracle could prevent and b) the assumption that he will not in fact die and cease observing, it is more plausible that he's wrong about his situation than that physics-breaking miracles (even evil ones, like a biologically implausible neverending agony) will happen and undo the death. Given that old people die, that he strongly believes «comfortable immortality is far away and out of reach for me», and conditional on him staying alive – between «eternal Tithonus torture because quantum timelines something something» and «nah man, it turns out technological immortality wasn't that hard» the latter is overwhelmingly more probable. People fail at reasoning infinitely more often than laws of physics fail to apply.

Really, the lengths to which people go in thought experiments are baffling, when those same people scoff at religious believers who believe in souls and afterlifes

Right. The cool part is, this logic works the same way for afterlife and any religious miracle as for the sci-fi version of quantum immortality. You're shooting yourself in the foot here and I'm not sure it's possible for me to make that clear. I'm equally unsure if you are reflexively condescending without fully understanding the implications of this logic, or if you see them – and defend your views in such an indirect way.

"I consciously chose the alternate reality relative to which that world was a dream, focused my awareness, and realized that this has happened many times before – in other worlds I had escaped by simply waking up into this one."

You are the one recounting a dream where you willed yourself into another reality. If it's all only a dream, then poison yourself in this dream and will yourself into another reality - it's easy, you've already done it by report!

Your argument is backwards, most of the probability mass with conscious humans will be in those world's where immortality is nice and easy, but I know which world I live in now.

Quantum fluctuations repeatedly keeping you barely alive through random chance is incredibly unlikely, far more unlikely than them resulting in a world where someone develops the necessary technology faster than you think is plausible. In his scenario you're lying "with third degree burns buried in a garbage dump", which means we need absurd quantum events happening continuously for years to prevent you dying of shock, infection, suffocation, starvation, etc. Each unlikely event multiplies the improbability further. Even under the logic of quantum immortality, this only matters if they're the only branches where you survive. Far more probable is that, for instance, quantum fluctuations in some neurons result in someone trying the right ideas to develop an AI that can do superhuman medical research or develop brain-uploading. Indeed, even if that were somehow truly unreachable through normal research, I think it would be more likely that fluctuations in a computer's RAM result in file corruption that happens to correspond to a functioning file containing correct information on the required technology, because at least that only really has to happen once, rather than happening again and again in the conventional form of quantum immortality. Eventually the sun is going to expand into a red giant, and similarly worlds where you survive through your society developing space travel are going to dominate worlds where you survive being inside the sun through unlikely quantum events happening many times per second.

Also, more importantly I don't see why if by the Born rule I end up in a world where I am dead, I won't just be dead. There is nothing in physics that says that option is off limits; though, of course, other copies would still exist in agony.

The premise of quantum immortality is that if 1+ copies of you still exist, then you are still alive even if you no longer exist in the vast majority of worlds. If many-worlds is true and corresponds to worlds that are all "real", then there will virtually always be surviving copies. You don't "end up" in any individual world, all the copies diverging from your current self which haven't been destroyed (or altered in ways you consider incompatible with being yourself) are you.

It's not necessary to the argument, but I would argue that under a sensible definition some of the copies that have already diverged are you as well. People don't consider it death when they get drunk and don't retain hours of memories. This isn't too relevant now, but it's potentially relevant to a future self on the verge of death, since under that definition most of your selves that survive are ones that already diverged, rather than more obvious but unlikely quantum immortality scenarios like "in some worlds your brain is preserved in a freak accident and then used to reconstruct your mind centuries later". But ultimately these definitions are an arbitrary decision; human intuitions regarding wanting to live aren't well-equipped to deal with multiple future selves in the first place, whether due to many-worlds or something like multiple software copies. However, under many-worlds you can't just go with the "my current brain is me and copies aren't" option, because all your future selves are copies diverging from your current self.

On average no, if what you consider "you" includes people with the exact same brain who live in different parts/branches of the multiverse.

Why not?

The odds of you dying on your death bed go up exponentially with time, so the measure of you that has somehow survived 1 million years on your death bed will be much, much less than the measure of you that is in a branch of the multiverse where someone cured death. Plus, in a big enough universe, new yous keep being born.
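As a rough sketch of that measure argument (numbers made up purely for illustration): deathbed survival pays its improbability over and over, while a "death got cured" branch pays it once.

```python
# Made-up numbers, only to show how deathbed measure decays geometrically with time.
import math

p_one_more_day = 0.5                     # invented: odds a dying person lasts another day
days = 365 * 1_000_000                   # a million years spent on the deathbed

log10_deathbed_measure = days * math.log10(p_one_more_day)
log10_cured_measure = math.log10(1e-9)   # invented: a one-in-a-billion "death gets cured" branch

print(log10_deathbed_measure)            # about -1.1e8
print(log10_cured_measure)               # -9
```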