Culture War Roundup for the week of March 25, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I would classify myself as an AI anti-doomer. I think I recognize all the things you're pointing out, and maybe a few you haven't thought of. The question is, do the proponents of AI Doom offer a plausible path forward around these problems? It seems obvious to me that they do not, so what's the point of listening to them, rather than buying a few more poverty ponies and generally buckling up for the crash?

The thing that makes the path forward plausible is people acknowledging the problem and contributing to the solution, just like any other problem that requires group action.

I don't think you actually live your life this way. You're just choosing to do so in this case because it's more convenient / for the vibes.

Think of every disaster in history that was predicted. "We could prevent this disaster with group action, but I'm only an individual and not a group so I'm just going to relax." Is that really your outlook?

If there was an invading army coming in 5 years that could be beaten with group action or else we would all die, with nowhere to flee to, would you just relax for 5 years and then die? Even while watching others working on a defense? Are the sacrifices involved in you contributing to help with the problem in some small way really so extraordinary that you don't feel like making a token effort? Is the word 'altruism' such a turn-off to you? How about "honor" or "pride" or "loyalty to one's people"? How about "cowardice" or "weakling"? Do these words shift anything for you, regarding the vibes?

Edit: I'm not trying to be insulting, just trying to call attention to the nature of how vibes work.

People do pro-social things not just because of the fear of punishment for not doing them, but because they understand that they are contributing to a commons that benefits everyone, including themselves.

For the record, it wouldn't be that hard to solve this problem, if people wanted to. Alignment is pretty hard, but indefinitely delaying the day we all die via a monitoring regime wouldn't be that hard, and it would have other benefits, chiefly extending the period where you get to kick back and enjoy your life.

Question: Are there any problems in history that were solved by the actions of a group of people instead of one person acting unilaterally that you think were worth solving? What would you say to someone who took the same perspective that you are taking now regarding that problem?

And the "Are the sacrifices involved in you contributing to help with the problem in some small way really so extraordinary that you don't feel like making a token effort?" question is worth an answer to, I feel.

The thing that makes the path forward plausible is people acknowledging the problem and contributing to the solution, just like any other problem that requires group action.

I don't think the AI doomers have a solution, and I don't think their actions are contributing to a solution. I've seen no evidence that they're making any meaningful progress toward Alignment, and I'm fairly skeptical that "alignment" is even a coherent concept. I'm quite confident that Coherent Extrapolated Volition is nonsense. I don't agree that their efforts are productive. I don't even agree that they're better than nothing.

Think of every disaster in history that was predicted. "We could prevent this disaster with group action, but I'm only an individual and not a group so I'm just going to relax." Is that really your outlook?

I note that you haven't actually named relevant disasters. How about the Population Bomb? How about the inevitable collapse of Capitalism, to which Communism was the only possible solution? The war on poverty, the war on alcohol, the war on terror, the war on kulaks and wreckers, the war on sparrows? The large majority of disasters predicted throughout history have been mirages, and many "solutions" have straightforwardly made things worse.

It is not enough to predict dire outcomes. The problem you're trying to solve needs to be real, and the solution you're implementing needs to have evidence that it actually works. The AI doomers don't have that, and worse, the methods they're pursuing, stuff like CEV and "pivotal acts", are either fallacious or actively dangerous. The whole edifice is built on Utilitarianism run amok, on extrapolation and inference, and specifically formulated to be as resistant to skepticism as possible.

In any case, I'm not thinking as an individual. I am explicitly thinking as part of a group. It's just not your group.

If there was an invading army coming in 5 years that could be beaten with group action or else we would all die, with nowhere to flee to, would you just relax for 5 years and then die?

Hell no. But who's "we", kemosabe?

Even while watching others working on a defense? Are the sacrifices involved in you contributing to help with the problem in some small way really so extraordinary that you don't feel like making a token effort?

The "defense" appears to involve implementing totalitarian systems of control, a digital tyranny that is inescapable and unaccountable, with the doomers and their PMC patrons on top. This is necessary to prevent an entirely hypothetical problem so disastrous that we can't afford to ask for empirical verification that it exists. Also, we shouldn't actually expect verifiable progress or results, because it probably won't work anyway so any failure is what we've already been conditioned to expect. Meanwhile, the tyranny part works just fine, and is being implemented as we speak.

No thanks.

Is the word 'altruism' such a turn-off to you? How about "honor" or "pride" or "loyalty to one's people"? How about "cowardice" or "weakling"? Do these words shift anything for you, regarding the vibes?

I doubt that we share a common understanding of what these words mean, or what they imply. They do not shift "the vibe", because they have no leg to stand on. I don't believe in the hell you're peddling, so its horrors do not motivate me.

Question: Are there any problems in history that were solved by the actions of a group of people instead of one person acting unilaterally that you think were worth solving?

Sure, many of them. The Civil War seems like a reasonable example. But such problems are a minority of the perceived problems actually demanding group action.

And the "Are the sacrifices involved in you contributing to help with the problem in some small way really so extraordinary that you don't feel like making a token effort?" question is worth an answer to, I feel.

With no actual evidence that the problem exists, and no evidence that, if it does, they're actually contributing to a solution, it seems to me that the appropriate sacrifice is approximately zero.

So we have two questions, and we should probably focus on one.

  1. Is the problem real?
  2. Is there a way to contribute to a solution?

Let's focus on 1.

https://www.astralcodexten.com/p/the-phrase-no-evidence-is-a-red-flag

What do you mean "no actual evidence that the problem exists"? Do you think AI is going to get smarter and smarter and then stop before it gets dangerous?

"Suppose we get to the point where there’s an AI smart enough to do the same kind of work that humans do in making the AI smarter; it can tweak itself, it can do computer science, it can invent new algorithms. It can self-improve. What happens after that — does it become even smarter, see even more improvements, and rapidly gain capability up to some very high limit? Or does nothing much exciting happen?" (Yudkowsky)

Are you not familiar with the reasons people think this will happen? Are you familiar, but think the "base rate argument" against is overwhelming? I'm not saying the burden of proof falls on you or anything, I'm just trying to get a sense of where your position comes from. Is it just base-rate, outside-view stuff?

What do you mean "no actual evidence that the problem exists"? Do you think AI is going to get smarter and smarter and then stop before it gets dangerous?

It seems to me that there are three main variables in the standard AI arguments:

  • How quickly can iterative self-improvement add intelligence to a virtual agent? Usually this is described as "hard takeoff" or "soft takeoff", to which one might add "no takeoff" as a third option.
  • How does agency scale with intelligence? This is usually captured in the AI-boxing arguments, and generally the question of how much real-world power intelligence allows you to secure.
  • What does the tech-capability curve look like? This is addressed in arguments over whether the AI could generate superplagues, or self-propagating nanomachines that instantly kill everyone in the world in the same second, etc.

On all three of these points, we have little to no empirical evidence and so our axioms are straightforwardly dispositive. If you believe intelligence can be recursively scaled in an exponential fashion, that agency scales with intelligence in an exponential fashion, and that the tech-capability curve likewise scales in an exponential fashion, then AI is an existential threat. If you believe the opposite of these three axioms, then it is not. Neither answer appears to have a better claim to validity than the other.

My axioms are that all three seem likely to hit diminishing returns fairly quickly, because that is the pattern I observe in the operation of actual intelligence in the real world. Specifically, I note the many, many times that people have badly overestimated the impact of these three variables when it comes to human intelligence: overestimating the degree of control over outcomes that can be achieved through rational coordination and control of large, chaotic systems, as well as the revolutions that tech improvements can provide. Maybe this time will be different... or maybe it won't. Certainly it has not been proven that it will be different, nor has the argument even been strongly supported through empirical tests. I'm quite open to the idea that I could be wrong, but failing some empirical demonstration, the question then moves to "what can we do about it."

You're just comparing human intelligence against other human intelligence. What about comparing human intelligence vs animal intelligence, or human chess players vs computer chess players? Does that give you pause for thought at all?

For bullet point 2: If you'll forgive the analogy, it's like saying "humans are intelligent and we still screw up all the time so I'm not that concerned about (let's say) aliens that are more intelligent than us and don't have any ethics that we would recognize as ethics." You're imagining that the peak of all intelligence in any possible universe is a human with about 160 IQ. How could that be? What if humans didn't need to keep our skulls small in order to fit through the mother's hips?

For bullet point 1: I don't think you have a basis to say that intelligence can't build on itself exponentially. Humans can't engineer our own brains, except in fairly crude ways. If there was a human who could create copies of himself, using trial and error to toy around with their brains to get the best results, iterating over time, wouldn't you expect that to maybe be a different situation? Especially if the copies weren't limited by the size of the skull containing the brain and the mother's hips that the skull needs to fit through?

I also don't think it's required for the superintelligence to be able to come up with any super-nanotech or super-plague technology to beat us and replace us, although I expect it would. Humans aren't that formidable; superior tactics would win the day.

Bullet point 3 seems to imply that humans could never be much more advanced technologically than they are now, or that much more advanced technology wouldn't yield much in practical terms. Both of which are wrong from both an inside and an outside view, by common knowledge and by common sense.

You're just comparing human intelligence against other human intelligence.

Human intelligence, and super-human intelligence of the sort generated by coordination between many humans, sure. Those are the intelligences we have available to us to observe.

What about comparing human intelligence vs animal intelligence, or human chess players vs computer chess players? Does that give you pause for thought at all?

Sure, as a hypothetical, it's certainly a scary one. As I said, if all three are exponential, then AI very well may be an X-risk. I don't actually think all three are exponential, though, and there's no evidence to really decide the question either way.

For bullet point 1: I don't think you have a basis to say that intelligence can't build on itself exponentially.

And you don't have a basis to say that it can. We haven't actually demonstrated that it's even possible to build a general intelligence with our current or foreseeable tech base. Maybe we're close to accomplishing that, and maybe we're not, though I'll readily admit that we seem to be making reasonably good progress toward that goal of late.

If there was a human who could create copies of himself, using trial and error to toy around with their brains to get the best results, iterating over time, wouldn't you expect that to maybe be a different situation?

I think it's entirely possible that such a human would never find even a slight improvement, because the possibility space is simply too vast. Compare your model to one of a human having unlimited chances to guess a 64-character alphanumeric string. The standard assumption is that improving the human brain is simpler than guessing a 64-character alphanumeric string, but given that we've never actually done it, I'm completely baffled at where this assumption comes from in others. I certainly know where it came from when my younger self held it, though: I read a lot of sci-fi that made the idea seem super-cool, so I wanted to believe it was true, so I did.
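To put a rough number on that analogy (my own back-of-the-envelope arithmetic, assuming "alphanumeric" means the 62 symbols a-z, A-Z, 0-9, which the comparison above doesn't specify): a 64-character string has 62^64, roughly 5 x 10^114, possible values, and even an absurdly fast guesser never makes a dent in a space that size.

```python
# Rough scale of the 64-character alphanumeric guessing analogy.
# Assumption: "alphanumeric" = 62 symbols (a-z, A-Z, 0-9); the alphabet size is
# an illustrative choice, not something specified in the comment above.

ALPHABET_SIZE = 62
LENGTH = 64

search_space = ALPHABET_SIZE ** LENGTH          # total possible strings
guesses_per_year = 10**9 * 60 * 60 * 24 * 365   # a billion guesses per second, all year

years_to_exhaust = search_space // guesses_per_year

print(f"search space has {len(str(search_space))} digits (~10^{len(str(search_space)) - 1})")
print(f"exhausting it takes ~10^{len(str(years_to_exhaust)) - 1} years at 1e9 guesses/sec")
```

Whether the space of possible brain improvements is really this unstructured is, of course, exactly the point in dispute; the sketch only illustrates why "unlimited chances" alone doesn't buy much.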

As for a machine, I entirely understand the concept of a general intelligence undergoing recursive self-improvement. There's an actual concrete question there of how much room to grow there is between the minimum-viable and maximally-efficient code, and we don't know the answer to that question. Then there's the question of how much intelligence the maximally-efficient version provides, which we also don't know. Hard takeoff assumes that each improvement enables more improvements in an exponential fashion, but that's not actually how the world I observe works. All complex systems I observe involve low-hanging fruit, and diminishing returns once that fruit is exhausted.
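As a purely illustrative toy model of the difference between those two pictures (the growth rates and decay factor are invented for illustration, not estimates of anything real): if each round of self-improvement compounds at a fixed rate you get a takeoff-shaped curve, while if each round's gain shrinks as the low-hanging fruit is picked, the same process flattens out fast.

```python
# Toy model contrasting two assumptions about recursive self-improvement.
# Purely illustrative; the parameters are made up, not estimates of anything real.

def exponential_takeoff(steps: int, gain: float = 0.2) -> list[float]:
    """Each improvement multiplies capability by a fixed factor (hard-takeoff assumption)."""
    capability, history = 1.0, []
    for _ in range(steps):
        capability *= (1 + gain)
        history.append(capability)
    return history

def diminishing_returns(steps: int, gain: float = 0.2, decay: float = 0.6) -> list[float]:
    """Each improvement's gain shrinks as the low-hanging fruit is exhausted."""
    capability, history = 1.0, []
    for i in range(steps):
        capability *= (1 + gain * decay**i)
        history.append(capability)
    return history

print(exponential_takeoff(30)[-1])   # ~237x capability after 30 rounds
print(diminishing_returns(30)[-1])   # plateaus around ~1.6x
```

Which of those curves the real world follows is the empirical question the disagreement keeps circling back to.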

Bullet point 3 seems to imply that humans could never be much more advanced technologically than they are now, or that much more advanced technology wouldn't yield much in practical terms. Both of which are wrong from both an inside and an outside view, by common knowledge and by common sense.

I disagree. It seems obvious to me that the rate of technological progress has slowed significantly over my lifetime, and I think it reasonable to suppose that this trend is likely to continue into the future. I think it's at least possible that we are already pushing up against basic physical constraints. A lifetime of observing technological mirages like the battery and fusion power breakthroughs that have been ten years away for seventy years and counting indicates to me that some of these problems are legitimately hard, and that the future ahead of us isn't going to look like the steam > electricity > electronics > code ages we've enjoyed over the last few centuries. The developments are observably slowing down.

Maybe AGI will change that. Alternatively, maybe it won't. We don't actually know. It's easy to see ways that it could, given certain assumptions, but that is not proof that it will.

But let's say I concede all of the above: AGI is probably coming, and it's at least a plausible X-risk that we should be concerned about. What then?

I'd rather not move on to the second question until you've actually conceded the first question, instead of just "let's say".

I think it's entirely possible that such a human would never find even a slight improvement, because the possibility space is simply too vast.

But... the AI systems we have today are capable of finding large improvements through the same principle of trial and error. Your "absence of empirical evidence" has already failed. For that matter, evolution already found out how to improve the human brain with trial and error.

The claim that the third exponential is necessary rests on the idea that humanity could only be beaten by something much smarter than us if it had much more advanced technology AND that much more advanced technology will never come.

The first half of that is something that I could imagine an average joe assuming if he didn't think about it too much or if his denial-systems were active, but the second half is extremely fringe.

But... the AI systems we have today are capable of finding large improvements through the same principle of trial and error. Your "absence of empirical evidence" has already failed.

Large improvements in a human mind or in a human-equivalent AI mind? I'm pretty sure they haven't.

For that matter, evolution already found out how to improve the human brain with trial and error.

Sure. But your assumption is that there's lots of headroom for further improvements, and in point of fact evolution hasn't found those.

The claim that the third exponential is necessary rests on the idea that humanity could only be beaten by something much smarter than us if it had much more advanced technology AND that much more advanced technology will never come.

I highlight the third exponential because it underlies so many descriptions of the AI endgame. IIRC, Yudkowsky has publicly assigned a non-zero probability to the idea that an AI might be able to hack the universe itself exclusively through executing code within its own operating environment. I'm not arguing that a superintelligent AI can't beat humanity without an overwhelming tech advantage; maybe it can, maybe it can't, though I think our odds aren't uniformly terrible. I'm arguing that most AI doomer persuasion hinges on science-fiction scenarios that may not be physically possible, and some that almost certainly aren't physically possible.

I do not know whether much more advanced technology will come, and neither do you. I think that the more our reasoning is based on the imagination rather than the empirical, the less reliable it becomes. I observe that predictions about future technology are extremely unreliable, and do not see a reason why these particular predictions should be an exception. More generally, serious tech improvements appear to me to be dependent on our current vastly interconnected and highly complex global society maintaining its present state of relative peace and prosperity, and that seems unlikely to me.
