Culture War Roundup for the week of November 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Yes, but you're assuming there are a lot more, even more dangerous, things "out there" for a smarter entity to discover.

I repeat that, while I think this is true, it's still not necessary for a genius AI to be an existential risk. I've already explained why multiple times.

Nukes? They exist.

Pandemics? They exist. Can they be made more dangerous? Yes. Are humans already making them more dangerous for no good reason? Yes.

Automation? Well underway.

On our first day of physics lab classes at Caltech, the instructor told us that it didn't matter how many digits of pi we'd all memorized (quite a few): just use 3.14, or a scientific calculator's pi key, whichever was faster, because any rounding error would be swamped by the measurement error in our instruments.

When it comes to modeling the physical world, sure, going from knowing, say, Planck's constant to two decimal places to knowing it to three decimal places will probably net you a bunch of improvements. But then going from, say, ten decimal places to eleven, or even ten decimal places to twenty, almost certainly won't net the same level of improvement.
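To put rough numbers on that, here's a minimal sketch; the pendulum setup, the 1 m length, and the 1% instrument uncertainty are my own illustrative assumptions, not figures from that lab:

```python
import math

# Illustrative example (assumed, not the actual Caltech lab):
# a pendulum period T = 2*pi*sqrt(L/g), with ~1% uncertainty in the length measurement.
L = 1.00   # measured length in metres (assumed)
g = 9.81   # m/s^2

T_rounded = 2 * 3.14 * math.sqrt(L / g)      # pi truncated to 3.14
T_full    = 2 * math.pi * math.sqrt(L / g)   # full-precision pi

rounding_error = abs(T_full - T_rounded) / T_full   # ~5e-4
measurement_error = 0.01                            # ~1% from the metre stick (assumed)

print(f"error from using 3.14:     {rounding_error:.2%}")
print(f"error from the instrument: {measurement_error:.2%}")
# The instrument error is roughly 20x larger, so extra digits of pi buy you nothing here.
```

Tighter instruments move the crossover point, but the shape of the diminishing-returns argument stays the same.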

I do not think that the benefits of additional intelligence, as seen even in human physicists, are well captured by this analogy. The relevant comparison would be Newtonian physics versus GR, and then QM. In the domains where such nuance becomes relevant, the benefits are grossly superior.

For starters, while the Standard Model is great, it still isn't capable of conclusively explaining most of the mass or energy in the universe. Not to mention that even if we have equations for the fundamental processes, there are bazillions of higher-order concerns that are intractable to simulate from first principles.

AlphaFold didn't massively outpace SOTA on protein folding by using QM on a molecule-by-molecule basis. It found smarter heuristics, and that's also something intelligence is indispensable for. I see no reason why a human can't be perfectly modeled using QM; it is simply a computationally intractable problem even for a single cell within us.

In other words, knowing the underlying rules of a complex system != knowing all the potential implications or applications. You can't just memorize the rules of chess and then declare it a solved problem.
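To put rough numbers on that gap, here's a back-of-the-envelope sketch; the branching factor, game length, and particle counts below are standard order-of-magnitude figures I'm assuming for illustration:

```python
# Order-of-magnitude sketch: knowing the rules != solving the system.
# All figures are rough, commonly cited estimates, assumed here for illustration.

# Chess: roughly 35 legal moves per position over a ~80-ply game gives a game tree
# far too large for exhaustive search, even though the rules fit on one page.
branching_factor = 35
plies = 80
game_tree = branching_factor ** plies
print(f"chess game tree ~ 10^{len(str(game_tree)) - 1} positions")

# Quantum simulation: the state space of n two-level systems has dimension 2^n,
# so simulating even a single cell "from first principles" blows up exponentially.
for n in (10, 100, 1_000):
    print(f"{n} particles -> state space dimension 2^{n} (~10^{int(n * 0.30103)})")
```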

That it's unsupported extrapolation to reason 'smart=nukes, therefore super-smart=super-nukes and mega-smart=mega-nukes.'

I'm sure there are people who might make such a claim. I'm not one of them, and like I said, it's not load-bearing. Nukes alone are sufficient, really. Certainly in combination with automation, so that the absence of those pesky humans running the machines isn't a problem.

I want you to at least consider, just for a moment, the idea that maybe we humans, with our "1.4 kg brain[s] in a small cranium," may have a good enough understanding of reality, and of each other, that a being with "900 more IQ points" won't find much room to improve on it.

I have considered it, at least to my satisfaction, and I consider it to be exceedingly unlikely. Increases in intelligence, even within the minuscule absolute variation seen within humans, are enormously powerful. There seems to be little to nothing in the way of further scaling in the context of inhuman entities that are not constrained by the same biological limitations in size, volume, speed or energy. They already match or exceed the average human in most cognitive tasks, and even if returns from further increases in intelligence diminish grossly or become asymptotic, I am the furthest from convinced that stage will be reached within spitting distance of the best of humanity, or that such an entity won't be enormously powerful and capable of exterminating us if it wishes to do so.

Nukes? They exist.

Pandemics? They exist. Can they be made more dangerous? Yes. Are humans already making them more dangerous for no good reason? Yes.

But I don't see why super-intelligent AI will somehow make these vastly more dangerous, simply by being vastly smarter.

I consider it to be exceedingly unlikely.

Based on what evidence?

There seems to be little to nothing in the way of further scaling in the context of inhuman entities that are not constrained by the same biological limitations in size, volume, speed or energy.

Again, I agree, but again, further scaling in intelligence ≠ further scaling in power.

They already match or exceed the average human in most cognitive tasks

Again, so what? What part of "greater ability in cognitive tasks" ≠ "greater power over the material world" are you not getting, beyond, apparently, your need for it to be so because you've tied so much of your own ego to your own higher-than-average intelligence?

I am the furthest from convinced that stage will be reached within spitting distance of the best of humanity.

Based on what evidence?

The relevant comparison would be Newtonian physics versus GR, and then QM. In the domains where such nuance becomes relevant, the benefits are grossly superior.

My degree is in physics. Yes, there are problems with the Standard Model. But there's no guarantee that whatever might replace it will be anywhere near as revolutionary as those previous changes, particularly when it comes to practical effects.

Maybe I've just listened to Eric Weinstein go on about needing to put vast amounts of funding into physics too many times, because he never stops to consider that the "revolutionary new physics" we "need" to become "interplanetary" just isn't there. And then what?

Do me a favor: while I'm perfectly happy to address your points, the amount of effort it would take exceeds what I'm willing to spend for the sake of just one person, or the handful who are still reading week-old threads this deep. I suggest you make a new top-level post in the new thread, where I will happily continue the debate.

I'll check back shortly after I'm done studying. You're welcome to either link or post excerpts from my comments, or summarize them as you see fit; I see no particular reason to think you'd twist them in bad faith.

If you want, I can do the same myself, but like I said, I really should be studying, if only till the Ritalin wears off haha.