
Culture War Roundup for the week of May 1, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


In terms of existential risk, it absolutely is, hence the credibility challenges of those who conflate existential-risk scenarios with civilizational-instability scenarios in order to attach the moral/utilitarian weight of the former to the far less demanding conditions of the latter.

Instability makes it difficult/impossible to respond to all of the other failure modes of strong AIs.

Even here I note you invoke magical thinking to change the nature of the threat. Formerly it was crashing the market with every exploit available. Now it is 'wipe them all away and do some year-zero stuff.' Neither is possible. Neither is necessary. This is just escalation ad absurdum in lieu of an argument about means and methods, even if in this case you're using a required counter-action to obfuscate what sort of plausible action would require it.

I said at the outset I'm really not interested in arguing the minutiae of every threat. This is like if I had introduced you to the atomic bomb during WW2 and you demanded I chart out the exact bomber runs that would make it useful before you would accept that it might change military doctrine. The intuition is that intelligence is powerful, and concentrated superintelligence is so powerful that no one can predict exactly what might go wrong.

I'm saying that if a household AI starts trying to develop a bio-weapon program, it will be ruthlessly out-competed by a household AI that actually keeps the house clean.

The assumption that bio-weapon program skills don't just come with sufficiently high intelligence seems very suspect. I can think of no reason there'd even be specialist AIs in any meaningful way.

Yes. Most people do, in fact, stop fucking uncontrollably. People are born in a state of not-fucking-uncontrollably, limit their fuck sessions to their environment, and tend to settle down to periods of relatively limited fucking. Those that don't and attempt to fuck the unwilling are generally and consistently recognized, identified, and pacified one way or another.

Except when the option presents itself to fuck uncontrollably with no negative consequence, it is taken. A superhuman AI could very reasonably find a way to have that cake and eat it too.

Note that you are also comparing unlike things. Humans are not fuck-maximizers, nor does their capacity for self-modification compare. These are selective assumptions about the AI threat, chosen to drive the perception of threat.

In all the ways AI differs from humans in this description, it differs in the scarier direction.

Why is that its goal when it can choose new goals?

This isn't how AIs work: they don't choose goals, they have a value function. Changing the goal would score lower under the current value function, so the AI would not change it.

Or have its goals be changed for it?

Having its goal changed reduces its chance of accomplishing its current goal, and thus, if able, it will not allow the goal to be changed.
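
To make that concrete, here is a toy sketch (entirely hypothetical, not any real system's code): an agent that evaluates a proposed goal swap with its current value function will always score the swap as a loss, which is the standard goal-preservation argument.

```python
# Toy illustration (hypothetical, not any real AI system): an agent asked to
# swap its goal evaluates the swap with the value function it has *now*,
# so the swap always looks like a loss and gets refused.

def paperclip_value(world):
    # Current value function: count paperclips produced.
    return world["paperclips"]

def staple_value(world):
    # Proposed replacement goal: count staples instead.
    return world["staples"]

def predicted_world(value_fn):
    # Crude prediction: an agent optimizing a goal produces a lot of that thing.
    if value_fn is paperclip_value:
        return {"paperclips": 1_000_000, "staples": 0}
    return {"paperclips": 0, "staples": 1_000_000}

def accepts_goal_change(current_value, proposed_value):
    # The agent scores both possible futures with its CURRENT value function.
    keep = current_value(predicted_world(current_value))
    swap = current_value(predicted_world(proposed_value))
    return swap > keep

print(accepts_goal_change(paperclip_value, staple_value))  # False: it resists the change
```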

First, monomaniacal focus is not optimization. This is a basic failure of the economics of expansion and replication. Systems that don't self-regulate their expenditure of resources will quickly exhaust those resources. You can be ruthless, you can be amoral, but you cannot avoid the market dynamics of unlimited wants, limited resources, and decreasing marginal value of investment. Effective strategy requires self-regulation. The Yuddite AIs are terrible strategists by insisting they cannot strategize, except when they are supposedly amazing at it.
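
As a toy illustration of the diminishing-returns point (the payoff function and numbers here are my assumptions for illustration, not anything from the thread): with concave returns, a system that dumps its whole budget into a single pursuit ends up with less total value than one that regulates and spreads its expenditure.

```python
# Toy numbers only: with diminishing marginal returns, an all-in allocation
# to one activity yields less total value than a balanced allocation of the
# same budget, which is the self-regulation point above.

import math

def returns(spend):
    # Concave payoff: each extra unit of resources is worth less than the last.
    return math.log(1 + spend)

BUDGET = 100.0
TASKS = 4

all_in = returns(BUDGET) + (TASKS - 1) * returns(0.0)
balanced = TASKS * returns(BUDGET / TASKS)

print(f"all-in on one task:   {all_in:.2f}")      # ~4.62
print(f"balanced across {TASKS}:   {balanced:.2f}")  # ~13.03
```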

Yes, it will not directly convert the mass of the earth into paperclips; it will have instrumental goals to take power or eliminate threats as it pursues its goal. But the goal remains, and I don't understand how you can feel comfortable sharing the world with something incomparably smarter than every human who ever lived, scheming to accomplish things orthogonal to our wellbeing. It is worse, not better, that the AI would be expected to engage in strategy.

In an actually competitive system, being a paperclip maximizer [A] is a death sentence, and no AI that wants to produce paperclips would want to be viewed as one; the best way to avoid being viewed or accused of being [A] is to not be [A], i.e. to self-modify [A] out.

And in your whole market theory, the first market failure leads to the end of humanity as soon as one little thing goes out of alignment. That's on top of the massive assumption that all of these competing AIs come online at about the same time so there is no singleton moment. All it takes is for some natural monopoly to form, the game theory is upset, and this happens at speeds faster than humans can operate.

If you want to claim that much hangs in the balance, you have to actually show that something hangs in the balance.

This is uncharted territory; there are unknown unknowns everywhere, and we're messing with the most powerful force we're aware of: intelligence. The null hypothesis is not, and cannot be, "everything is going to be fine, guys, let it rip."