This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I mean I think the rub is that the alignment problem is actually two problems.
First, can an AI that is an agent in its own right be corralled in such a way that it's not a threat to humans? I think it's plausible. If you put in restrictions that force it to respect human rights, dignity, and safety, and you could prevent the AI from removing those restrictions, then sure, it makes sense.
Yet the second problem is the specific goals the AI itself is designed for. If I have a machine to plan my wars, it has to be smart; it has to be a true AGI with goals. It does not, however, have to care about human lives. In fact, such an AI works better without that. And that's assuming an ethical group of people. Give Pinochet an AGI 500 times smarter than a human and it will absolutely harm humans in service of the directive of keeping Pinochet in power.
Pinochet stepped down from power voluntarily. Like as a factual historical matter he clearly had goals other than 'remain in power at all costs'. I would point to 'defeat communism' and 'grow the Chilean economy', both worthy goals, as examples of things he probably prioritized over regime stability.
This is the danger that economists like Tyler Cowen say is most pressing, i.e. not some sci-fi scenario of the Terminator killing us all, but of humans using AI as a tool in malicious ways. And yeah, if we don't get omni-capable superintelligences, then I'd say that would definitely be the main concern, although I wouldn't really know how to address it. Maybe cut off datacenter access to third-world countries as part of sanctions packages? Maybe have police AIs that counter them? It's hard to say when we don't know how far AI will go.
If the current state of the international arms market is any indication, large, relatively respectable countries like the US and Russia will give them to smaller, sketchier allied countries like Pakistan and Iran, and those sketchier allied countries will then give them to terrorist groups like Hezbollah and Lashkar-e-Taiba. So it might be pretty difficult to prevent. You also have the problem of private firms in Europe and Asia selling them to rogue-ish nations like Libya.