This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I read that recently. I was struck by how Asimov smuggled a change of rules through the series in a way I've rarely seen noted.
The book's narrative framing devices (the exposition characters) try to justify each shift as an unanticipated consequence, yet predictable outcome, of the established rules. Despite that initial setup, though, the series isn't actually structured as 'this is the natural conclusion of previous truths taken further.' Instead, there is a roughly mid-series switch in which the robots' behavior and the Three Laws go from being treated as a form of deontological ethics (the robot cannot allow X to happen) to utilitarian ethics (the robot not only gets to let X happen, but may carry out X itself, if it rationalizes X as producing greater utility).
It is not even that the meaning of various words in the Laws of Robotics is reinterpreted to allow different readings. It's that actual parts of the rules change without that change ever being acknowledged. This is how we go from the initial rules establishing the individual human as the unit of concern, to the end-series machines applying the rules only to humanity as a collective, in order to justify harming both collective and individual humans on utilitarian grounds. We also see changes in how the robots handle equivalent forms of harm - going from a robot self-destructing over the moral injury of being caught in a lie, to a chapter involving regulatory fraud, identity theft, and punching an agent provocateur in order to subvert democracy. (The robot is the good guy for doing this, of course.)
Even setting aside some of the silliness of the setting (no rival robot producers, no robot-on-robot conflict between rival human interests, no mandatory medical checkups for world leaders), and for all that the series tries to present itself as a building chain of conclusions rather than 'it all goes horribly wrong,' I found it more akin to 'well, this time it means this thing.'