This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I think that pure altruism is only impossible under the one definition that also renders "selfless behaviour" trivially impossible. You choose your own actions, so you naturally choose the ones you like the best, even if what you like the best is something like "to mistreat myself for the sake of others".
But let's talk psychology: Our mental states exist in a high-dimensional space, and one of the dimensions seems to be poverty-abundance. This is easy to miss if you haven't experienced the extremes of both. Do you know insecure people who are like black holes for compliments, affection and reassurance? That's the minus pole. But the plus pole also exists - a state where it becomes uncomfortable not to give. You'll usually have to be on drugs or to spend years doing spiritual practices in order to reach this state, but it's very real and basically a pure altruistic mindset.
And I have a reason to think that this behaviour will not disappear: it benefits the individual, even when they do not do it for the sake of benefits. Positive states of being are psychologically healthy for the same reason that negative states of being are associated with dying earlier. And while altruism seems dangerous in that it's reverse eugenics, you can only be altruistic by improving yourself to be more than self-sufficient, so it's a kind of inverse parasitism.
On a negative note: I think it's literally impossible to protect society against exploitation without ruining every pleasant part of it. What follows from this is scary: Rules are bad. They're literally symptoms of problems rather than solutions to them. You cannot fix every loophole - you can only get rid of the type of person who would exploit a loophole.
Actually, I just realized a series of things: the way we're trying to reduce suffering is destroying all human experience. The way we're trying to minimize crime will result in the minimization of human freedom. The way we're trying to model everything is destroying all mystery and wonder. Our attempts to reduce mistakes to zero are reducing meaningful actions taken to zero. You cannot solve all problems without killing all innovation. You cannot destroy competition without destroying growth. Many things are rotting because we refuse to let them die. I believe these belong in the category of 'Complementarity Principles'.
Closing loopholes affects bad actors on the margin. Yes, someone who's sufficiently determined can find loopholes in almost anything. But it can be easier or harder to find loopholes, and it can be easier or harder to get those loopholes past that subset of judges who are actually fair.
The Second Amendment has probably done quite a bit for the right to bear arms, even though state and federal governments constantly find loopholes to work around it.
Closing loopholes affects everyone, not just bad actors. You degrade the whole system in order to harm a subset. Every rule and regulation does this, at least when the rules only place new restrictions. I'm not sure about the effect of restrictions which limit other people's ability to place restrictions; it's harder to solve the general case of that question.
Rules tend to limit things to the lowest common denominator; this doesn't just protect those below, it also harms those above. We're also part of a dynamic system, and these tend to balance themselves: if you find a way to make X half as dangerous, then people tend to be half as careful when they do X, and then you're back where you started. This "you're back where you started" effect seems to explain why centuries of introducing new rules haven't gotten us anywhere. We made laws in the 1500s to combat theft, and even today we're making new laws to combat theft. I think it's safe to conclude that laws do not work, and that further laws also won't work.
I recommend an entirely new way of looking at these issues. Some rules are better than others, but I think we should view them from a different perspective, one so different that our current perspective stops making sense. I like this quote by Taleb:
"I am, at the Fed level, libertarian;
at the state level, Republican;
at the local level, Democrat;
and at the family and friends level, a socialist"
If the optimal level of trust in other people is inversely proportional to the size of the system, then the optimal system is different at every scale. And one way you can lessen restrictions is through decentralization (running many small systems in parallel). This is merely my own answer to the question, but it seems correct. After all, the number of rules a system has (and perhaps needs) seems to depend on its size. Family members don't usually make rules for one another. This also explains why Reddit got worse as it grew larger, until themotte had to move to its own website. And this website is largely independent from larger systems (decentralized). If it grew in size and popularity by a factor of 10 or 100, it would either need more rules and regulations or be shut down.
Of course, this mathematical property is not set in stone: 4chan had few rules for its size at every stage of its growth. This is either because 4chan users are more tolerant of the tradeoffs of freedom, or because the social power of moral arguments was smaller on 4chan (fewer moralizers = fewer people suggesting that you ruin everything for everyone to prevent some kind of abuse).
These websites are merely examples; I'm trying to solve (or model, since no solution seems to exist) the most general case of imposing restrictions on behaviour in order to prevent exploitation of a system. My conclusions so far are "there are only trade-offs" and "what systems are possible depends just as much on the people inside those systems as it does on their design".
mmmh!! top points, thanks