Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.
Let's say that, for whatever reason, A wishes to publicly tweet some extreme hate speech about B. A wants the language to be effective, i.e. to convey as much hate to B as possible, but A also wants the language to be safe, i.e. to minimize, as far as possible, any legal risks and preferably any social risks to himself. These desiderata trade off against each other: the maximally effective language would be a "true threat", but that would be entirely unsafe, because true threats receive no free-speech protection.
What are some examples of language that A can use which best balances the competing desiderata of effectiveness and safety?
One idea that's occurred to me is language along the following lines: "If I'm crossing a bridge and see that B is drowning in the river, I will absolutely rescue B, but only after I've made sure every drowning cockroach within a 5-mile radius has been rescued, for though I value B's life, I value those of cockroaches more."
It seems clear that the language here would be highly effective (A is saying that the life of a merely hypothetical cockroach matters more to him than that of B). But it also seems reasonably safe, since A never expresses any wish for B to die; if anything, A says he will "absolutely rescue B", only after attending to the cockroaches first. Perhaps his priorities are screwed up, but it's difficult to imagine legal trouble for having screwed-up priorities.
Am I missing something here? Are there even better ways for A to publicly convey as much hate as possible to B without unduly exposing himself to legal and/or social risks?
You forget that the people interpreting whether you've broken any rules are people, not rule-enforcing automatons. They can see what you're trying to do. They could just as easily crack down on you more because they are annoyed by your cynical attempt at rules-lawyering.
I mean, that would be good news from my point of view! I hope it's clear that what's being aimed for here is not an actual "true threat" but something just as effective as one in terms of psychological impact on the recipient. After all, someone who actually intended to issue a true threat would simply choose the most direct language available ("I will do X unless you do/stop doing Y"). Such a person has no reason to care about "plausible deniability".
So if the people who interpret the rules can "see what I'm doing", they should rationally decline to "crack down on me" (at all, much less "more"), because they can see no "true threat" is intended, only a very hateful message.
(Think of it this way: when someone becomes notorious for some controversial political position, we often hear that they receive "death threats" in the mail or over the phone. We all know that the vast majority of people sending such messages have no intention of making good on their threats, yet by wording them in the outward form of a "true threat" they needlessly expose themselves to criminal prosecution, when all they really want is to say something very hateful, very violent to the recipient.)
I think the "cracking down" in this instance means social sanctions (moderators and other users), not legal sanctions. Your OP writes "legal and/or social risks" as if the two were interchangeable.
Your trick may work to evade the law, but the poster replying to you was saying it could invite even more social sanction. The less your messaging looks like a legally actionable threat, the more it looks like plain hate speech, which, as you correctly note, is clear to everyone involved.
I think something you might be missing -- or maybe I am -- is that moderation on most platforms doesn't protect hate speech. And posting hate speech is a big social risk everywhere, even where it isn't a legal risk.