This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
What are the limits of the weak man?
Note: Although this post cites specific real-life examples, the discussion is intended to stay entirely at the meta level.
Scott Alexander's definition is apt to cite:
Also instructive is Bryan Caplan's gradation:
What Caplan is describing as "collective straw manning" seems to be a good scale for weakmanning's range. And lastly, consider also Julian Sanchez's disclaimer:
At least in theory, I think weakmanning should be avoided, but I struggle with exactly where to draw the line. If your goal is to avoid weakmanning, there are at least two axes you must consider:
- All the possible arguments for position X, ranked on a spectrum from least to most defensible.
- All the possible arguments for position X, ranked on a spectrum from least to most *representative* of believers in X.
Weakmanning is not much of an issue if you're arguing against a single individual, because they either endorse the particular arguments or they don't. You can't showcase the error of someone's ways by refuting arguments they never made.
But generally we argue over positions endorsed by many different people, each of whom may differ in which arguments they advance or prioritize, so what should count as "representative"?
For example, many people believe in the theory of evolution, but some believers do so under the erroneous belief that evolutionary change occurs within an individual organism's lifespan. [I know some smartass in the comments will pipe up about some endangered tropical beetle or whatever that does demonstrate "change-within-lifespan" evolutionary change. Just remember that this is not an object-level discussion.] If you use a crude heuristic and only poll relevant experts (e.g. biology professors), you're not likely to encounter many adherents of the "change-within-lifespan" argument, so this could be a decent filter for narrowing down what should count as "representative" for a given position. This is generally an effective tactic, since it helps you avoid prematurely declaring victory at Wrestlemania just because you trounced some toddlers at the playground.
But sometimes you get a crazy position believed by crazy people based on crazy arguments, with a relatively tiny minority within/adjacent to the community of believers aware of the problems and doing the Lord's work coming up with better arguments. InverseFlorida coined the term "sanewashing" to describe how the meaning of "defund the police" (DTP) shifted [TracingWoodgrains described the same dynamic with the gentrification of /r/antiwork. Credit also to him for most of the arborist-themed metaphor in this post.] to something much more neutered and, correspondingly, much more defensible:
Keep in mind that this is not an object-level discussion on the merits of DTP. Assume arguendo that the "sanewashed" arguments are much more defensible than the "crazy" ones they replaced. If you were to take a position against DTP by arguing against the now-obsolete arguments, one of the sanewashers would be technically correct in accusing you of weakmanning for daring to bring up that old story again. This fits the literal definition of weakmanning, after all.
As Sanchez noted above, for most people and most positions, intuition predates rationality. They stumble around in the dark looking for any sort of foothold, then work backwards to fill in the necessary arguments. The sanewashers and the crazies rely on each other. Without the sanitization from the hygiene-minded sanewashers, the position would lack the fortification required to avoid erosion; and without the crazy masses delivering the bodies and zeal, the position would fade into irrelevance. The specific ratio may vary, but this dynamic is present in some amount in any given position. You very likely have already experienced the embarrassment that comes from a compatriot, purportedly on your side, making an ass of both of youse with their nonsensical arguments.
If your ultimate goal is truth-seeking, weakmanning will distract you into hacking away at worthless twigs rather than striking at the core. But sometimes the goal isn't seeking truth on the specific position (either because it's irrelevant or already beyond reasonable dispute), and instead the relevant topic is the collective epistemological dynamics [I dare you to use this phrase at a dinner party without getting kicked out.]. InverseFlorida's insightful analysis would not have been possible without shining a spotlight on the putative crazies — the very definition of weakmanning, in other words.
Here's the point, at last. Normally, someone holding a belief for the wrong reasons is not enough to negate that belief. But wherever a sanewasher faction appears to be spending considerable effort cleaning up the mess their crazy neighbors keep leaving behind, it should prompt some suspicion about the belief, at least as a heuristic. Any honest and rational believer needs to grapple with explaining how the crazies all managed to be accidentally right despite being outfitted, by definition, with erroneous arguments. Such a scenario is so implausible that it warrants a curious inquiry into its origin.
It's possible that this inquiry unearths just another fun episode in the collective epistemological dynamics saga; it's also possible that the probe ends up exposing a structural flaw in the belief itself. In either case, a weakmanning objection here is made in bad faith and intended to obfuscate. Its only purpose is to get you to ignore the inconvenient and the annoying. You should pay no heed to this protest and continue deploying the magnifying glass; don't be afraid to focus the sun's infernal rays into a burning pyre of illumination. Can you think of any reasons not to?
A sleep-deprived rant that may be missing your argument:
I agree with this meta-post. But, in fairness, I suspect that the biggest intellectual deficit of the rationalist culture is precisely this preoccupation with the meta. Half a high school debate club, half a nerd cult of reason, with very little exposure to empirical matters and gratuitous amounts of speculative fiction plus a sprinkling of Talmudic education – this is our foundation. This hothouse allowed nuanced, hypothetical concepts to thrive, concepts which would not have lasted a day in a challenging object-level environment; indeed, some even became apex predators, like «infohazard». Or steelmanning. More terms and tools aren't always good; they can get reified and multiply confusion.
I don't think that arguments which get called out as «weakmanning» typically address a weak point, or a nonrepresentative point, per se. They address, and attack, what their speaker thinks the other party actually believes, on the basis of reasoning about that party's broader philosophy; specific illustrations are only for convenience. If that doesn't, at the moment, match the other party's consensus (however we establish it), that is immaterial. We can protest the bringing of subpar evidence for an assertion about such an essential belief, but it's important to know whether the assertion is meant to depend on the evidence presented.
How is a «weakman» different from «the bailey»? A «steelman» from «sanewashing» or «the motte»? Those are all loaded terms (sanewashing implies the insanity of the main body of belief), and some comments offer a sensible perspective, e.g. @DuplexFields here. But in real scenarios each politically significant group has a wide gamut of opinions (as in, comprehensive interpretations of what the group «is about»), from the most idiotically extremist and unsubstantiated to fruitlessly anodyne to plain dumb (with tiny pockets of well-reasoned extremists); it's an exercise in futility to classify whether a particular take X' falls within the normal range of the X-affiliated distribution of takes, or somewhat on its fringes (weakman), or just at the very edge of self-parody and not recognized as a legitimate variant by the consensus (strawman). (Language allows endless compositionality, so even a crazy rant can be construed as an inapt appeal to common sense.)
The futility goes deeper. Yes, there are ways for bizarre outliers to come to be that are basically unrelated to the position's essence, so strawmen do not tell us much about the big tent or the merits of its ideology, and in this vein it could seem meaningful to identify them. But I'd argue it doesn't even matter whether any such caricatures provably exist. So long as the position is well-documented enough, it is appropriate to discuss its implications even if no proponent (again, recognized as such by the group's consensus) is currently willing to bite the bullet. And in fact this method is constantly applied to non-mainstream views.
The obvious example: consider the beeline from HBD to eugenics to racism and fascism; this is not so much a matter of historical association as it is a recognition that these object-level views can be used to support illiberal policies; they don't have to, but it can work.
Consider, also, that merely a few months ago, back in the gentle age when plans like «inflating regulatory burden to obstruct AI research» were considered beyond the pale, Yudkowsky disavowed all violence in pursuit of AI risk reduction, and cranks like me were saying that utilitarianism plus the AI Doom doctrine implies that such disavowals cannot be trusted, and that in the end AI Doomers will be willing to embrace totalitarianism. Lo and behold, now he advocates for airstrikes and worse. It wasn't the threat model that changed, nor his philosophy; what changed was the Overton window, and the logic of his doctrine realigned his expressed views accordingly. Could Yudkowsky himself have predicted this?
I don't know. I don't even know if I could have predicted my own reaction to the invasion of Ukraine on February 23rd, 2022. But I was saying then that it probably wouldn't happen, because it was so absurd, and I mocked Western intelligence for playing up all the saber-rattling on the border; and many people who seemed to be on the same page as me then proceeded to enthusiastically support the Z operation, to the point of cheering for unhinged barbarism. It turned out that I was the fringe, not them; that the logic of our stated ideology, in a timeline like this, flows like that – just as predicted by our oh-so-despicable opponents. They were correct to interpret the outspoken outliers as signs of things to come; they were correct to dismiss me when I said that those strawmen or weakmen were not representative of the whole. I can steelman Russian Nationalism. I cannot redeem it.
I don't think people who can operate at many layers of abstraction are fools (and people who can't generally won't benefit from appeals to these epistemological categories). They can understand positions fine, and they can see the distribution of voices as well as anyone; pointing out to them that a specific voice is X deviations from the median on the axis of quality or popularity is not a very good use of time. The disagreement is mainly about how a given position, adopted by a given group, works out in reality: what it collapses into.