
Culture War Roundup for the week of March 3, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


There's been a fair amount of discussion of America's military aid to Ukraine, and no few condemnations of those of us who have opposed that aid. I am informed, in fact, that this forum is overrun with Russian Propaganda, such that some no longer wish to participate. This is lamentable if true, so I thought it might help to prompt some elaboration of the pro-Ukraine case.

People who support aid to Ukraine, in whatever form: suppose that you personally are given complete control over the US government and can set policy however you wish. What would your answers be to the following questions?

  • How much aid would you provide? Weapons? Money? No-Fly Zone? Air support? Troops on the ground? Nuclear umbrella? Something else?

  • What is the end-state your policy is aiming for? A ceasefire? Deter subsequent Russian invasion? Restoration of Ukraine's original borders? The Russian army destroyed? Putin deposed? Russia broken up? Something else?

  • Is there an end-state or a potential event in the war that you think would falsify your understanding of the war, and convince you that providing aid was a bad idea? Another way of putting it is, do you think your views on the Ukraine war are falsifiable, and if so, what evidence would be sufficient for you to consider it falsified?

...Reading comments from those arguing for Ukraine, I've noted from the start that many of the arguments presented in favor of aid appear to be mutually exclusive. In this most recent discussion, I've seen some people arguing that we should be sending in US or NATO troops, and other people arguing that of course no US or NATO troops are needed and that sending them would be obviously crazy. This is a natural consequence of many people arguing many points of view in one forum, but it seems helpful for people to lay out their own views when possible; often, these positions are simply stated as though they should be obviously true.

I am informed, in fact, that this forum is overrun with Russian Propaganda, such that some no longer wish to participate. This is lamentable if true, so I thought it might help to prompt some elaboration of the pro-Ukraine case.

If doubt over Russian Propaganda is the basis for raising these questions, then you have an odd way of going about clarifying the potentially lamentable situation.

For example, your first 8 questions are-

How much aid would you provide? Weapons? Money? No-Fly Zone? Air support? Troops on the ground? Nuclear umbrella? Something else? What is the end-state your policy is aiming for?

Set aside that many of these are badly structured questions in and of themselves. More importantly, what possible answer to any of these questions from a pro-Ukraine supporter would illuminate, one way or the other, the prevalence of Russian Propaganda in this forum?

If your theory is that the forumites answering only say [Big Amount] because of Russian Propaganda on this forum, as opposed to other sources of information or non-Russian propaganda, would [Smaller Amount] provide any meaningful contrast? If a pro-Ukrainian says 'no, I don't support a No-Fly Zone,' does that... mean anything regarding forum propaganda that a pro-Ukrainian saying 'yes, I do support a No-Fly Zone' would not?

If not, why are you asking the question as a means to gain elaboration about the thing that would be lamentable-if-true? Where is the light? A 'just asking questions' inverse gish gallop- demanding a host of extensive answers from low-effort questions- is certainly a technique, but it is a technique far better suited to generating heat, particularly given the ease with which anti-supporters can selectively pick at the answers they find most vulnerable and ignore the rest, without having to justify (or provide) their own answers to the same questions.

Moreover, and this is an even more basic structural question, why aim the questions at the supporters of Ukraine aid, as opposed to the opponents of Ukraine aid?

Again, if your stated basis of concern is to be believed, then the best survey audience for evaluating the prevalence of Russian Propaganda on the forum is the people most likely to have adopted Russian propaganda. You could then ask questions where you know what the Russian propaganda line answer would be, and compare the answers that align with that line against those that differ. This, in turn, would allow comparison and contrast, showing how much of the opposition to a thing is due to reasons aligned with propaganda versus reasons that are not.

This wouldn't be the end of a genuine search for truth, of course, as not all capital-P Propaganda is false. Sometimes Propaganda boosts rather than causes the narratives of the target audience. Independent convergence is a thing. But you would at least have chosen a more appropriate survey audience.

And this is without poisoning the well with a bad question like-

Is there an end-state or a potential event in the war that you think would falsify your understanding of the war, and convince you that providing aid was a bad idea? Another way of putting it is, do you think your views on the Ukraine war are falsifiable, and if so, what evidence would be sufficient for you to consider it falsified?

Yes, this is a well-poisoning question. We are both aware we are on a rationalist-adjacent forum where intellectual integrity is prioritized. Most of the audience has internalized the principle that good theories need to be falsifiable, because non-falsifiability is itself evidence that no amount of evidence or reason can reverse the premise, which is Bad in a community that prioritizes pursuing Truth and Reason. This is also a cornerstone of the scientific process, practically an axiom of rationalist discourse, and anyone reading this can recognize the follow-on insinuation: if someone says no, they are being unreasonable, since reason demands being open to being disproven (falsification); and if they are being unreasonable in this, they are being unreasonable in the rest as well.

What not everyone will recognize is that you are not only asking a leading question, or leading with a variation of hindsight bias (the mental error of looking back and thinking an eventual result was obvious all along), but insinuating an axiom outside of its appropriate paradigm. The Ukraine War is not a laboratory hypothesis for the scientific method. It is a conflict between multiple strategic actors, and this means that the paradigms of strategy and game theory apply.

And because fewer people are familiar with those paradigms than with rationalist-adjacent forum norms or maxims regarding the scientific method, they wouldn't recognize that the premise of the question doesn't make sense- or even that 'yes' is the irrational answer that should invite doubt. Not just because answering 'yes' would reveal susceptibility to the hindsight-bias invitation- the validity/soundness of a decision rests on the information at hand at the time, not information unavailable to the decider- but on the matter of the strategic paradigm itself.

Outcomes do not falsify strategies.

Outcomes are results, but results are not a strategy, nor are strategies predictive devices in and of themselves. Strategies entail predictions, but equating the two is a composition fallacy- believing that what is true of a part of a thing is true of the whole. Even ignoring that potential fallacy, believing that results falsify the process (strategy) that leads to them is a first-order mistake. It is a common mistake, particularly among the sort of people who believe that a strategy that fails is axiomatically a falsified strategy, but this is a bad axiom. And like bad axioms in any field, anyone whose theoretical understanding of a field rests on bad axioms is building that understanding on poor foundations, whether they acknowledge it as an axiom or not.

This is much easier to see when politically loaded topics are replaced with less political ones, which can be done with some basic substitution coding to produce less politically contentious analogies that rest on the same argument structure and the same axiom of outcome-falsifies-strategy.

For example, this-

Is there an end-state or a potential event in the war that you think would falsify your understanding of the war, and convince you that providing aid was a bad idea? Another way of putting it is, do you think your views on the Ukraine war are falsifiable, and if so, what evidence would be sufficient for you to consider it falsified?

-could be represented for an axiom test as-

Is there an end-state or a potential [outcome of a high-stakes process] that you think would falsify your understanding of [the high-stakes process], and convince you that [engaging in the high-stakes process] was a bad idea? Another way of putting it is, do you think your views on [the high-stakes process] are falsifiable, and if so, what evidence would be sufficient for you to consider it falsified?

-because the axiom has to apply to all models for it to apply as an axiom. The Ukraine War, in turn, is a [high-stakes process], and events including but not limited to how the war ends are outcomes of that process.

However, the axiom-test is just as valid if applied to-

Is there an end-state or a potential [outcome of a high-stakes process] that you think would falsify your understanding of [the high-stakes process], and convince you that [not engaging in the high-stakes process] was a bad idea? Another way of putting it is, do you think your views on [the high-stakes process] are falsifiable, and if so, what evidence would be sufficient for you to consider it falsified?

-because despite swapping 'engage in the high-stakes process' for 'not engage in the high-stakes process,' the axiom of outcome-falsifies-strategy does not depend on what the actual strategy is. That is why it is an axiom, as opposed to a special-pleading rhetorical device / argument-as-soldier to advance the strategist's favored (or dis-favored) positions.
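To make the mechanics of that substitution test explicit, here is a minimal sketch in Python- my own construction for illustration, with the template wording paraphrased from the quotes above rather than taken from the original post:

```python
from string import Template

# The axiom test: the same question template must survive any substitution of
# process and stance, or the premise was never an axiom to begin with.
axiom_test = Template(
    "Is there a potential outcome of $process that would falsify your "
    "understanding of $process, and convince you that $stance it was a bad idea?"
)

for stance in ("engaging in", "not engaging in"):
    print(axiom_test.substitute(process="[the high-stakes process]", stance=stance))
```

Both printed variants have to read as equally sensible for outcome-falsifies-strategy to qualify as an axiom; if only one direction sounds reasonable, the 'axiom' is doing partisan work.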

Now consider this in a less ideologically charged- or propagandized- process where a strategic paradigm applies... like gambling, which has substantial literature overlap with issues of strategic competition, risk management, and so on. The field of game theory was named for those meta-analytical commonalities, and it has been foundational to the field of international relations and conflict, including the Realist school of theorists.

Now just as we substituted [Ukraine War] for [high-stakes process] for the purpose of the axiom, we can substitute [high-stakes process] for something like...

Is there an end-state or a potential [outcome of high-stakes [gambling with your life's savings]] that you think would falsify your understanding of [high-stakes [gambling with your life's savings]], and convince you that [not engaging in high-stakes [gambling with your life's savings]] was a bad idea? Another way of putting it is, do you think your views on [high-stakes [gambling with your life's savings]] are falsifiable, and if so, what evidence would be sufficient for you to consider it falsified?

Somehow, it is less pejorative if someone says 'no, there is no result from gambling with my life's savings that would falsify my understanding that gambling with my life's savings is a bad idea, and convince me that my strategy of not gambling with my life's savings was a bad idea.'

And that somehow is because people will reflexively drop both axioms- the axiom that outcomes falsify strategies, and the axiom that they must be willing to falsify their theories of strategy to be logical and reasonable- on topics of strategy or risk. Not least because they are not valid axioms on topics of strategy and risk, which is really not surprising to anyone with notable experience in games of strategy with uncontrollable factors, or in risk management.

As anyone with experience in games of cards or dice should know, even dominant strategies don't always win. Sometimes the other party has a better hand, and statistically improbable things happen all the time. Similarly, there are times when high-risk/high-reward strategies are the only plausible strategies for [victory], whether because you're making up a deficit or because playing safe is a guaranteed loss. There is no axiom-level 'your strategy is falsified if you lose' principle in play- there are only disputes about what sort of assumptions should go into evaluating the tradeoffs. And, as with most subjective value systems, there is no axiom that people must agree on shared valuations, or defend their own valuation system against yours, regardless of whether you find it convincing. The player who tries their hardest to maximize their probable draws or avoid a loss condition, and the player who throws hands to keep a weaker player in the game because [reasons], are playing fundamentally different games for what [victory] even is, even as they play by the same nominal draw rules.
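A toy illustration of that first point- my own construction, not from any poster here: give one player a strictly better strategy in a trivial dice contest and count how often the better strategy still loses. Exact enumeration over all thirty-six d6 outcomes:

```python
from itertools import product

# Toy model: both players roll a fair d6; player A's "dominant" strategy
# grants a flat +1. Strictly better on average, yet it still loses often.
wins = 0
for a, b in product(range(1, 7), repeat=2):
    if a + 1 > b:  # ties go against A, to be conservative
        wins += 1

print(f"A wins  {wins}/36 rounds = {wins / 36:.1%}")              # 21/36 = 58.3%
print(f"A loses {36 - wins}/36 rounds = {(36 - wins) / 36:.1%}")  # 15/36 = 41.7%
```

A loses over forty percent of rounds despite playing the unambiguously better strategy; no single loss, nor even a run of losses, 'falsifies' the choice of the +1.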

Similarly, a strategy of risk management doesn't get disproven if the risk manifests anyway. Most forms of risk mitigation are about reducing the frequency of a negative outcome, not making it impossible, and so it could well be worse to change strategy after a bad outcome than to stay the course. Again, examples are not hard to find. The negative consequences of false positives driving unnecessary (and sometimes deadly) medical interventions do not disprove the value of screening for deadly diseases at a point where the risk of surgery can pre-empt the risk of no intervention. If someone asked what sort of outcome from a car crash would make someone falsify their belief that safety belts are a good idea, they would be looked at with suspicion, and rightly so, regardless of whether they were a bad-faith actor for [special interest group against safety belt laws] or merely incompetent. Nor are the suspicious being irrational or unreasonable for not engaging with a question about what sort of result of [car accident] would make them falsify their strategy of [maybe we should make seat belts mandatory by law], even if there are possible negative outcomes like malefactors trying to race away from the cops and running over children, and wouldn't someone think of the children?

Because, again, outcomes do not falsify strategies. Not even emotionally evocative outcomes chosen to try to shift the strategic choice.
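To put numbers on the seat-belt version of that point- a sketch with invented probabilities for illustration, not real crash statistics:

```python
# Invented illustrative numbers -- not real crash statistics.
p_fatal_unbelted = 0.50  # assumed fatality risk per serious crash, unbelted
p_fatal_belted = 0.25    # assumed fatality risk per serious crash, belted
crashes = 10_000

print(f"expected deaths, unbelted: {crashes * p_fatal_unbelted:.0f}")  # 5000
print(f"expected deaths, belted:   {crashes * p_fatal_belted:.0f}")    # 2500
```

In this toy model, 2,500 belted deaths still occur. Each one is a manifested risk, and none of them falsifies the mitigation, which only ever promised a lower frequency, never impossibility.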

Outcomes can justify a re-evaluation of strategic assessments. Outcomes may reveal new information to update strategic assumptions. Outcomes may even drive changes in strategy as decision-makers' tastes change, rather than because of any underlying information in and of itself. But outcomes do not 'falsify' strategies. The fact that a strategy turns out poorly (or well!) is independent of the validity of the strategic choice, as good strategies can still fail, and bad strategies can still succeed. And this is because judging by the ultimate outcome is judging by [information] that- by its nature- is not available at the time the strategy is decided. It could only be included if one used it as an assumption.

Which- since the point of a strategy is to try and manage risks and opportunities for the future- is assuming the conclusion to justify a decision on how to approach the future.

Which also is not a good practice as a general point of strategy. But it can work well as a propagandistic frame-setting trick when paired with discrediting pejoratives that dare the target audience to self-identify with the alternative.

Why think about a whole country like a gambler, though? This reminds me of Trump telling Zelensky “you’re gambling with lives and you don’t have the cards”. If your country is at stake, then such extreme caution is required that worst case outcomes do disprove strategy. There is a poster here (forgot who, apologies) that uses the metaphor of XCOM frequently. In XCOM, if you die your run is permanently over. So unlike gambling, in XCOM you only want to take odds that ensure victory, or nearly ensure so. You would never consider a “90% chance of winning the engagement” dice roll, because over eleven engagements you’re going to lose permanently. Now Ukraine can be considered one singular engagement. Should they consider something that has a 10% chance of permanent loss? If someone robbed you and said, “give me 30% of your earnings or I will throw you off a plane with a parachute that has a 10% chance of malfunctioning”, I think the former option is always better because of the value of what is safeguarded. That’s important for Ukrainians (obviously), but it’s also important to the West if Russia continues inflicting casualties such that Ukraine has no more viable manpower. Because then they get the whole country.
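As a quick check of that arithmetic- a sketch using only the 90% figure from the XCOM framing above:

```python
# Probability of surviving a run of engagements at a 90% per-engagement win rate.
p_win, n = 0.90, 11
p_survive_all = p_win ** n

print(f"P(win all {n} engagements): {p_survive_all:.1%}")      # ~31.4%
print(f"P(lose at least once):      {1 - p_survive_all:.1%}")  # ~68.6%
```

So a permanent loss over eleven engagements isn't guaranteed, but at roughly a two-in-three chance it is the likely outcome, which is the force of the framing.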

I have no idea what the actual chance is of Russia taking the whole country — that information is only understood by JD Vance and Trump, who are privy to the absurdly expansive American intelligence network on Russia plus all that Ukraine knows, plus more knowledge of the global economy, plus knowledge about the potential of unrest in the Middle East and over Taiwan! Plus knowledge about both American and Russian technology, plus greater knowledge of nuclear armageddon threats. Has the CIA come out against Trump this time on the Ukraine question?

Put another way, any amount of getting pulled over when driving drunk disproves the strategy of driving drunk. Because you shouldn’t drive drunk, because the consequences are so extreme. Perhaps America believes that Ukraine is currently in geopolitical “drunk driving” mode, which is dangerous to the bus filled with naive Europeans who share the road with him.

Remember that whether Ukrainians live under oligarchic control in corrupt Ukraine, or oligarchic control in Russia, hardly affects their lives. Farmers will farm, miners will mine, CounterStrike players will бляt. From the standpoint of a prole like me, I can see the Slavic Christian happy in either region of control, having their basic needs quite met, hopefully reproducing. When the war is over, the smart ones will continue to move to the West. It makes Russia more powerful if it takes Ukraine — which isn't ideal — but I'm not a permaelite like Robert Swan Mueller III; I haven't invested my reputation into whether America controls the fate of Eastern Europe. And I'm someone whose first American ancestor fought in the War of Independence! This is not a "life or death" war to me, and it shouldn't be for the average Ukrainian, who has been made to believe that it is, much like they were made to believe in Bolshevism a century ago.

There is a poster here (forgot who, apologies) that uses the metaphor of XCOM frequently. In XCOM, if you die your run is permanently over. So unlike gambling, in XCOM you only want to take odds that ensure victory, or nearly ensure so. You would never consider a “90% chance of winning the engagement” dice roll, because over eleven engagements you’re going to lose permanently.

Uh, that's not how X-Com works, and AFAICT it's mostly not how its bastard remake works either. Actually, one of the reasons the original X-Com winds up on so many "best of all time" lists is precisely that it does allow you to continue from losing battles, and thus has a more realistic war feel because you are not (artificially forced to be) some magic plot-armoured force that wins literally every battle.

The remake has an ironman mode that only allows auto-saving, no manual.

The original X-Com was also rather infamous for its odd, weighted chances of ally fire missing even 99% shots, while aliens were able to snipe characters from across the map. In the original, you could replay missions, but those missions could be rather lengthy, and it could be difficult to tell where your potential screwup was, meaning you needed to play very conservatively and tactically in the early and mid-game while you built up your economy and tech level.

And this still didn't stop aliens from sniping your guys as they came down the ramp, as sheer random chance could still fuck you over hard.

So his comparison is a little loose, but I get the overall gist of his argument.

The remake has an ironman mode that only allows auto-saving, no manual.

I wasn't talking about loading saves; my point was that you can continue from a lost battle without having to just load a save.

The original X-Com was also rather infamous for its odd, weighted chances of ally fire missing even 99% shots, while aliens were able to snipe characters from across the map.

Have you actually played it? This meme is just from people whining about missing, not a real thing. (Actually, the remake has a worse case of "you missed an alien standing right next to you"; the original makes rolled misses fire randomly within a cone, which will probably still hit if the alien's close enough, whereas the remake forces rolled misses to actually miss.)

And this still didn't stop aliens from sniping your guys as they came down the ramp, as sheer random chance could still fuck you over hard.

That's not random chance; that's you not knowing how to negate it. Smoke grenades give you concealment and thus block reaction fire. In TFTD dye grenades don't work, but in TFTD there's much less of a problem with this anyway (because the Triton has a door and is flush with the seafloor, and because you can open doors without stepping through them in TFTD).

I wasn't talking about loading saves; my point was that you can continue from a lost battle without having to just load a save.

Certainly. This doesn't stop bad luck from potentially slaughtering your front-runner team, however.

Have you actually played it?

X-Com: UFO Defense was one of the first games where I was introduced to the concepts of 'pirating' and 'cheats' by someone in charge of the computer lab at a school I was helping out at. And while I don't have it installed right now, it's currently sitting nice and neat in my Steam library.

Yes, I've played it. I've played it a lot, thank you. You trying to brush it off as 'just a meme' makes me wonder if you've played it.

that's you not knowing how to negate it

Yes. And that's part of the difficulty curve- negating random chance as much as possible. The original X-Com has a surprising amount of tactical depth that can make even normally terrifying circumstances (such as night missions or breaching buildings/landed spaceships) trivial, but even experienced gamers likely aren't going to get that out of the box and will experience a learning curve.

Yes, I've played it. I've played it a lot, thank you. You trying to brush it off as 'just a meme' makes me wonder if you've played it.

I've beaten it on Superhuman without active psi and with Cydonia on June 1, and I literally wrote a non-negligible chunk of the wiki.

The hit chances are accurate, at least for "hit" rolls vs. "miss" rolls (rolled misses can still hit at close range, and I think it's possible for rolled hits to miss at extreme range or where there's cover involved, both of these symmetrical between X-Com and aliens). Aliens hit a lot on higher difficulties because their accuracy stats are pretty high. If you think there's "weighting" going on that makes X-Com rolls systematically worse and alien rolls systematically better than the normal formula, you're seeing a pattern that's not actually there, presumably due to negativity bias letting you recall "bad" results better than "good" ones. I guess you're one of the (many) sources of that meme.

There are three ways that the AI "cheats"; it can perform Auto mode reaction shots (you can only use Snap), it "remembers" the position of your units after they leave LoS (most notably allowing psi-attacks on them), and it gets omniscience after turn 20. This isn't one of them.