This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
There's been a fair amount of discussion of America's military aid to Ukraine, and no few condemnations of those of us who have opposed that aid. I am informed, in fact, that this forum is overrun with Russian Propaganda, such that some no longer wish to participate. This is lamentable if true, so I thought it might help to prompt some elaboration of the pro-Ukraine case.
For those who support aid to Ukraine, in whatever form: suppose that you personally are given complete control over the US government, and can set policy however you wish. What would your answers be to the following questions?
How much aid would you provide? Weapons? Money? No-Fly Zone? Air support? Troops on the ground? Nuclear umbrella? Something else?
What is the end-state your policy is aiming for? A ceasefire? Deter subsequent Russian invasion? Restoration of Ukraine's original borders? The Russian army destroyed? Putin deposed? Russia broken up? Something else?
Is there an end-state or a potential event in the war that you think would falsify your understanding of the war, and convince you that providing aid was a bad idea? Another way of putting it is, do you think your views on the Ukraine war are falsifiable, and if so, what evidence would be sufficient for you to consider it falsified?
...Reading comments from those arguing for Ukraine, I've noted from the start that many of the arguments presented in favor of aid appear to be mutually exclusive. In this most recent discussion, I've seen some people arguing that we should be sending in US or NATO troops, and others arguing that of course no US or NATO troops are needed and that sending them would be obviously crazy. This is a natural consequence of many people arguing many points of view in one forum, but it seems helpful for people to lay out their own views when possible; too often, these positions are simply stated as though they should be obviously true.
If doubt over Russian Propaganda is your basis for raising questions, then you have an odd way of going about clarifying the potentially lamentable situation.
For example, your first 8 questions are-
Set aside that many of these are badly structured questions in and of themselves. More importantly, what possible answer could any of these questions provide from a pro-Ukrainian supporter that would or would not illuminate the prevalence of Russian Propaganda in this forum?
Suppose your theory is that the forumites answering are only saying [Big Amount] because of Russian Propaganda on this forum, as opposed to other sources of information or non-Russian propaganda. Would [Smaller Amount] provide any meaningful contrast? If a pro-Ukrainian says 'no, I don't support a No-Fly Zone,' does that... mean anything about forum propaganda that a pro-Ukrainian saying 'yes, I do support a No-Fly Zone' would not equally indicate?
If not, why ask the questions as a means to gain elaboration about the thing that would be lamentable-if-true? Where is the light? A 'just asking questions' inverse gish gallop- demanding a host of extensive answers to low-effort questions- is certainly a technique, but it is a technique far better suited to generating heat, particularly given the ease with which anti-supporters can selectively pick at the answers they find most vulnerable and ignore the rest, without having to justify (or provide) their own views on the same questions.
Moreover- and this is an even more basic structural question- why aim the questions at the supporters of Ukraine aid, as opposed to the opponents of Ukraine aid?
Again, if your stated basis of concern is to be believed, then the best survey audience with which to evaluate the prevalence of Russian Propaganda on the forum is the people most likely to have adopted Russian propaganda. Then you could ask questions where you know what a Russian propaganda line answer would look like, and compare the answers that align with the Russian propaganda line against those that differ. This, in turn, would allow comparison and contrast, showing how much of the opposition to a thing is due to reasons aligned with propaganda versus those that are not.
This wouldn't be the end of a genuine search for truth, of course, as not all capital-P Propaganda is false. Sometimes Propaganda boosts rather than causes the narratives of the target audience. Independent convergence is a thing. But you would at least have chosen a more appropriate survey audience.
And this is without poisoning the well with a bad question like-
Yes, this is a well-poisoning question. We are both aware we are on a rationalist-adjacent forum where intellectual integrity is prioritized. Most of the audience has internalized the principle that good theories need to be falsifiable, because non-falsifiability is itself evidence that no amount of evidence or reason can reverse the premise, which is Bad in a community that prioritizes pursuing Truth and Reason. This is also a cornerstone of the scientific process, practically an axiom of rationalist discourse, and anyone reading this can recognize the follow-on insinuation: if someone says no, they are being unreasonable, since reason demands being open to being disproven (falsification), and if they are being unreasonable in this, they are being unreasonable in the rest as well.
What not everyone will recognize is that you are not only asking a leading question, or leading with a variation of hindsight bias (the mental error of looking back and thinking an eventual result was obvious all along), but insinuating an axiom outside of its appropriate paradigm. The Ukraine War is not a laboratory hypothesis for the scientific method. It is a conflict between multiple strategic actors, which means the paradigms of strategy and game theory apply.
And because fewer people are familiar with those paradigms than with rationalist-adjacent forum norms or maxims about the scientific method, they wouldn't recognize that the premise of the question doesn't make sense- or even that 'yes' is the irrational answer that should invite doubt. Not just because answering yes would reveal susceptibility to the hindsight-bias invitation- the validity/soundness of a decision rests on the information at hand at the time, not information unavailable to the decider- but on the matter of the strategic paradigm itself.
Outcomes do not falsify strategies.
Outcomes are results, but results are not a strategy, nor are strategies predictive devices in and of themselves. Strategies entail predictions, but equating the two is a composition fallacy: believing that what is true of a part of a thing is true of the whole. Even ignoring that fallacy, believing that results falsify the process (strategy) that led to them is a first-order mistake. It is a common mistake, particularly among the sort of people who believe that a strategy that fails is axiomatically a falsified strategy, but it is a bad axiom. And as with bad axioms in any field, anyone whose theoretical understanding rests on them is building on poor foundations, whether they acknowledge the axiom or not.
This is much easier to see when politically loaded topics are substituted by less political topics, which can be done by some basic coding to produce less politically contentious analogies that rest on the same argument structure and axiom of outcome-falsifies-strategy.
For example, this-
-could be represented for an axiom test as-
-because the axiom has to apply to all models for it to apply as an axiom. The Ukraine War, in turn, is a [high-stakes process], and events including but not limited to how the war ends are outcomes of that process.
However, the axiom-test is just as valid if applied to-
-because despite flipping the 'engage with high-stakes process' with a 'not engage in high-stakes process,' the axiom of outcome-falsifies-strategy does not depend on what the actual strategy is. That is why it is an axiom, as opposed to a special pleading rhetorical device / argument-as-soldier to advance the strategist's favored (or dis-favored) positions.
Now consider this in a less ideologically charged- or propagandized- process where a strategic paradigm applies... like gambling, which has substantial literature overlap with strategic competition, risk management, and so on. The field of game theory was named for these meta-analytical commonalities, and has been foundational to the study of international relations and conflict, including the Realist school of theorists.
Now just as we substituted [Ukraine War] for [high-stakes process] for the purpose of the axiom, we can substitute [high-stakes process] for something like...
Somehow, it is less pejorative if someone says 'no, there is no result from gambling with my life's savings that would falsify my understanding that gambling with my life's savings is a bad idea, and convince me that my strategy of not gambling with my life's savings was a bad idea.'
And that somehow is because people will reflexively drop both axioms- that outcomes falsify strategies, and that one must be willing to falsify one's theories of strategy to be logical and reasonable- on topics of strategy or risk. Not least because they are not valid axioms on topics of strategy and risk. Which is really not surprising to anyone with notable experience in games of strategy with uncontrollable factors, or in risk management.
As anyone with experience in games of cards or dice should know, even dominant strategies don't always win. Sometimes the other party has a better hand, and statistically improbable things happen all the time. Similarly, there are times when high-risk/high-reward strategies are the only plausible strategies for [victory], whether because you're making up a deficit or because playing safe is a guaranteed loss. There is no axiom-level 'your strategy is falsified if you lose' principle in play- there are only disputes about what sort of assumptions should go into evaluating the tradeoffs. And, as with most subjective value systems, there is no axiom that people must agree on shared valuations, or defend their own valuation system against yours, regardless of whether you find it convincing. The player who tries their hardest to maximize their probable draws or avoid a loss condition, and the player who throws hands to keep a weaker player in the game because [reasons], are playing fundamentally different games for what [victory] even is, even as they play by the same nominal draw rules.
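The 'dominant strategies don't always win' point can be put in toy-simulation form (a hypothetical sketch with made-up numbers, not anything from the thread): a strategy that wins a clear majority of individual games still loses a large fraction of them, so any single loss says little about whether the strategy was sound.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def loss_rate(win_prob: float, trials: int) -> float:
    """Fraction of games lost by a strategy with per-game win probability win_prob."""
    losses = sum(random.random() >= win_prob for _ in range(trials))
    return losses / trials

# A 'dominant' strategy that wins 60% of individual games still loses
# roughly 4 games in 10; the outcome of any one game does not falsify it.
print(f"loss rate: {loss_rate(0.60, 100_000):.3f}")
```

The 60% figure is arbitrary; the shape of the result is the same for any edge short of certainty.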
Similarly, a strategy of risk management doesn't get disproven if the risk manifests regardless. Most forms of risk mitigation are about reducing the frequency of a negative outcome, not making it totally impossible, and so it could well be worse to change strategy after a bad outcome than to stay the course. Again, examples are not hard to find. The negative consequences of false positives driving unnecessary (and sometimes deadly) medical interventions do not disprove the value of screening for deadly diseases at a point where the risk of surgery can pre-empt the risk of no intervention. If someone asked what sort of outcome from a car crash would make someone falsify their belief that seat belts are a good idea, they would be looked at with suspicion, and rightly so, regardless of whether they were a bad-faith actor for [special interest group against seat belt laws] or merely incompetent. Nor are the suspicious being irrational or unreasonable for not engaging with a question about what sort of result of [car accident] would make them falsify their strategy of [maybe we should make seat belts mandatory by law], even if there are possible negative outcomes like malefactors racing away from the cops and running over children, and won't someone think of the children?
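The frequency-not-impossibility point can also be made concrete with a small simulation (again with entirely hypothetical rates): a mitigation that cuts the chance of a bad outcome tenfold still produces some bad outcomes over enough trials, so observing one bad outcome cannot, by itself, disprove the mitigation.

```python
import random

random.seed(1)  # fixed seed for reproducibility

def bad_outcomes(p_bad: float, trials: int) -> int:
    """Count how many of `trials` independent events end badly, each with probability p_bad."""
    return sum(random.random() < p_bad for _ in range(trials))

unmitigated = bad_outcomes(0.02, 100_000)   # hypothetical 2% risk, no mitigation
mitigated = bad_outcomes(0.002, 100_000)    # mitigation cuts the risk 10x

# The mitigated count is far lower but not zero: the risk still manifests
# sometimes, which is exactly what mitigation (not elimination) predicts.
print(unmitigated, mitigated)
```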
Because, again, outcomes do not falsify strategies. Even emotionally-evocative outcomes chosen to try and shift the strategic choice.
Outcomes can justify a re-evaluation of strategic assessments. Outcomes may reveal new information to update strategic assumptions. Outcomes may even drive changes in strategy as decision-makers' tastes change, rather than anything in the underlying information itself. But outcomes do not 'falsify' strategies. The fact that a strategy turns out poorly (or well!) is independent of the validity of the strategic choice, as good strategies can still fail, and bad strategies can still succeed. And this is because judging by the ultimate outcome is judging by [information] that- by its nature- was not available at the time of the decision about the strategy. It could only be included if one assumed it as a premise.
Which- since the point of a strategy is to try and manage risks and opportunities for the future- is assuming the conclusion to justify a decision on how to approach the future.
Which also is not a good practice as a general point of strategy. But it can work well as a propagandistic frame-setting trick when paired with discrediting pejoratives that dares the target audience to self-identify themselves with the alternative.
Why think about a whole country like a gambler, though? This reminds me of Trump telling Zelensky “you’re gambling with lives and you don’t have the cards”. If your country is at stake, then such extreme caution is required that worst case outcomes do disprove strategy. There is a poster here (forgot who, apologies) that uses the metaphor of XCOM frequently. In XCOM, if you die your run is permanently over. So unlike gambling, in XCOM you only want to take odds that ensure victory, or nearly ensure so. You would never consider a “90% chance of winning the engagement” dice roll, because over eleven engagements you’re going to lose permanently. Now Ukraine can be considered one singular engagement. Should they consider something that has a 10% chance of permanent loss? If someone robbed you and said, “give me 30% of your earnings or I will throw you off a plane with a parachute that has a 10% chance of malfunctioning”, I think the former option is always better because of the value of what is safeguarded. That’s important for Ukrainians (obviously), but it’s also important to the West if Russia continues inflicting casualties such that Ukraine has no more viable manpower. Because then they get the whole country.
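The XCOM arithmetic in the comment above can be checked directly (assuming the engagements are independent and each carries the stated 90% win chance): the odds of surviving all eleven come to under a third.

```python
p_win = 0.90      # per-engagement win probability from the comment above
engagements = 11

p_survive_all = p_win ** engagements
p_at_least_one_loss = 1 - p_survive_all

print(f"P(win all {engagements}): {p_survive_all:.3f}")              # 0.314
print(f"P(at least one permanent loss): {p_at_least_one_loss:.3f}")  # 0.686
```

So at 90% odds per engagement, a permanent loss somewhere in the run is the more likely outcome, which is the intuition the comment is leaning on.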
I have no idea what the actual chance is of Russia taking the whole country — that information is only understood by JD Vance and Trump, who are privy to the absurdly expansive American intelligence network on Russia plus all that Ukraine knows, plus more knowledge of the global economy, plus knowledge about the potential of unrest in the Middle East and over Taiwan! Plus knowledge about both American and Russian technology, plus greater knowledge of nuclear armageddon threats. Has the CIA come out against Trump this time on the Ukraine question?
Put another way, any amount of getting pulled over when driving drunk disproves the strategy of driving drunk. Because you shouldn’t drive drunk, because the consequences are so extreme. Perhaps America believes that Ukraine is currently in geopolitical “drunk driving” mode, which is dangerous to the bus filled with naive Europeans who share the road with him.
Remember that whether Ukrainians live under oligarchic control in corrupt Ukraine, or oligarchic control in Russia, hardly affects their lives. Farmers will farm, miners will mine, CounterStrike players will бляt. From the standpoint of a prole like me, I can see the Slavic Christian happy in either region of control, having their basic needs quite met, hopefully reproducing. When war is over, the smart ones will continue to move to the West. It makes Russia more powerful if it takes Ukraine — which isn't ideal — but I'm not a permaelite like Robert Swan Mueller III, I haven't invested my reputation into whether America controls the fate of Eastern Europe. And I'm someone whose first American ancestor fought the war of independence! This is not a "life or death" war to me and it shouldn't be for the average Ukrainian, who has been made to believe that it is, much as they were made to believe in Bolshevism a century ago.
Uh, that's not how X-Com works, and AFAICT it's mostly not how its bastard remake works either. Actually, one of the reasons the original X-Com winds up on so many "best of all time" lists is precisely that it does allow you to continue from losing battles, and thus has a more realistic war feel because you are not (artificially forced to be) some magic plot-armoured force that wins literally every battle.
Eh, there's definitely points in the campaign of both the original and remake where a squad wipe is game over. If you can't get a B-team up, armed, and (in og) psi-resist tested before you wipe, you'll end up trying to fight ethereals with a truck full of laser rifle rookies who get mind-controlled into killing each other every turn.
The remake made it even worse with the 4 soldier limit. Very easy to not get any backup team trained, because how are you gonna take a dead weight rookie as 1/4th of your squad? Long War raising that to 6-10 and adding a fatigue system to make you rotate troops was a huge improvement.
About the only truly-irrecoverable squad wipe in the original (besides Cydonia, of course) is if your main base gets raided early on. This is how I lost my first attempt at TFTD's Superhuman difficulty: a 1 January FBA proc wiped my base while my Triton and main squad were out (because I might be good, but I'm not good enough to beat an Aquatoid Dreadnought crew with three guys), and while I didn't lose immediately from last-base-killed (because I'd placed another at game start) there wasn't enough time to get my second base operational before I lost from poor score.
While you do need substantial amounts of equipment/training to fight Ethereals, by the time they show up you should have a worldwide interception grid, which means that if you're stuck without an Ethereal-ready squad, you can just stop fighting Ethereal missions until you have one (with perhaps an exception for an Ethereal Small/Medium Scout, if you haven't gotten a psionic capture yet; they aren't too bad). You won't lose from poor score, because shootdowns themselves give bucketloads of points and if you're shooting down incoming UFOs the aliens won't score from succeeding at their missions (in particular, because you're shooting down the Terror Ships, they won't be able to create Terror Sites, which means no massive penalty for ignoring/fleeing from them). Quite recoverable, if tedious.
I haven't actually played the remake, to be clear, and that "mostly" was doing some work because I did know about soldiers being less expendable and there being more hardcoded-game-over missions.
EDIT: "Craft is lost" is painful enough that it's usually not worth risking one, but there are cases where I think optimum play is to take substantial chances of one. The most notable is the first Sectoid terror site in UFO, which will be an absolute ball-buster of a mission (this is the only mission in most games of UFO that approaches Cydonia in relative difficulty; early base defences can be worse but are rare) but which is your first chance to get a Sectoid Leader capture. It's worth taking a substantial chance of failure, including craft loss, to get that capture; you don't know when you'll get another chance and you really want psi to at least be in the works when the Ethereals show up in July. In TFTD there aren't many cases where it's both an incredibly hard mission and a chance to get something rare. The first Terror Site is quite hard, and is technically the only guaranteed appearance of Deep Ones, but there's an 80% chance to get Deep Ones in both the February and March Terror Sites so it's probably not worth risking a wipeout; the first Aquatoid land mission (and thus Calcinite corpse chance) isn't quite the horror show of the first Sectoid terror site in UFO (due to Calcinites being a hell of a lot weaker than Cyberdiscs, and due to a lack of need for rank identification/live capture) so while I think it's probably worth risking a craft loss if you have to (to get drills for cracking open Lobster Men), and I have thus risked it, you usually aren't in dire enough straits that it's necessary (at least, not if you're as good at the game as I am, and if you're not playing some kind of challenge run). There are, of course, plenty of hard missions in TFTD, but most of them can be aborted without missing out on anything irreplaceable.
I guess you're a lot better at the actual "UFO defense" part of UFO defense than I ever was lol. I always struggled with the air war, and couldn't have pulled off dominance like that with interceptors.
Basically, just drop bases in Germany, North Dakota, Beijing, South Africa, Argentina and Tasmania, and give them all Hyper-Wave Decoders (I beeline HWDs immediately after the no-brainer Laser Rifle; I usually deploy the first three bases with Large Radars but wait until HWDs for the last three). This is cash-hungry, but because it lets you recover more UFOs it pays off very fast. Normally I only keep three Interceptors and wait until Avengers to have interception everywhere (because I don't do much interception, preferring ground assaults when possible, and because you need advanced craft to shoot down Battleships anyway), but if I were on the back foot like that and I didn't have Avengers I would spam them (they arrive in four days and Plasma Beams build fast). And yeah, there's a bit of micro with sending Interceptors to where the UFO is going rather than directly at the UFO to avoid long tail chases.