07mk

1 follower   follows 0 users   joined 2022 September 06 15:35:57 UTC

Verified Email   User ID: 868


I had a heated 3-month fling with FTL back when it came out, before I went somewhere for vacation and quit cold turkey. It was my first roguelike (and one of my last - definitely not a good genre for me), and adjusting to the expectations of extreme punishment plus cruel RNG took a while. I remember it took me dozens of tries to win the game for the first time, and then I beat it immediately on the next run. From then on, it was like a 50/50, which really surprised me, because of how utterly wrecked I used to get, and it's not as if the challenge had changed by leveling up or something.

In the middle of it, it felt to me like roleplaying in a very pure way, creating a narrative of a desperate ship captain in this scifi setting who needs to pull on all resources and luck to barely edge out survival for one more node. I don't think I've gotten quite the same experience from other games. And, unfortunately, I think I'd prefer to keep it that way, given how much time I'm likely to waste if I found something similar.

> Nick Fuentes is probably the second most important person to watch on the Republican side after Trump himself. He has a lot of "energy", and has the benefit of being extremely online. People keep making the mistake that the "real world" is more important than a small fringe of online crazies, and they keep getting proven wrong over and over and over (e.g. with woke, the alt right, gender identity on Tumblr). The arc of MAGA is long, but it bends towards Based.

I know very little about Fuentes himself, but the analysis here seems wrong. The alt-right, as best I can tell, has had pretty much no impact on actual policy and very little on the national discourse around politics and ideology. Which is as expected from a small fringe of online crazies.

The "woke," and gender identity on Tumblr (subset or, at best, nearly fully overlapping set with "woke"), on the other hand, have obviously had immense and consequential influence in both, and this is due to the fact that they weren't a small fringe of online crazies. Rather, by the time this sort of argument was created to shut down the people trying to bring attention to the anti-liberalism of the ideology that would go on to evolve to something called "woke," ie around early 2010s, it had already been hegemonic in academia for at least a decade and nearly ubiquitous for multiple decades, with plenty of signs of mainstream journalism and mainstream entertainment getting bought in.

So things correctly labeled as a small fringe of online crazies had little impact on real-world politics and the everyday life it influences, while things incorrectly labeled as such did have a big impact.

Maybe this Fuentes character's ideas will break into the mainstream over the next 3 years, but so far, him being just a big fish in a small, fringe, online, crazy pond doesn't make me think he's particularly worth paying attention to with respect to national politics.

> There are some people that treat advice as a full on gift giving process. They expect accolades for giving the gift. They expect the receiver to at least pretend that they liked the gift. And the gift they'd always like in return is for the receiver to act on their advice. This seems like a toxic approach to me.

As an aside, this seems like a toxic approach to gift-giving, not just advice-giving. The entire point of a gift is that you're giving it to someone without expecting anything in return; that's the very nature of a gift that makes it a gift, as otherwise it would be an implicit bribe or payment. The gratitude and pomp and circumstance can be pleasant and even appreciated when they're there, but expecting them in return for a gift means that it wasn't a gift in the first place; it was a payment, made in order to get the receiver to play-act the part of "grateful gift recipient" for the gift giver's satisfaction.

I saw someone on Twitter comment that it's pretty hilarious and absurd that the company that owns Marvel, Star Wars, and Indiana Jones would need to look for a new IP to help draw in young male audiences in their teens & 20s. You could probably have put any random 8th grade boy in charge of any one of those franchises back when Disney acquired them and he'd have turned them into at least good draws for that crowd, if not great. Yet the actual executives in charge appear to have less competence than that (or, perhaps, different incentives than making the best product or the most money).

Makes me think there could be a modern remake of Big where the kid becomes a studio exec instead of a toy store VP and greenlights hits the adult execs would have passed on. It would have to be a longer timeframe, and also, I'm guessing Big probably won't get a remake anytime soon given the implied statutory rape.

I beat Stellar Blade recently, with all the achievements, collectibles, etc. It caused a minor culture war kerfuffle in the video game community for having a conventionally attractive female protagonist who is highly sexualized in costumes, camera angles, and such. It was also the first 3D action game of this type by Shift Up, which is better known for Goddess of Victory: Nikke, a gacha game definitely on the "gooner" side of the spectrum, so it was actually reasonable to wonder if it was just going to be shallow eye candy. But it turned out to be right up there with the best action games I've played recently, like Elden Ring or Lies of P (the latter of which was also the 1st 3D action game by its dev, IIRC).

Looking at trailers, I remember wondering if it was going to be a DMC-like or Ninja Gaiden-like, something sorely lacking in the industry these days (we'll see how NG4 does soon). It turned out to be a Sekiro-like more than anything, with a similar perfect-parry-based posture system, except it uses discrete perfect-parry counts, the gauge doesn't recover automatically over time, and filling it triggers not a deathblow but a big hit, like a visceral in Bloodborne. It doesn't feel as natural as Sekiro's system, nor does it incentivize aggression quite as much, but it also had its own quirks that I appreciated, like being able to count parries to set up viscerals right after boss phase transitions. It also had perfect dodges, which slowed down time during the dodge like Witch Time in Bayonetta, though that didn't extend to giving you time to punish.

These mechanics only work if the enemies are designed properly for them, and that's where the game really shone. The bosses were the highlights, but for every enemy, it was clear the devs thought carefully about how to communicate timings to the player via animations and attack patterns. It wasn't as well executed as From Soft's best work both in terms of telegraphing attacks and pushing the player to really tight openings, but it was only a step or two behind.

I found Normal difficulty too easy after the 1st 2 bosses and restarted the game on Hard, which was originally not available until NG+, with an upgraded health bar. Played like this, most bosses took me 1-2 hours and sometimes dozens of deaths, but the design of the bosses was such that it was a fun learning experience the whole time. Regular mobs in the overworld were also 2-shotting me, so exploration was slow and almost souls-like in pace; it took me about 70 hours to beat the game with all side quests, though played normally, I've heard it's about 20-30 hours.

Like Sekiro, it had skill trees instead of stat upgrades using souls, and you didn't drop your Exp when you died, so the souls-like "enemies revive when you rest" system didn't really mean a whole lot. Besides weak and strong attacks on Square and Circle, which could be chained in different ways for combos, there were special attacks called Bursts and Beta Attacks, used via L1 or R1 + face button, which drew on independent but related resources that recharged through actions during combat. I think what made the combat so satisfying, besides the parries, was managing these special attacks and their unique abilities, like i-frames, self-heal, or attack speed-up.

So, highly recommended to anyone who enjoys 3rd person action games, especially on PC, where mods are plentiful and the game is incredibly well optimized and bug-free: zero crashes in 120+ hours and a solid, consistent 60fps+ on my 4090. I'm glad they decided not to contribute yet another souls-like to the flood of them in the industry right now. Again, it's heavily souls-inspired, but it also draws from many other games, creating its own thing. I just wish it drew more from the crazy action games like DMC, since crowd control and 1-on-many combat in general were mostly not great.

I've heard people criticize the story for being too predictable, but I thought it was exactly right for this kind of game, which almost feels like a throwback to the mostly straightforward action games of the 360/PS3 era. I found it funny just how much the game took inspiration from Nier: Automata, what with the sexy woman flying down from space to the post-apocalyptic wasteland that is Earth to fight off the beings that took it over from humans, but then discovering the deep dark secret of what really happened, etc. They even hired the same composer to do a lot of the soundtrack, so I'm pretty sure they knew what they were doing.

Thanks for clarifying. Your comment makes sense. Your belief that crushedorange's comment above isn't a case of action intended to "lead to more people being free to express themselves" seems almost certainly true, based on humans in general and, specifically, my vague, fuzzy memory of his past comments.

Fair enough, crushedorange's comment indicates pretty clearly that in his specific case, he abandoned his principles. An excessively charitable reading would be that he learned that his naive implementation of free speech principles actually harmed free speech and, as such, abandoned those principles and replaced them with ones that would increase free speech. But there's no way to actually figure out whether he's upset that following his previous principles meant that free speech as a principle was being failed, or whether he's partisanly upset that following them meant that his side was losing; though the former would be charitable, the latter seems far more likely.

But on this:

> My claim was that for a commitment to free speech/intellectual freedom/etc. to count as a "moral principle", it must be an axiomatic belief, not a context-dependent one. You must believe that all else being equal, it is wrong to suppress speech, in and of itself. You can't just believe that it's inadvisable to do so if you want a certain kind of society; and you certainly can't just believe that being pro-free speech leads to good life outcomes for you personally. You have to believe, consistently, that censorship is in itself an evil which you should try to minimize.

This seems like a straightforward way of restating what I said:

> claiming that following principles deontologically are better than doing so consequentially

If a commitment to free speech doesn't count as a "moral principle" when you implement it by taking action that leads to more people being more free to express themselves, rather than by taking action that leads to any particular instance of observed speech going unpunished, then that's just straightforward supremacy of deontology over consequentialism as a way of doing morality.

> Because there are such things as moral imperatives which you should follow even if they do not bring you material benefits; indeed, even if following them costs you dear. Having been persecuted does not give you a license to persecute in turn, any more than having been raped gives you a license to rape your rapist. It's not about what it gets you - it's about right and wrong.

This seems to be claiming that following principles deontologically is better than doing so consequentially. Which may be the case, but it's not really argued for here. I do think there's a strong argument for it, in that consequentialist calculations are irredeemably fraught with bias in such a way as to be meaningless, since people will always, in good faith, calculate the consequences in a way that is biased in their favor.

But the case for taking principles consequentially isn't weak, either. If naively following some principle in a deontological way provably/reliably/logically/etc. reduces [Good Thing], then how do we justify calling the principle "good"? Well, we don't need to follow it in some naive deontological way, but rather by following consequences.

Let's say a doctor has a personal principle that he will endeavor to make his patients no worse off than the counterfactual in which they never saw him. Counterfactuals are intrinsically hard to predict and fraught with bias. So he might decide to avoid his personal bias and just take the deontological position that any action that harms the patient's health is out of bounds for him. Puncturing someone's skin certainly harms the patient's health, even if trivially, and so he never draws blood for tests or gives his patients an IV (or allows his staff to). This doctor would be less effective than a doctor who follows the exact same principle but thinks in longer time horizons and figures that the harm of a syringe prick is outweighed by the benefits of what it enables, in terms of leaving his patient no worse off than otherwise. And in society at large, people who believe in the same principle would commonly prefer the latter, consequentialist doctor as fulfilling their principles better than the former, deontologist one.

So we could follow the principle of free speech by just never punishing anyone for saying anything (with rare exceptions, etc.) and letting the chips fall where they may. I would prefer this, personally. We could also follow it by checking how certain behaviors affect people's ability to exercise free speech in society and then taking the action that seems most likely to increase it (or not reduce it, or maximize some metric, etc.). I would prefer not this, personally, because, again, this sort of prediction is so fraught with bias that I don't know that there's a way to do it credibly. But I think it's perfectly reasonable to disagree with me on that.

> It's a few hundred millions, max. After that, the sun will slowly increase its irradiance by a relatively small percentage, resulting in a runaway greenhouse effect from atmospheric water vapor, which will end the carbon cycle on Earth.

Ah, I had thought we had at least a billion, but I hadn't done that much research. I'll take your correction at face value. You also answer here the question you asked earlier in this comment about what catastrophes I'm worried about. I'm worried about the big one.
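(For anyone who wants to check the arithmetic: here's a minimal sketch, assuming Gough's 1981 approximation for solar luminosity over time and treating a +10% irradiance increase as the rough greenhouse threshold; both the formula choice and the threshold are my assumptions, not anything from this exchange. It lands at roughly a billion years, and published habitability estimates I've seen range from several hundred million years up to over a billion depending on the climate feedbacks assumed, so both our numbers are within the spread.)

```python
# A back-of-the-envelope check, not a climate model. Assumes Gough's
# (1981) solar luminosity approximation and treats a +10% irradiance
# increase as the (very rough) runaway/moist greenhouse threshold.

T0_GYR = 4.57  # the Sun's current age in billions of years

def relative_luminosity(age_gyr: float) -> float:
    """Solar luminosity relative to today: L/L0 = 1 / (1 + 0.4*(1 - t/t0))."""
    return 1.0 / (1.0 + 0.4 * (1.0 - age_gyr / T0_GYR))

# Step forward ~1 Myr at a time until luminosity first exceeds +10%.
age = T0_GYR
while relative_luminosity(age) < 1.10:
    age += 0.001

print(f"+10% solar luminosity in ~{(age - T0_GYR) * 1e3:.0f} Myr")  # ~1040 Myr
```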

> So, those timelines are so extremely long, we can worry about them when we get really, really bored. The problems we have to solve before that need to be solved here, because solving them here is cheaper than living in space or on Mars.

I disagree. We won't ever get really, really bored; at least, that's my prediction, based on our evident ability to find extremely banal and inconsequential problems extremely interesting when there's a dearth of consequential problems nipping at our heels. And escaping boredom is a really bad motivator for accomplishing something as difficult as sustainable life off Earth. If we take the attitude that the timeline is just so long that we can worry about it in the future, that's a formula for never doing it at all and letting humanity get snuffed out. One might hope that the human spirit would overcome and survive when push comes to shove, and I'd guess that it would, but I think things would be more pleasant if push didn't come to shove. Plenty of people survived the Titanic and made it to America, but it would have been more pleasant for everyone involved if that had been accomplished by the ship just reaching its destination safely instead of having to rely on lifeboats and another ship coming around to pick them up. If we can clearly see an iceberg in our path, it's best to plan for it now instead of relying on future us to solve it when there's less time to work out the kinks.

And there's no need to solve cheaper problems before expensive problems. Our problem-solving abilities aren't fungible like money, and we can devote resources to both expensive and cheap problems at the same time in a way that's more beneficial overall for humanity. Obviously no one can actually work out a credible measure of "benefit to humanity" or whatever, and there are arguments to be made about the details, including the notion that, in 2025, all resources devoted to researching and accomplishing space travel would be better spent on something else on Earth, which I disagree with but which I think isn't unreasonable. But that's a different notion than the one that there's no point to humans living in space. Even before a planet/solar system-destroying catastrophe, there's a point, because living in space will force the individuals living there to innovate and learn the things we don't even know that we don't know about how to live in space, so that we can actually get it right when shit hits the fan for all of Earth (some of them may well die along the way as they encounter these unknown unknowns, but that's a sacrifice I'm willing to make).

> besides escaping the "single planet trap" which hedges against catastrophes that are extremely unlikely, many of which would still leave the surviving humans on earth better off than the humans surviving in our potential colonies

This isn't true, though. The catastrophes that would literally leave no humans (or any life as we know it) alive on Earth aren't extremely unlikely; they are basically guaranteed according to our best understanding of physics and astronomy. Now, Mars is close enough to Earth that it's not an effective hedge against these catastrophes, but one must step into one's entranceway before one steps out one's front door.

Fortunately, we likely have millions, if not billions, of years to get human civilization sustainable on another planet that's safe from these guaranteed catastrophes on Earth, which is a lot of time to research and develop innovations to enable us to get off Earth. But it's still a very finite amount of time, and these innovations aren't going to just happen over time without humans trying to come up with solutions to problems that get in the way of a goal. Dunno if terraforming is the right idea, but certainly some form of self-sustaining human colony on Mars seems like a reasonable intermediate goal for motivating the necessary innovations.

> What Robinson is highlighting with his trilogy about colonizing Mars, perhaps the ultimate scientific endeavor, is that unless this changes, the science is not going to get done properly in the real world. As Miguel Unamuno once said, perhaps apocryphally, vencer no es convencer (to defeat is not to convince). The strain of liberal (and perhaps now woke) thought that currently dominates universities is not going to be able to beat the world into submission to its ideas, it has to learn how to participate in the political process and convince people (and perhaps be convinced in turn).

(Bolding mine) I don't think anyone on any side of politics is particularly good at the part I bolded, but certainly the last 10 or so years have been transformative for me in learning how unimportant this was to self-described liberals. Of course, liberalism doesn't necessarily imply free exchange of ideas and discourse, but it's certainly something I used to associate with them, and too many times I'd hear from a friend about how he went into, or wanted to go into, an argument with someone with [wrongthink] ideas, armed with or looking for some tactic to cut through that person's defenses in order to convince them, without also looking for tactics by which to allow that person to convince him that [wrongthink] is actually correct.

Going into a conversation or argument looking to convince someone else without allowing for the possibility of oneself being convinced in reverse is just not a winning strategy unless your goal isn't the truth and you have overwhelming force on your side to enforce what you believe is right anyway. Because people can tell when they're being lectured to instead of being engaged with, especially in the long run.

Which reminds me of 2 separate but related phenomena that I keep seeing over and over among the woke left. One is that of "wokeness didn't fail, it was failed by the bigoted populace that was just too bigoted to accept it." This is just a continuation of the "feminism didn't fail, it was failed by the misogynistic populace that was just too misogynistic to accept it," a common sentiment among feminists before "woke" as we understand it today was a common term. I see this commonly enough among both the left and the right and in non-political contexts as well. People love avoiding accountability and blaming others, everywhere and in all contexts.

However, it is entirely and only the responsibility of the ideologues to make a convincing case for their ideology, when that ideology requires mass buy-in from large swathes of society due to the severe societal changes it pushes; any failure of the ideology to take hold is entirely the ideologues' fault. You'd think that these ideologues would be motivated to realize this, in order to more effectively push their ideological changes in society (that they don't seem to indicates either that they care more about feeling righteous than about accomplishing meaningful political change, or that they genuinely believe they have overwhelming force, or both). It's like how it's always and only the movie studio/marketer/etc.'s fault when a movie bombs, even presuming that it bombed due to society being so filled with bigots who were bigoted against the movie's message/actors/directors/marketers/etc., since no one has an obligation to give money to movie studios.

The other phenomenon is that of ideologues gutting credible organizations and wearing them as a skin suit in order to launder their ideas through the inertial credibility of the organization while the rest of the world catches up and notices. Academia is the obvious one that people are talking about right now here, but also mainstream journalism and even fictional media, with stuff like the Hugo awards for science fiction, or the Star Wars film franchise, or game companies like Bioware or Bungie. These organizations huff and puff as if people still respect them like they did before the more recent ideological takeovers, without seeming to recognize that you can only wear the skin suit for so long before people notice that the underlying thing isn't delivering on the promise of the label and adjust their credibility ratings accordingly.

In both of these, there's this implicit idea of getting to do whatever one wants, followed by apparent confusion at the obvious consequences imposed by uncontrollable external forces. You can choose your message, but you have no vote on how other people respond to it. If you want other people to respond to your message in a certain way, it is only and entirely your responsibility to sculpt your message to get the results you want.

As much as I hate this analogy, it reminds me of the "nice guy" phenomenon that was all the rage in feminist think pieces about a decade ago, referring to men who appear to believe they're entitled to sex for following all the instructions they were given for attracting a woman, and who lash out at women when the sex isn't delivered. Instead of taking responsibility for his own failures (i.e. believing the instructions he was given, instead of understanding that part of the test is correctly interpreting those instructions) and fixing them, such a man just blames women for not filling their role. The opposite-gender counterpart would be a woman in her 30s or 40s who had her sexual fun with a large set of male partners while also building her career, blaming high-value men in her age range for not fulfilling their role of finding her attractive and instead going for younger, less successful, less sexually experienced women. In neither case does blaming others actually help the person in question, not without overwhelming force to enforce it (which has arguably been the case over the last few decades for the latter), and in both cases, taking accountability for one's own failure and learning from it seems to be - to me, anyway - the method most likely to bear fruit. But people hate taking accountability more than they love their ideology winning.

> Do you expect demands of political loyalty to result in better science when they are coming from the nationalist right rather than the woke left? What would it even mean for academia to place America first?

Both would result (and have resulted) in worse science than no political tests, but almost certainly some sort of jingoistic nationalist-right America-first political test would result in better science than the woke left's. The woke left has, as its basis, a rejection of concepts like "objectivity" and "logic," which are pretty fundamental to doing science. I expect that testing for the nationalist right would filter out more intelligent people, but filtering for the woke left selects for more people who are willing and able to reject the fundamental basis of science. Filtering for scientists based on their commitment to the woke left is like a straight guy going to a lesbian bar to hit on women. You've pre-filtered specifically for people who have made visible commitments to behavior that is specifically antithetical to the role they're supposed to fill.

> If the main observable action when in power is to further the downward trend against academic freedom, why should anyone trust the claims being made? Actions speak louder than words after all.

If. Notably, that is not the main observable action, since academic freedom isn't being suppressed by defunding academic organizations that violate civil liberties law or by defunding academics that support explicitly anti-academic ideologies. Even if academic freedom were being suppressed, most people don't observe academic freedom as some sort of scalar value that increases when the sum of all academics practicing their academic freedom goes up or something. These are vectors where the specifics matter, and, as such, to say that this is the "main observable action" - even presuming that it were an observable action in the first place, which it isn't - is wrong.

> If we want academic freedom we should make moves towards academic freedom, not be indistinguishable from the censors

I don't think most people have a difficult time distinguishing between the behavior of Trump and his ilk in this context and the behavior of the censors who have been running roughshod through academia's veins. Notably, this does make moves towards academic freedom, by punishing organizations and people who have demonstrated and/or made commitments to suppressing academic freedom. If we want academic freedom, we should punish such people so as to provide an incentive not to do it further.

And empirically, one method that has absolutely not worked at all for increasing academic freedom - in fact, it has only resulted in things getting worse and worse over time until today, when academics not being free has become such common knowledge that academia has substantially discredited itself as a source of truth - is to not punish these people when you have power.

> Oh so what you're saying is that the Dems should go nuclear next election and cut funding for all conservatives unless they go woke and we should go into an arms war of being the Serious Threat each time one group is in power?

I mean, I'm assuming Dase is Republican or anti-Dem, and I'd guess they'd be absolutely for this, though I'm not sure "should" is the right term to use. As a Democrat, I would say they absolutely shouldn't do it, at least from a completely cynical and selfish perspective. Woke ideas are unpopular enough nationally that Dems adopting an undeniable "any government function that's not woke must be destroyed" policy would severely hamper their electoral prospects nationally.

> I could as easily argue that no religious people should be allowed to work in STEM, because if they believe in miracles, their epistemology is clearly compromised in a way that is fundamentally incompatible with scientific truth-seeking.

You could, but this would be a bad argument and fundamentally very different from the one I laid out. This is just an attempt at equivocating between very obviously different things. Believing in miracles indicates shoddy epistemology, but it doesn't explicitly commit oneself to rejecting the very idea of objective reality or logic. People can be shoddy in their reasoning, shoddy in their observations, etc. Academics can be and often are, because they're humans like anyone else. We should hold them to high standards, but not inhumanly high standards. Never making an epistemological error, especially when it comes to things in religious life that can be compartmentalized away from one's academic and professional life, is an inhumanly high bar. Never signing off on a document that supports an ideology that explicitly rejects the very basis of one's professional academic endeavors isn't an inhumanly high bar.

I do guess that religious people likely, on average, make for less effective STEM academics, but I think empirical evidence indicates that whatever handicap they have isn't that severe, considering the achievements made by religious scientists and engineers. If we had enough qualified atheists on hand to fully replace the current religious STEM academics, it could be worth the transaction cost, though I think the effects of introducing a religious test would generally be severely negative.

However, if an evolutionary biologist or astronomer or geophysicist loudly and proudly signed on to Young Earth Creationism, then that would be more analogous to this situation (though not quite, since YECs haven't practically taken over academia like this ideology has, and YEC is merely one "theory" (lol) about reality, rather than an entire epistemology of how we understand reality itself). The core beliefs of YEC are just fundamentally incompatible with our academic understanding of these fields in a way that does raise reasonable questions about qualification to do the job, in a way that merely "being religious" doesn't. Even then, one can reasonably argue that someone's ideological commitment to YEC should be excluded from consideration of their work as an evolutionary biologist, because their ability to perform [task] isn't hindered by YEC. But that's a different argument than saying that this is just as much "cancel culture" as firing someone for "being religious" or whatever.

You are fighting the hypothetical in a way that seems in bad faith. The ideology in question that refuses to be named does not share this characteristic of being "anywhere near as far away from taking over US as Nazis or Communists;" it has already taken over the institution in question, i.e. academia, and if it hasn't, then it's certainly caused severe transformations to it, with plans for even more. If signing off on Nazism or Communism is "innocuous" only or primarily due to the circumstance of those ideologies being so weak as to be unworthy of consideration, that certainly doesn't apply to this real case.

> This is isomorphic to left-wing cancel culture

It isn't, though; that's the thing. Presuming Tao were being targeted (something which I don't think is evident, but we can presume it), it would be for signing off on an ideological document that commits him to an ideology that explicitly rejects things like objectivity and reason as tools of White Supremacy for oppressing minorities; the ideology openly and proudly prefers personal testimony labeled as "lived experience" for ascertaining "their truth," which is held to be just as valid as anyone else's. This is fundamentally incompatible with mathematics, which relies on logic to make objective statements, or generally with the academic pursuit of truth, at least for commonly understood meanings of "math" and "truth." Now, people can paper over and ignore or mitigate fundamental incompatibilities for a long time, but likely not forever, and so such ideological commitments are a major handicap to an academic's ability to credibly produce truth.

This is the general problem that this ideology runs into when trying to claim victimhood of cancel culture. By its very nature, the ideology is about redefining and subverting our understanding of basic concepts like "truth" or "reason" in a way that's incompatible with academia as we know it. And if academia were transformed according to the ideology, it would no longer be academia in terms of the functions it serves in our society (i.e. knowledge generation & education), but rather a church. And so rejecting ideologues of this stripe from academia isn't cancel culture in a way that's symmetrical to the now-traditional left-wing cancel culture, which has to do either with opinions that are orthogonal to the person's ability to do a job or with chains of "logic" that fall apart under the smallest scrutiny (e.g. this CEO disagrees with me on gay marriage, which means he must have bigoted antipathy, possibly subconscious, against gay people, which means he cannot be relied on to be their boss in a fair way).

Now, one could argue that the benefits of these ideologues, given their ability to still pursue truth thanks to compartmentalization and cognitive dissonance, outweigh the harms of them also laundering ideological falsities under the label of truth, as well as the harms of the continued self-discrediting of academia due to placing trust in people who are ideologically committed against fundamental principles of academia, and that therefore these people ought not be "canceled" from academia. The strength of that argument would depend heavily on the specifics of the benefits and harms. But that's a different argument than one around symmetry.

> How are we, as a society, supposed to do any universally-beneficial or long-term research with politicization? When one side demands loyalty, then the other side attacks you for said loyalty, how do we ensure the safety of, if not ongoing, then at least future research?

This appears to be a dilemma. We can have universally-beneficial/long-term scientific research, or we can have politicization of science, but we can't have both at the same time. And this isn't a black-or-white thing, but rather a spectrum where one trades off against the other. I don't think the politicians and ideologues can be counted on to keep their grubby fingers off the superweapon that is the credibility that comes with the label "science," and so if such research is a higher priority than politicization, then the way to accomplish this would have to be for the scientists and academics to reject the politicization from within. The only alternative is that it doesn't get accomplished at all, and we get corrupted, non-credible "research" that serves some ideology at a cost to society at large.

> If one is saying "just add this line of text to your grants" and the other is saying "we will destroy you and your ability to do science and math", I'm not sure why they'd start siding with the second.

Many - not all, but certainly many - scientists and academics in general care about discovering the truth and thus understand that being forced to add such a line to their grants (among many, many other things) fundamentally corrupts their ability to ascertain the truth in a way that lack of funding doesn't. Funding is a real problem, but money is fungible. Principles aren't. Different people will have different lines where they're willing to betray their principles for money.

Optimistically, academia has enough people committed to truth that they transform it from within so as to earn back enough credibility to justify public funding. Pessimistically, there are enough people ideologically committed and/or unprincipled in their commitment to truth that academia will choose to self-immolate. Which would be bad for everyone, but still better than pouring money into nonsense production that gets laundered as truth due to the inertia of pre-existing credibility. That's actively malignant to society in a way that dried-up funding isn't.

> Signing an open letter and writing an article that attacks Trump is pretty innocuous behavior, in my opinion.

Surely the contents of the open letter would matter, wouldn't it? Would signing an open letter committing oneself to help the 4th Reich take over the United States also be pretty innocuous?

Of course, this letter isn't that. Rather, it's an open letter espousing an ideology that's specifically anti-logic, which I don't think is innocuous for a mathematician. The most innocuous and, IMHO, quite likely explanation for his behavior is that he unthinkingly followed sociopolitical pressure to sign that document. And caring so little about what he puts his signature on that he's willing to sign off on a belief system that rejects the very basis of what he's studying is at least as concerning as it is innocuous. If a bus driver were known to openly support an ideology that rejected the notion of left and right, or red and green, the bus company would be justified in not considering that all that innocuous, even if the bus driver were merely doing it to look cool for his peers.

And the "endangering women" thing is even worse. Are there credible accusations of people abusing their trans status to rape or grope women in their protected spaces, above the base rate? This seems to be a moral panic like the D&D satanism thing.

It's actually a physics thing. The nature of the most common team sports in America is such that, if college-aged trained athletes attempted to play at the best of their abilities in a mixed-sex format, the odds of the women being injured due to inevitable contact with men who are far bigger and faster than them skyrocket relative to a women-only format. If we decided to mix the NBA and WNBA and have them play in a mixed format, that would also endanger women, i.e. the WNBA players. No rape or groping required or implied.
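(To put rough numbers on the physics, with illustrative figures of my own choosing rather than anything from sport science: the energy carried into a collision scales as ½mv², so modest differences in mass and speed compound quickly.)

```python
# Illustrative only: hypothetical athlete masses/speeds chosen by me to
# show how collision energy scales, KE = 1/2 * m * v^2.

def kinetic_energy_j(mass_kg: float, speed_m_s: float) -> float:
    """Kinetic energy in joules of a body with the given mass and speed."""
    return 0.5 * mass_kg * speed_m_s ** 2

ke_larger = kinetic_energy_j(mass_kg=100, speed_m_s=8)   # 3200 J
ke_smaller = kinetic_energy_j(mass_kg=70, speed_m_s=7)   # 1715 J

# A ~40% mass edge plus a ~15% speed edge nearly doubles the energy
# brought into a collision.
print(f"{ke_larger / ke_smaller:.2f}x")  # ~1.87x
```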

> We theorize about creating self replicating intelligent machines. We are, once properly aligned, self replicating intelligent machines.

This comment makes me feel like there's a scifi story or alternate universe somewhere where humans, on the cusp of inventing AGI, get invaded by intelligent aliens, somehow miraculously defeat them, and discover that raising and reproducing these aliens is actually much cheaper on a per-intelligence basis than building servers or paying AI engineers, leading to AI dev being starved of resources in favor of advancing alien husbandry. Conveniently, the AI label/branding could remain as-is, for Alien Intelligence.

> Especially since policy can always change. You don't wanna say something will happen only for the underlying causes to disappear underneath your claim.

Hedging oneself with careful verbiage about one's predictions of the future (which I hear are quite hard to get right) is indeed good practice. However, this argument doesn't make that case, because there's nothing wrong or shameful or embarrassing or negative at all about saying that something will happen if [underlying cause] holds true. That is a positive claim about cause and effect which could be proven false if the underlying cause continues to hold but the thing doesn't happen. Unlike saying something could happen, which is really just a nothing statement that is almost entirely unfalsifiable.

> I tend to perceive progressive strains of liberalism as making the assumption that civilization as they know it is the default state of humanity and you can’t really destroy it. It’s not “sacrifice survival for thriving” it’s “survival is a given, so let’s thrive.”

I perceive the same, but I disagree with that last sentence. One is the other: if you care so little about survival that you haven't done the research to learn just how unusual and precarious modern society is, then you're deciding that sacrificing survival for the sake of thriving is worth it.

> Human extinction is 100% inevitable.

I don't think anyone knows this with any meaningful level of confidence. The heat death of the universe through entropy is the only thing I can think of that could guarantee it, and I don't believe we have a complete enough understanding of physics and cosmology to state with 100% confidence that that's inescapable.

> Because of that I am sympathetic to the idea that acting on one's values is ultimately more important than survival. It's the same as preferring to live a beautiful short life over a pointlessly prolonged one in a state of senility.

This is perfectly cromulent, but also, I think most people would prefer to live a beautiful long life over a beautiful but pointlessly short one. And the thing about prolonging versus ending life is that it's asymmetrical: if you prolong life when human civilization is barely lumbering along in a state of senility, there's always the chance that the civilization becomes beautiful in the future; if it ends in a blaze of beauty, then no one ever gets to discover whether there was a way to have a prolonged, beautiful civilization. Believing that the end of civilization/humanity is worth it as long as one's own principles and values got met by the last generation requires a God-like level of confidence in the correctness of one's own values. Which points to faith.

Which is also perfectly cromulent! I just wish people would talk about this honestly and openly.