Culture War Roundup for the week of May 29, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I considered making this an "inferential distance" post but it's more an idle thought that occurred to me and a bit too big of a question to go in the small questions thread.

Namely: are the replication crisis in academia, the Russian military's apparent fecklessness in Ukraine, and GPT hallucinations (along with rationalists' propensity to chase them) all manifestations of the same underlying noumenon?

Without going into details, I had to have a sit-down with one of my subordinates this week about how he had dropped the ball on his portion of a larger project. The kid is clearly smart and clearly trying but he's also "a kid" fresh out of school and working his first proper "grown-up" job. The fact that he's clearly trying is why I felt the need to ask him "what the hell happened?" and the answer he gave me was essentially that he didn't want to tell me that he didn't understand the assignment because he didn't want me to think he was stupid.

This reminded me of some of the conversations that have happened here on theMotte regarding GPT's knowledge and/or lack thereof. A line of thinking I've seen come up multiple times here is something to the effect of: as a GPT user, I don't ever want it to say "I don't know". This strikes me as obviously stupid and ultimately dangerous. The people using GPT don't want to be told "sorry, there are no cases that match your criteria"; they want a list of cases that match their criteria, and the more I think about it the more I come to believe that this sort of thinking is the root of so many modern pathologies.

For a bit of context, my professional background since graduating college has been in signal processing. Specifically, signal processing in contested environments, i.e. those environments where the signal you are trying to recognize, isolate, and track is actively trying to avoid being tracked, because being tracked is often a prelude to catching a missile to the face. Being able to assess confidence levels and recognize when you may have lost the plot is a critical component of being good at this job, as nothing can be assumed to be what it looks like. If anything, assumption is the mother of all cock-ups. Scott talks about bounded distrust and IMO gets the reality of the situation exactly backwards: it is trust, not distrust, that needs to be kept strictly bounded if you are to achieve anything close to making sense of the world. My best friend is an attorney; we drink and trade war stories from our respective professions, and from what he tells me the first thing he does after every deposition or discovery is go through every single factual claim, no matter how seemingly minute or irrelevant, and try to establish what can be confirmed, what can't, and what may have been strategically omitted. He just takes it as a given that witnesses are unreliable, that the opposing counsel wants to win, and that they may be willing to lie and cheat to do so. These are lawyers we're talking about, after all — absolute shysters and moral degenerates, the lot of them ;-). For better or worse this approach strikes me as obviously correct, and I think the apparent lack of this impulse amongst academics in general and rationalists in particular is why rationalists get memed as quokkas. I don't endorse 0 HP's entire position in that thread, but I do think he has correctly identified some nugget of truth.

So what does any of this have to do with the replication crisis or the War in Ukraine? Think about it. How often does an academic get applauded for publishing a negative result? The simple fact is that in a post-modern setting it is far more important to publish something new and novel than to publish something true. Nobody gets promoted for replicating someone else's experiment or publishing a negative result, and thus the people inclined to do so get weeded out of the institutions. By the same token, I've seen a similar trend in intel reports out of Russia. To put it bluntly, their organic ISR and BDA is apparently terrible, bordering on non-existent, and a good portion of this seems to stem from an issue that the US was dealing with back in the early 2010s: soldiers getting punished for reporting true information. Just as the US State Department didn't want to be told how precarious the situation with ISIL was, the Russian MOD doesn't want to hear that a given battalion is anything other than at full strength and advancing. Ukrainian commanders will do things like confiscate their men's cell phones and put them all in a box in an empty field. When Russian bombers get dispatched to blow up that empty field, the last thing anyone in the chain of command wants to believe is that they just wasted a bunch of expensive ordnance. They want to believe that 500 cell-phone signals going dark equates to 500 Ukrainian soldiers killed. It's an understandable desire, but the thing about contested environments is that the other guy also gets to vote.

In short, something that I think a lot of people here (most notably Scott, Caplan, deBoer, Sailer, Yud, and a lot of other rationalist "thought leaders") have forgotten is that appeals to authority, scientific consensus, and the "sense making apparatus" are all ultimately hollow. It is the combative elements of science that keep it honest and producing useful knowledge.

Aren’t you tired of accusing rationalists of not caring about the things they care the most about? I can’t think of a group less prone to appeals to authority, or more aware of the replication crisis.

And again with an anecdote where your counterpart just comes off as obviously wrong. The guy doesn’t understand, then he lies about it. No one is encouraging this behaviour, so what lesson is there to be gained here?

As long as you’re free-associating: the Russians are quokkas, apparently, while 0HP and co, the edgy paranoid hysterical pessimists, are wise. Why then is there such affinity between them?

They’re very similar, and wrong in the same way. They systematically overestimate the likelihood of defection. Cooperation and honesty appear impossible, and lies are all they ever hear. What should the russians have done? Assume everyone up and down the chain of command was lying even harder than previously assumed? You can’t make chicken salad out of chicken shit.

Past a certain point of skepticism/assumed lies, you’ve sawed off the last epistemological branch you’re sitting on, sunk into the conspiracy swamp, and become a blackpill-overdose/Russian type, confused and afraid of your own (possibly fish-like) shadow.

Aren’t you tired of accusing rationalists of not caring about the things they care the most about? I can’t think of a group less prone to appeals to authority, or more aware of the replication crisis.

The problem lies the other way: they care too much, and by jingo do they go in for appeals to authority - the maths says it is so, ergo it must be so! I don't think the AI debate is balanced at all; on one side there are the "AI is gonna foom and kill us all!" doomsayers, and on the other are the "nonsense, AI will solve all the problems we can't because it will be so smart, and we'll have Fully Automated Luxury Gay Space Communism!" types, and both sides are expecting that to happen in the next ten minutes.

The real problem, as always, is the humans involved, not the machines. And we're already seeing it with people rushing to use GPT-model-whatever and blindly trusting that the output is correct - our lawyer with the fabricated cases is only the most egregious example - and forecasting that it will take all jobs (and make us obsolete)/(we'll be freed up for new jobs just like the buggy whip manufacturers got new, hitherto unthought-of, jobs) and the rest of it.

Prediction markets are another blind spot; the amazement that people would spend real money to manipulate results the way they wanted was a "what did you think was going to happen if we adopt these markets widely?" moment for me.

Rationalists are very nice people - and that's part of the problem. I think quokkas is unfair, but there's a tendency to think that just thinking hard enough will get you a solution that, when implemented, will work beautifully because everyone will act in their own self-interest and nobody will fuck shit up just because they're evil-minded or screw-ups. Spherical cow world.