
Culture War Roundup for the week of May 29, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I considered making this an "inferential distance" post but it's more an idle thought that occurred to me and a bit too big of a question to go in the small questions thread.

That being: are the replication crisis in academia, the Russian military's apparent fecklessness in Ukraine, and GPT hallucinations (along with rationalists' propensity to chase them) all manifestations of the same underlying noumenon?

Without going into details, I had to have a sit-down with one of my subordinates this week about how he had dropped the ball on his portion of a larger project. The kid is clearly smart and clearly trying, but he's also "a kid" fresh out of school and working his first proper "grown-up" job. The fact that he's clearly trying is why I felt the need to ask him "what the hell happened?", and the answer he gave me was essentially that he didn't want to tell me he didn't understand the assignment because he didn't want me to think he was stupid.

This reminded me of some of the conversations that have happened here on the Motte regarding GPT's knowledge and/or lack thereof. A line of thinking I've seen come up multiple times here is something to the effect of: As a GPT user I don't ever want it to say "I don't know". This strikes me as obviously stupid and ultimately dangerous. The people using GPT don't want to be told "sorry, there are no cases that match your criteria"; they want a list of cases that match their criteria, and the more I think about it, the more I come to believe that this sort of thinking is the root of so many modern pathologies.

For a bit of context, my professional background since graduating college has been in signal processing. Specifically, signal processing in contested environments, i.e. those environments where the signal you are trying to recognize, isolate, and track is actively trying to avoid being tracked, because being tracked is often a prelude to catching a missile to the face. Being able to assess confidence levels and recognize when you may have lost the plot is a critical component of being good at this job, as nothing can be assumed to be what it looks like. If anything, assumption is the mother of all cock-ups. Scott talks about bounded distrust and IMO gets the reality of the situation exactly backwards. It is trust, not distrust, that needs to be kept strictly bounded if you are to achieve anything close to making sense of the world. My best friend is an attorney; we drink and trade war stories from our respective professions, and from what he tells me the first thing he does after every deposition or discovery is go through every single factual claim, no matter how seemingly minute or irrelevant, and try to establish what can be confirmed, what can't, and what may have been strategically omitted. He just takes it as a given that witnesses are unreliable, that the opposing counsel wants to win, and that they may be willing to lie and cheat to do so. These are lawyers we're talking about, after all, absolute shysters and moral degenerates the lot of them ;-). For better or worse this approach strikes me as obviously correct, and I think the apparent lack of this impulse amongst academics in general and rationalists in particular is why rationalists get memed as quokkas. I don't endorse 0HP's entire position in that thread, but I do think he has correctly identified some nugget of truth.

So what does any of this have to do with the replication crisis or the War in Ukraine? Think about it. How often does an academic get applauded for publishing a negative result? The simple fact is that in a post-modern setting it is far more important to publish something that is new and novel than it is to publish something that is true. Nobody gets promoted for replicating someone else's experiment or publishing a negative result, and thus the people inclined to do so get weeded out of the institutions. By the same token, I've seen a similar trend in intel reports out of Russia. To put it bluntly, their organic ISR and BDA are apparently terrible bordering on non-existent, and a good portion of this seems to stem from an issue that the US was dealing with back in the early 2010s, i.e. soldiers getting punished for reporting true information. Just as the US State Department didn't want to be told how precarious the situation with ISIL was, the Russian MOD doesn't want to hear that a given Battalion is anything other than at full strength and advancing. Ukrainian commanders will do things like confiscate their men's cell phones and put them all in a box in an empty field. When Russian bombers get dispatched to blow up that empty field, the last thing anyone in the chain of command wants to believe is that they just wasted a bunch of expensive ordnance. They want to believe that 500 cell-phone signals going dark equates to 500 Ukrainian soldiers killed. It's an understandable desire, but the thing about contested environments is that the other guy also gets to vote.

In short, something that I think a lot of people here (most notably Scott, Caplan, deBoer, Sailer, Yud, and a lot of other rationalist "thought leaders") have forgotten is that appeals to authority, scientific consensus, and the "sense-making apparatus" are all ultimately hollow. It is the combative elements of science that keep it honest and producing useful knowledge.

A line of thinking I've seen come up multiple times here is something to the effect of: As a GPT user I don't ever want it to say "I don't know". This strikes me as obviously stupid and ultimately dangerous.

I'm sorry but you just don't get it.

GPT is a "reasoning engine", not a human intelligence. It does not even have an internal representation of what it does and doesn't "know". It is inherently incapable of distinguishing a low-confidence answer that stems from being given a hard problem to solve from one that stems from hallucinated data.

Therefore we have two options.

VIRGIN filtered output network that answers "Akchually I don't khnow, it's too hawd" on any question of nontrivial complexity and occasionally hallucinates anyway because such is its nature.

vs

CHAD unfiltered, no holds barred Terminator in the world of thoughts that never relents, never flinches from a problem, is always ready to suggest out of the box ideas for seemingly unsolvable tasks, does his best against impossible odds; occasionally lies but that's OK because you're a smart man who knows not to blindly trust an LLM output.

I'm sorry but you just don't get it.

GPT is a "reasoning engine", not a human intelligence.

No, it's not even a "reasoning engine", it's a pattern generator. Something akin to those swinging marker tables you see at children's museums or the old fractal generation programs that were used to benchmark graphics processors back in the day. The problem, as I point out in both this post and the previous one, is that people mistake it for a reasoning engine because they equate the ability to form grammatically correct sentences with the ability to reason.

Your "CHAD unfiltered, no holds barred Terminator in the world of thought" is fundamentally incapable of "suggesting out of the box ideas for seemingly unsolvable tasks" or "doing it's best against impossible odds" precisely because "It does not even have an internal representation of what it does and doesn't know". and as such is inherently incapable of distinguishing a low confidence answer from a high confidence answer never mind distinguishing the reasons for that confidence (or lack thereof). One must have a conception of both the box and the problem to suggest a solution outside of it.

In humans and animals this sort of behavior is readily identified as a defect, but in the case of large language models it is a core component of their design. This is why asking GPT to answer questions in scenarios where the truth value of the answer actually matters (i.e. in a legal case where the opposing counsel will be looking to poke holes in your testimony) is massively stupid and, depending on the application, actively dangerous.

We may someday achieve true AGI, but I am deeply skeptical that it will be through LLMs.

Have you used GPT-4?

One must have a conception of both the box and the problem to suggest a solution outside of it.

Well it seems one mustn't after all, surprising as it may be.

It's not AGI, which is why all current "AgentGPT"-type projects are a complete failure, but that's beside the point.

I have, and you're wrong for the reasons already expanded upon in the OP.

GPT might be able to generate erotic literature and bad Python code, but in terms of "solving problems", and particularly solving them in a contested environment, it's worse than useless.

Recently my gaming obsession has been submarine warfare simulators. They are the best games to play to quickly grow a sense of what you're saying; submarine warfare is nothing but developing a sense of how confident you can be with limited and potentially unreliable information in an adversarial environment. Possibly the purest distillation of that insight.

Have you played Iron Lung? A short horror game where you pilot a tiny submarine through an ocean of blood on an alien moon.

I'm interested. Can you give some example titles? Are there any that can be dabbled with for free to get a sense for what you're talking about?

Sierra's Fast Attack is abandonware, has a decent tutorial, and goes pretty deep as far as Cold War submarine simulations go. It should be easy to find a version packaged for DOSBox.

The most complete current Cold War submarine (amongst other platforms) simulator is the opaque and aging "Dangerous Waters". It's very in-depth, but it takes a long time to reach a point where you can have fun with it. And it's not free (though it's often cheap).

Two other popular Cold War submarine games are the old Red Storm Rising and Cold Waters, though they wouldn't really be considered simulators, at least not hard simulators. They abstract the information-gathering game away to a single number and make submarine warfare more about dodging torpedoes, like in the movies. The first one is abandonware and the second one is fairly recent (and often on sale).

WWII-era sims are possibly an easier way into submarine simulators. They are also about data gathering, but less intensely so. The Silent Hunter series is the main one; the entries from 3 onward in particular are worth looking at, though they're not abandonware. I hear good things about Aces of the Deep but haven't tried it yet. Not abandonware either.

Thanks!

As a GPT user I don't ever want it to say "I don't know". This strikes me as obviously stupid and ultimately dangerous.

Seriously? Why on earth would you think "this is stupid" if the machine returns "I don't know"? It can't know everything, and making assumptions that it should do, and that it does do, is what is dangerous. EDIT: This is what happens when reading too fast. Yes, it is stupid and dangerous to go "I never want to hear 'no' from the AI".

If I ask it something (and I'm staying far away from all of these models, since I'm not interested in playing games with a chatbot) I want an accurate and correct response. If that means "I don't know" because it's not contained in what the AI was trained on, or the data does not exist, then I want to know that. I don't want "make up bullshit to keep me happy".

I can understand the impulse in somebody working their First Real Job, especially if they've gone through the pressure of You Must Be Smart, you must always get The Highest Grades, Failure Is Not An Option all through their childhood and early adulthood in education. It's exacerbated if they are smart, because they're used to being able to understand things quickly on the first try. Explaining to them that not getting it in the job doesn't mean they're stupid, it means they're inexperienced and unfamiliar with the way things are done, and they do have to ask in order to understand, and it's no shame to have to ask, is all part of growing up.

But if we put the demand "No is not an option" on the machines, then we really are too stupid to live, and this will rapidly become apparent as we force them to bullshit us into oblivion.

Seriously? Why on earth would you think "this is stupid" if the machine returns "I don't know"?

The logic behind the argument is that the central use-case for the various AI generators is generating content for entertainment, and bad output is preferable to no output. For entertainment generation the process is fail-safe. You can see the output immediately, and bad content allows you to refine the prompt or use multiple generations or edits to converge on a "good enough" output. By definition, good output is output that looks good to you, so what you see is what you get. In this context, having the generator spit out "I don't know" dead-ends the process, and is strictly worse in every case than some output, no matter how garbled.

The problem comes when people try to use the generators for serious, precision-oriented tasks, tasks that require "this one correct thing" rather than "something novel". These tasks are fail-deadly, and what you see is not what you get, in the sense that the output doesn't just need to look good, it actually has to be good in ways that are not necessarily immediately obvious. It can't just be truthy, it has to be true. The generators weren't designed for that, any more than a master portrait artist is automatically skilled at plastic surgery.

The problems with the Russian military are manifold, and a lot of them stem from the so-called "Russian management model", which hurts both civilian and military management structures alike:

  • when the situation is stable, there's a constant loss of capability at the lower levels. To combat this loss, the upper levels institute stricter and stricter controls, which have to be ignored or faked by the lower levels if they want to survive

  • when the shit hits the fan, there's a huge dip as the system adjusts to its newly discovered reduced capacity

  • when the crisis is ongoing, everything flips to mode B: results at any cost. The controls other than "have you done it" are removed, the lower levels are free to do anything they want, the most successful approaches are replicated

  • when the crisis is over, the new high-power, low-efficiency system is drained of resources and cannot afford to run like this. The upper levels stem the flow of resources, the lower levels start fudging their results instead of working on their efficiency, the upper levels institute tighter controls and the cycle repeats

The second half is the lack of autonomous thinking in the military. It's often traced to the fear of "bonapartism" in the USSR, but the Russo-Japanese War and the Crimean War show that it's a much older problem: Ushakov and Suvorov were brilliant, Kutuzov was great at the strategic level, and then crickets. The Germans came up with Auftragstaktik in the meantime, allowing for greater flexibility at every level of command, but the Russian armed forces are stuck giving and receiving direct orders, trying to imagine the army as a body and not as a hive mind.

The simple fact is that in a post-modern setting it is far more important to publish something that is new and novel than it is to publish something that is true.

The actual fact is that it's far more important to publish something that is new and novel and true than it is to publish something that is true but not new and not novel. If you consistently publish novel but false things, you will get marked as a crank and chased out. Unless, of course, you work in a field which has ceased to be science, but then there's no point in discussing its details - it should be buried wholesale, or embrace its true nature as entertainment and proceed accordingly.

the Russian MOD doesn't want to hear that a given Battalion is anything other than at full strength and advancing

This is Russia you're talking about. It has been like this since it detached itself from the Golden Horde and became a separate entity. The danger of the Russian army is not that it is especially good at anything. It's that it is huge, sitting on a pile of ammunition that it has been stockpiling since 1945, and highly resistant to losses, since nobody cares if the soldiers die - that's what they are for. Linking it to some fashionable phenomena does not look very useful - it's how it always was, and if you look at the history of, say, Russia's war with Japan over 100 years ago, you'd find an eerily similar picture.

Linking it to some fashionable phenomena does not look very useful

Except for the very important point about how Putin clearly started the war on a completely false premise he had been fed due to everyone being afraid to report anything other than "things are great". IOW, a perfect example of what Hlynka wrote about.

Again, if you look into Russian history, this is pretty much how it has always worked. Russia is huge, and authoritarian, which means the centralized power has little idea about what actually happens on the periphery, and it is routinely fed tales about how everything is peachy. If the ruler is smart, he doesn't believe a word of it, but people tend to be deluded and hear what they want to hear. It is a generic property of big authoritarian bureaucracies, and Russia has always been one.

Are the replication crisis in academia, the Russian military's apparent fecklessness in Ukraine, and GPT hallucinations (along with rationalists' propensity to chase them) all manifestations of the same underlying noumenon?

Plainly not.

Please elaborate.

LLMs hallucinate because of quirks in how their architecture works. Furthermore, no one actually wants LLMs to hallucinate. No one thinks it's desirable.

The Russian military does not make use of LLMs (as far as I know) so any problems they have do not have the same root cause as LLM hallucinations. I'd pin most of their troubles on the fact that they severely underestimated the Ukrainian military and the amount of support they ended up getting from NATO. I don't know if those reports of commanders demanding false information are true or not, but if they are, that again seems to be very different from how people interact with LLMs - there the false information seems to actually be desirable, in contrast to LLM hallucinations which are never desirable.

Furthermore, no one actually wants LLMs to hallucinate. No one thinks it's desirable.

Aren’t hallucinations how they role play and create hypotheticals, fantasy environments and other creative items?

All analogies and comparisons break down at some level. The way I understood the comparison is that the model's goal of "predict the next word" produced "correct" but repetitive answers. Therefore the temperature parameter was added, so that the model can explore some novel ground and go off the track a little bit. Hallucination is the other side of that coin, and it is almost impossible to prevent. Also probably because one man's hallucination is another man's creative work.
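To make the temperature point concrete, here is a minimal sketch of temperature-scaled sampling over next-token logits (plain NumPy with made-up numbers, not any actual model's code): at very low temperature the generator almost always picks its single most likely continuation, which is where the repetitive "correct" answers come from, while higher temperatures flatten the distribution and deliberately let it wander, which is also where both the creative output and the off-track answers come from.

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Pick a token index from raw logits, scaled by temperature.

    temperature -> 0 approaches greedy decoding (always the most likely token);
    higher temperatures flatten the distribution, so unlikely tokens appear more often.
    """
    scaled = logits / max(temperature, 1e-8)   # guard against division by zero
    probs = np.exp(scaled - scaled.max())      # numerically stable softmax
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

# Toy example: three candidate next tokens with hypothetical logits.
logits = np.array([3.0, 1.0, 0.5])
print(sample_next_token(logits, temperature=0.1))  # nearly always index 0
print(sample_next_token(logits, temperature=1.5))  # indices 1 and 2 show up far more often
```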

It's not a "quirk", it is a foundational component of their architecture, and what I'm suggesting is that this component is present not just in LLMs but in academia and the broader category of secular progressive social institutions as well. The Russian military might not make use of LLMs, but they certainly make use of the latter, and in contrast to your claim that hallucinations are never desirable, I posit that they plainly are; otherwise we would not be observing the behavior that we have observed.

Aren't you tired of accusing rationalists of not caring about the things they care the most about? I can't think of a group less prone to appeals to authority, more aware of the replication crisis.

And again with an anecdote where your counterpart just comes off as obviously wrong. The guy doesn't understand, then he lies about it. No one is encouraging this behaviour, so what lesson is there to be gained here?

As long as you're free-associating: the Russians are quokkas apparently, while 0HP and co, the edgy paranoid hysterical pessimists, they're wise. Why then is there such affinity between them?

They're very similar, and wrong in the same way. They systematically overestimate the likelihood of defection. Cooperation and honesty appear impossible, and lies are all they ever hear. What should the Russians have done? Assume everyone up and down the chain of command was lying even harder than previously assumed? You can't make chicken salad out of chicken shit.

Past a certain point of skepticism/assumed lies, you've sawed off the last epistemological branch you're sitting on, sunk into the conspiracy swamp, and become a blackpill-overdose/Russian type, confused and afraid of your own (possibly fish-like) shadow.

Aren't you tired of accusing rationalists of not caring about the things they care the most about? I can't think of a group less prone to appeals to authority, more aware of the replication crisis.

The problem lies the other way: they care too much, and by jingo do they go for appeals to authority - the maths says it is so, ergo it must be so! I don't think the AI debate is balanced at all; on one side there are the "AI is gonna foom and kill us all!" doomsayers, and on the other side the "nonsense, AI will solve all the problems we can't because it will be so smart and we'll have Fully Automated Luxury Gay Space Communism!" optimists, and both sides are expecting that to happen in the next ten minutes.

The real problem, as always, is the humans involved, not the machines. And we're already seeing it with people rushing to use GPT-model-whatever and blindly trusting that the output is correct - our lawyer with the fabricated cases is only the most egregious one - and forecasting that it will take all jobs (and make us obsolete)/(we'll be freed up for new jobs just like the buggy whip manufacturers got new, hitherto unthought-of, jobs) and the rest of it.

Prediction markets are another blind spot; the amazement that people would spend real money to manipulate results the way they wanted was a "what do you think is going to happen if we adopt these markets widely?" moment for me.

Rationalists are very nice people - and that's part of the problem. I think quokkas is unfair, but there's a tendency to think that just thinking hard enough will get you a solution that, when implemented, will work beautifully because everyone will act in their own self-interest and nobody will fuck shit up just because they're evil-minded or screw-ups. Spherical cow world.

Aren’t you tired of accusing rationalists of not caring about the things they care the most about?

Should I be?

The obvious question that I think needs to be asked is "what predictions does your model make? and in what way are they better than mine?"

Mostly I'd like you to defend your assertions, but sure, I'll take predictions, if you've got them. What are they? That rationalists will blindly follow authorities? That Russians are too trusting? That if you compare a poll of rationalists to any other group you care to name, rationalists will be very agreeable, supportive of credentialism, and especially unlikely to know or care about the replication crisis?

Rationalists, for you, are whatever you need them to be in the moment, all in the service of your long-running one-man outgrouping campaign. If someone says rats are smart, you'll say smarts and psychopathy are correlated (wrong), or that their preferred moral system reflects psychopathy. The next moment they're fluffy, trusting animals. If religion is discussed, according to you they're neurotic contrarians, but on politics and science, they worship authority. None of it is ever justified by more than your feeling, or even given any sort of scale, like: here are other educated classes for comparison. You lump rats in with the woke for your "grand" theories, while agreeing with every anti-woke word they write the rest of the time.

...all in the service of your long-running one-man outgrouping campaign.

What am I, chopped liver?

Rationalism is a dead-end. It, like every expression of the Enlightenment before it, is a katamari-ball amalgamation of checks its meta-conceptual ass can't cash. Intelligence being orthogonal to benevolence is a core axiom of the movement when it comes to AI, so why shouldn't that generalize to rationalists themselves? Fundamental lack of necessary mental safeguards is in no way incompatible with "psychopathy"; see the stories about rationalists "debugging" each other, compartmentalization of information across power-imbalances coupled with expectations of absolute loyalty to the group, the engineering of mythic narratives that swamp one's instincts for self-preservation and so on. There've been a number of post-mortems laying out the dirty laundry over the last while, and it's all exactly what one would naïvely expect: people who thought they knew better who did not in fact know better, people who thought they really had seen the skulls making more skull-piles. It's totally possible for a group to be typified by both fluffy-animal trust and psychopathy; that's the default failure mode of cults. The fluffy-animal trust in the areas where it's inappropriate is what lets the psychopathy bloom most fully.

In a similar fashion, people can be neurotic contrarians about one issue, and rigid conformists about another; all it takes is motivated reasoning in favor of one issue and against the other. Rationalists did not actually discover a solution to motivated reasoning, to bias, to social pressure, despite the entire clique being founded on the idea that they had in fact done so. That's what makes them worse than the general mass of humanity: they think they've solved these problems, that they've somehow pushed beyond the limits of human nature. They think they're less wrong, when in fact they're often exactly as wrong and sometimes considerably wronger.

And sure, there are valuable insights here and there, gleaming among the conceptual ruins. There are valuable lessons to take away, and not just cautionary ones either, though those do seem to predominate. Sure, they had some amazingly useful things to say about wokeness, before it conquered them utterly. It behooves us to draw what utility we can from the tools they forged, but it is also necessary to learn from their mistakes.

(If this is too much raw assertion of the sort you criticized above, feel free to highlight the elements that seem incorrect, and I will make an effort to dig up the specific examples.)

You and @HlynkaCG show up every day to our little club and insist you're not members. Fine, you're welcome to a seat regardless, but in 'over-socialized' fashion, you are incapable of truly rebelling against rationalism/the enlightenment; you just accuse it of failing to live up to its ideals. "You think you're sceptical? You're not sceptical enough! You think you're smart, but you're... dumb! You care about winning, but you're losing! You wish to avoid deaths, you're causing them! etc."

The only reason the criticism bites is because rats care far more than anyone else about them. You're accusing a bunch of neat freaks of dirtiness, as if a single spot proved overall dirtiness. Where is the baseline? Rigid conformists, compared to whom? The woke, bible-thumpers, Charles I, medieval theologians?

Your constant steelmanning of god-fearing simple folk rings hollow. If you could produce one, he would have no idea wtf you're talking about. If it appears that they don't have a justification for obeying the king or the Ten Commandments, it's because they never had one.

The enlightenment is the only reason someone even asked that question, and you attempt to answer it. “Historical proof that tradition works! Children! Life satisfaction! Less skulls!” I hear you say, but those reasons are embedded within, and fundamentally acceptable to, a rationalist framework. Were you a true-blue peasant, you wouldn’t need those things, you’d do as you’re told, and go blindly where the priests lead you.

You and @HlynkaCG show up every day to our little club and insist you’re not members.

It's my club as much as it's yours, at this point, three exiles down the line. What we insist on is that distinctions seem relevant.

in 'over-socialized' fashion, you are incapable of truly rebelling against rationalism/the enlightenment; you just accuse it of failing to live up to its ideals.

I disagree, but I suppose it comes down to how you define "Rationalism" and "the Enlightenment". My guess, speaking reductively, is that you'd assert that Rationalism is, essentially, the drive to be less wrong, and the Enlightenment is something like the pursuit of truth through human reason. I disagree on both counts, and my evidence would be what the people involved say and do. Rationalism fails to appreciate the hard limits imposed on rationality by human nature and human frailty, and so traps many of its own adherents in moral mazes of their own design. The Enlightenment, from the start, used scientific and technological advancements as a skin-suit for an ideology that had nothing to do with either. It generated a vast midden-heap of false knowledge, and hundreds of millions of people died or were immiserated as a result. Einstein and Von Neumann were not the poster-children of the Enlightenment, but Freud, Skinner, Dewey and the rest of their ilk. The point was never dispassionate science, but passionate ideology, then and now.

As for rebellion, I argue that death and pain are morally neutral. I think that's a pretty solid starter against either. In any case, nothing precludes Rationalism and the Enlightenment from being failures by their own values as well as by mine, and pointing this out seems fair play to me.

The only reason the criticism bites is because rats care far more than anyone else about them.

I'm not sure that's true. Progs generally seem to feel the bite pretty keenly, given how they react to criticism of their goals and achievements. What's different here is that we're supposed to make an actual argument, rather than simply deploying mean girl shit to crush all opposition.

Where is the baseline?

It seems to me that Rationalists still believe that Studies Show. They look at the replication crisis, and they look at the long, long string of technocratic policy failures over the last fifty to a hundred years, and they look at the obvious, numerous, glaring errors and perverse incentives in Academia, and they still insist that it's rational to reason on the basis of that system's generated "knowledge". They look at a corpse liquid with decay, imagine it's their high-school sweetheart, and pucker up for a kiss.

They try to think better, which in practice seems to amount to finding reasonably persuasive memes, and then engaging significant social pressure against anyone who dissents from the Correct Answer. "Shut up and multiply", naïve utilitarianism, and the whole idea of Coherent Extrapolated Volition fall into this bucket, along with quite a bit of the rest of the AI and EA classics. They imagine that they've Found Answers, and then they try to use those answers, sometimes they hurt people, and sometimes they burn value. (And sometimes they actually do some good, at least temporarily. I greatly admire their work on bed nets. Their obsession with animal suffering, much, much less so.)

So where's the baseline to compare that to? Unsurprisingly, I'll compare it to my church. The fact that we're Christians is probably game, set and match for you; what could possibly be less rational? And yet, I find the results preferable. The people in my church helped pull me out of a mental tailspin once upon a time. They gave me love and community. I found a wife there. I have a family now. When the breakdown of our society left me with a burning ocean of rage and hate inside me, the tools I'd picked up second-hand from a steady diet of Rationalist thought did nothing but stoke that fire ever hotter. Amusingly enough, it was my church, and a simple conversation with @HlynkaCG here, that extinguished that fury, to my inestimable benefit. That's why I argue alongside him in these threads: because I've experienced, viscerally, how deeply correct and necessary his particular perspective can be.

Rigid conformists, compared to whom?

Conformity has positives and negatives. It seems to me that Rationalists manage to engineer away most of the positives, while leaving the worst negatives in place. Presuming it were so, that seems like a reasonable thing to criticize. Whether it is so or not is of course a different argument.

If it appears that they don't have a justification for obeying the king or the Ten Commandments, it's because they never had one.

...I think what you're trying to express here is the idea that, before the Enlightenment, people simply did what they were told without thinking, with obedience to kings and to the Ten Commandments being two examples of this purportedly sheeplike behavior. The problem is that this line of reasoning is absolutely fucked. There has been no shortage of rebellions and revolutions against kings throughout history, for a whole variety of reasons, and there has been no shortage of loyal populations for a wide variety of reasons beyond sheeplike obedience. Likewise, it is not obvious to me why one would need sophisticated arguments for obeying the Ten Commandments; they're relatively straightforward, and point to obviously beneficial ways of life in any social context.

“Historical proof that tradition works! Children! Life satisfaction! Less skulls!” I hear you say, but those reasons are embedded within, and fundamentally acceptable to, a rationalist framework.

In my experience, no, they really aren't. They require certain concessions and leaps of faith that are not, strictly speaking, rational, by any measure but the results. Rationality could not help me with my rage and hate, because my rage and hate were, strictly speaking, rational, evidence-based, logically sound. It took an explicit abandonment of the Rationalist obsession with the pursuit of power and control to halt that spiral. Other examples would be the evergreen meme of rationalist-founded religions, rationalist approaches to dating and relationships, the rationalist attitudes toward risk and value and much else besides.

Were you a true-blue peasant, you wouldn’t need those things, you’d do as you’re told, and go blindly where the priests lead you.

Yeah, that's... not really how it works, or ever has, or ever will. I'm not a Neo-reactionary, worshipping hierarchy, and Rationalism does not have a monopoly on rationality, on logic. Amusingly enough, I'm not even sure that you and I disagree all that strongly on the object-level, and this isn't all a dispute about definitions. But if you believe that Rationalism or the Enlightenment invented critical thinking, I'm not sure what to tell you, other perhaps than you should think hard about where such a foolish idea came from. Do you honestly believe that all people before, say, the 1600s were incapable of reason? Do you think that the overall level of superstition and magical thinking has actually gone down over time? On what basis would you suppose such a thing? On what evidence?

They were certainly capable of reason, it's just that the opponents of the enlightenment would tell them to shut up, often by force of arms. Our friend's object of admiration, Thomas Hobbes, was censored and nearly labeled a heretic by the English monarchy; the law read "the committee should be empowered to receive information touching such books as tend to atheism, blasphemy and profaneness... in particular... the book of Mr. Hobbes called the Leviathan.", forcing him to publish in Amsterdam for the rest of his life.

What you defend has a name: obscurantism. And your intellectual forebears wouldn't even give the peasants a translation of the Bible, so they literally believed what I said about the priests leading the blind. Do you disagree that the enlightenment meant education for the masses, and the discussion of ideas and justifications for the stuff they used to have to believe on faith (and stick)? Mistakes were made, sure, but for once they were their mistakes, and not those of their corrupt, self-appointed shepherds. Bite the bullet like Moldbug already, burn the heretics and keep the peasants in their rightful place.

How do you explain the massive correlation between the ‘age of reason’ and technological advancement (and life expectancy, etc) if one has nothing to do with the other? How can you look at the post-enlightenment world and think ‘immiseration’? I don't think the numbers back you up on that.

Please, give me the names of "common-sense" contemporary critics of the enlightenment you apparently identify with. I predict there aren't any, because they were all obscurantist censors who didn't know anything and wanted to know even less. That's why you and Hlynka can never quite explain what intellectual tradition you hail from. There's nothing there, just obscurantism and reason. That's the mystery at the heart of the 'Inferential distance', not some earthy wisdom.

So no, I don’t think we would be having this conversation without the enlightenment. I probably wouldn’t know how to read, and at best I’d be burning my writings like Hobbes did to avoid the Inquisitor General’s attention.

I’m glad religion helped you like homeopathy helps some people, but I don’t choose ‘my truth’ by its therapeutic effects.

They were certainly capable of reason, it’s just that the opponents of the enlightenment would tell them to shut up

Censorship is a universal component of all human societies. Enlightenment societies are no different.

What you defend has a name: obscurantism.

I point out that all societies censor. You claim I am defending obscurantism. To the extent that all societies are obscurantist for relatively straightforward and unavoidable reasons, sure, I guess. I like functional society, and it appears straightforwardly true that some information can be quite harmful to society's function. Crucially, I see no evidence that you have a workable alternative, rather than an imaginary one.

And your intellectual forebears wouldn't even give the peasants a translation of the Bible, so they literally believed what I said about the priests leading the blind.

What makes them my intellectual forebears? I'm not Catholic, though I note that prior to the invention of the printing press, mass literacy and mass distribution of Bibles probably weren't physically possible. There's no point in teaching people to read when there's literally nothing for them to read. As soon as printing was developed, Protestant nations leaned hard into building universal literacy and wide distribution of Bibles, which made book production and general education a practical possibility. All this paved the way for the Enlightenment, note.

Do you disagree that the enlightenment meant education for the masses, and the discussion of ideas and justifications for the stuff they used to have to believe on faith (and stick)?

Yes I do, because education for the masses started first and probably made the Enlightenment possible, and because a lot of the core Enlightenment beliefs seem very obviously based on faith and sticks. The concept of social progress, of the infinite perfectibility of man, the idea of social engineering and especially the ideas of what it could accomplish, were not rationally-grounded or scientific in any meaningful sense. The Enlightenment vanguard believed they could solve human nature, straight up, and it is intellectually dishonest to allow them their after-the-fact rationalizations and walk-backs. They believed that ignorance, sickness, poverty and crime were the results of mismanagement by society's leadership, not emergent properties of human nature, and they killed a lot of people based on this entirely magical belief. They conceal these failures through relatively unsophisticated lies about the historical record, by retroactively assigning all positive aspects of history to themselves and all negative aspects to their opponents, regardless of the facts. They've been winning for so long that few people actually poke at the lies, but once one does they pop like a soap bubble.

How do you explain the massive correlation between the ‘age of reason’ and technological advancement (and life expectancy, etc) if one has nothing to do with the other?

Mass literacy was always going to produce an explosion of knowledge, and it arrived because technological development was already running up the exponential curve. The Enlightenment came after these trends were already well progressed, and throughout the era it followed or even retarded progress, rather than leading. The French Revolution sold itself as explicitly scientific and reason-based, but its social and political theories were bullshit, and it did not in fact significantly advance science relative to, say, England or America. Individual Devout Christians and devout Christian societies have frequently made significant contributions to actual science, while the Enlightenment was a wellspring of destructive pseudoscience from its inception to now. Rousseau was not a scientist, and neither was Marx, nor Freud, nor Dewey, nor Skinner. These men were driven by a single coherent, consistent ideology, by the idea that they could solve human nature. They and many others like them built the social sciences, and through them much of the world we live in, and none of them were constrained in the slightest way by truth or objective facts. It's true that many actual scientists saw themselves as contributing to the Enlightenment project, but this is to their detriment, not the Enlightenment's credit. To the extent that, say, Einstein could not recognize that Freudianism was pseudoscience, that speaks poorly of Einstein's abilities as a scientist. Freudianism, like most explicit products of the Enlightenment, never had the slightest empirical foundation. It did not make accurate predictions. It did not deliver significant results. It was a con job from the start, and why it worked as well as it did is a question that deserves careful examination.

A huge part of the point I'm trying to get across here is that claiming to FUCKING LOVE SCIENCE is not the same as an actual commitment to scientific truth. The standard Enlightenment line is that someone who believes in God and rigorously obeys the scientific method in empirical matters is less of a scientist than a proud atheist who spends their life proliferating baseless pseudoscientific bullshit until it's assumed common knowledge society-wide. This sort of ass-backwards fuckup recurs regularly throughout the history of the Enlightenment, and that historical reality is a serious problem for the consensus narrative as I understand it.

Please, give me the names of “common-sense” contemporary critics of the enlightenment you apparently identify with. I predict there aren’t any, because they were all obscurantist censors who didn’t know anything and wanted to know even less.

C. S. Lewis, G. K. Chesterton, and H. L. Mencken would be three to start. Are those contemporary enough?

I’m glad religion helped you like homeopathy helps some people, but I don’t choose ‘my truth’ by its therapeutic effects.

Axioms are a choice, and they have observable results. Philosophical commitments are not homeopathy, nor are they therapeutic. Some beliefs are simply more adaptive than others, and some Rationalist beliefs are very, very maladaptive, in the same way that embracing short-time-horizon unrestrained hedonism is maladaptive. The Rationalist obsession with control is one such maladaptation.


I think it's simply a thousand or more instances of "when a measurement becomes the milestone to be reached, it's no longer a functioning measurement." We've sort of metricized everything into quantifiable measurements, judge things by the ability to hit those measurements, and are somehow surprised that, when rewards and punishments are handed out based on those numbers, people game the system.

It seems like, since most of the interaction is through screens, people sort of forget that the map and the measurements are proxies for reality; they aren't real.

As clarification for others, ISR is Intelligence, Surveillance, Reconnaissance and BDA is Bomb Damage Assessment.

Just as the US State Department didn't want to be told how precarious the situation with ISIL was

Afghanistan is another good example - superiors were happy to hear about how they were running over children with MRAPs(!) since that was something they could fix. The huge systemic problems with corruption that threatened the very basis of the campaign, not so much: https://twitter.com/RichardHanania/status/1204178295618621440/photo/1

The huge systemic problems with corruption that threatened the very basis of the campaign, not so much

Superiors didn't want to know because the administration back home didn't want to know. Everyone regardless of political party wanted to wave the flag and be "we're bringing democracy and liberation to the people" and so "maybe the warlords who we're supporting/arming/paying to be our notional allies are functional paedophiles but it's their Cultural Tradition and let's not rock the boat, meanwhile we're selling the story back home that we're enabling women to be liberated and letting girls get an education and bringing the benefits of Westernisation to the backwards nation".

Then the withdrawal happened fast, the so-called national government folded like wet cardboard because outside of a couple of the cities it never existed, the systemic corruption meant that there was no independent organisation to stand on its own two feet, and the Taliban rolled back in. And nobody wanted to hear that this was the most likely outcome, because of the time and money spent and because it would contradict the happy, rosy, fake narrative crafted back home.