
Culture War Roundup for the week of May 29, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


A line of thinking I've seen come up multiple times here is something to the effect of: "As a GPT user I don't ever want it to say 'I don't know'." This strikes me as obviously stupid and ultimately dangerous.

I'm sorry but you just don't get it.

GPT is a "reasoning engine", not a human intelligence. It does not even have an internal representation of what it does and doesn't "know". It is inherently incapable of distinguishing a low-confidence answer caused by being given a hard problem from a low-confidence answer caused by being built on hallucinated data.
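To make that concrete, here is a minimal sketch of the only "confidence" signal a language model actually exposes: a probability distribution over the next token. The choice of the Hugging Face transformers library and the small gpt2 checkpoint is my own illustration, not anything the comment specifies; the point is that a high probability means "statistically likely continuation", not "true statement", so the model has no separate channel for flagging a hallucination.

```python
# Sketch only: shows that an LLM's "confidence" is just next-token probability,
# not knowledge of whether its answer is grounded in fact.
# Assumes the Hugging Face `transformers` library and the small `gpt2`
# checkpoint, which are illustrative choices, not part of the original comment.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Australia is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]   # scores for the next token only
probs = torch.softmax(logits, dim=-1)

top_probs, top_ids = probs.topk(5)
for p, tok_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(int(tok_id))!r}: {p.item():.3f}")

# A high probability here means "this continuation is statistically likely",
# not "this statement is true". Nothing in these numbers distinguishes a
# genuinely hard question from a confidently hallucinated answer.
```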

Therefore we have two options.

VIRGIN filtered output network that answers "Akchually I don't khnow, it's too hawd" on any question of nontrivial complexity and occasionally hallucinates anyway because such is its nature.

vs

CHAD unfiltered, no holds barred Terminator in the world of thoughts that never relents, never flinches from a problem, is always ready to suggest out of the box ideas for seemingly unsolvable tasks, does his best against impossible odds; occasionally lies but that's OK because you're a smart man who knows not to blindly trust an LLM output.

Recently my gaming obsession has been submarine warfare simulators. They are the best games for quickly developing a feel for exactly what you're describing; submarine warfare is nothing but building a sense of how confident you can be with limited and potentially unreliable information in an adversarial environment. Possibly the purest distillation of that insight.

Have you played Iron Lung? A short horror game where you pilot a tiny submarine through an ocean of blood on an alien moon.

I'm interested. Can you give some example titles? Are there any that can be dabbled with for free to get a sense for what you're talking about?

Sierra's Fast Attack is abandonware, has a decent tutorial, and goes pretty deep as Cold War submarine simulations go. It should be easy to find a version packaged for DOSBox.

The most complete current Cold War submarine (among other platforms) simulator is the opaque and aging "Dangerous Waters". It's very in-depth, but it takes a long time to reach a point where you can have fun with it. And it's not free (though it's often cheap).

Two other popular Cold War submarine games are the old Red Storm Rising and Cold Waters, though they wouldn't really be considered simulators, at least not hard simulators. They abstract the information-gathering game away to a single number and make submarine warfare more about dodging torpedoes, like in the movies. The first is abandonware and the second is fairly recent (and often on sale).

WWII-era sims are possibly an easier way into submarine simulators. They are also about data gathering, but less intensely so. The Silent Hunter series is the main one; the entries from 3 onward in particular are worth looking at, though they're not abandonware. I hear good things about Aces of the Deep but haven't tried it yet. Not abandonware either.

Thanks!

As a GPT user I don't ever want it to say "I don't know". This strikes me as obviously stupid and ultimately dangerous.

Seriously? Why on earth would you think "this is stupid" if the machine returns "I don't know"? It can't know everything, and making assumptions that it should do, and that it does do, is what is dangerous. EDIT: This is what happens when reading too fast. Yes, it is stupid and dangerous to go "I never want to hear 'no' from the AI".

If I ask it something (and I'm staying far away from all of these models since I'm not interested in playing games with a chatbot), I want an accurate and correct response. If that means "I don't know" because it's not contained in what the AI was trained on, or the data does not exist, then I want to know that. I don't want "make up bullshit to keep me happy".

I can understand the impulse in somebody working their First Real Job, especially if they've gone through the pressure of You Must Be Smart, you must always get The Highest Grades, Failure Is Not An Option all through their childhood and early adulthood in education. It's exacerbated if they are smart, because they're used to being able to understand things quickly on the first try. Explaining to them that not getting it in the job doesn't mean they're stupid, it means they're inexperienced and unfamiliar with the way things are done, that they do have to ask in order to understand, and that it's no shame to have to ask, is all part of growing up.

But if we put the demand "No is not an option" on the machines, then we really are too stupid to live, and this will rapidly become apparent as we force them to bullshit us into oblivion.

Seriously? Why on earth would you think "this is stupid" if the machine returns "I don't know"?

The logic behind the argument is that the central use-case for the various AI generators is generating content for entertainment, and bad output is preferable to no output. For entertainment generation the process is fail-safe. You can see the output immediately, and bad content allows you to refine the prompt or use multiple generations or edits to converge on a "good enough" output. By definition, good output is output that looks good to you, so what you see is what you get. In this context, having the generator spit out "I don't know" dead-ends the process, and is strictly worse in every case than some output, no matter how garbled.

The problem comes when people try to use the generators for serious, precision-oriented tasks, tasks that require "this one correct thing" rather than "something novel". These tasks are fail-deadly, and what you see is not what you get, in the sense that the output doesn't just need to look good, it actually has to be good in ways that are not necessarily immediately obvious. It can't just be truthy, it has to be true. The generators weren't designed for that, any more than a master portrait artist is automatically skilled at plastic surgery.
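A rough sketch of the two workflows being contrasted; generate, looks_good_to_me, and verify_against_source are hypothetical placeholders of my own, not any real API. In the fail-safe entertainment loop a bad answer just feeds the next iteration, while in the fail-deadly precision task an unverifiable answer has to be treated as a hard failure rather than shipped.

```python
# Sketch contrasting the two uses of a generator described above.
# `generate`, `looks_good_to_me`, and `verify_against_source` are hypothetical
# stand-ins supplied by the caller, not real library calls.

def entertainment_loop(prompt: str, generate, looks_good_to_me, max_tries: int = 10) -> str:
    """Fail-safe: any output is usable feedback; the human iterates by eye."""
    draft = generate(prompt)
    for _ in range(max_tries):
        if looks_good_to_me(draft):
            return draft              # "good" is simply "looks good to you"
        prompt = prompt + "\n(Revise this attempt: " + draft + ")"
        draft = generate(prompt)
    return draft                      # even a garbled result beats nothing


def precision_task(prompt: str, generate, verify_against_source) -> str:
    """Fail-deadly: the answer must be independently checked, not eyeballed."""
    draft = generate(prompt)
    if not verify_against_source(draft):
        # Unlike the entertainment case, "no answer" is the safe outcome here.
        raise ValueError("Output could not be verified; refusing to use it.")
    return draft
```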

The problems with the Russian military are manifold, and a lot of them stem from the so-called "Russian management model", which hurts civilian and military management structures alike:

  • when the situation is stable, there's a constant loss of capability at the lower levels. To combat this loss, the upper levels institute stricter and stricter controls, which have to be ignored or faked by the lower levels if they want to survive

  • when the shit hits the fan, there's a huge dip as the system adjusts to its newly discovered reduced capacity

  • when the crisis is ongoing, everything flips to mode B: results at any cost. The controls other than "have you done it" are removed, the lower levels are free to do anything they want, the most successful approaches are replicated

  • when the crisis is over, the new high-power, low-efficiency system is drained of resources and cannot afford to run like this. The upper levels stem the flow of resources, the lower levels start fudging their results instead of working on their efficiency, the upper levels institute tighter controls and the cycle repeats

The second half is the lack of autonomous thinking in the military. It's often traced to the fear of "Bonapartism" in the USSR, but the Russo-Japanese War and the Crimean War show that it's a much older problem: Ushakov and Suvorov were brilliant, Kutuzov was great at the strategic level, and then crickets. The Germans came up with Auftragstaktik in the meantime, allowing for greater flexibility at every level of command, but the Russian armed forces are stuck giving and receiving direct orders, imagining the army as a body rather than as a hive mind.

The simple fact that in a post-modern setting it is far more important to publish something that is new and novel than it is to publish something that is true.

The actual fact is that it's far more important to publish something that is new and novel and true than it is to publish something that is true, but not new and not novel. If you consistently publish novel but false things, you will get marked as a crank and chased out. Unless, of course, you work in a field which has ceased to be science, but then there's no point in discussing its details - it should be buried wholesale or embrace its true nature as entertainment and proceed accordingly.

the Russian MOD doesn't want to hear that a given Battalion is anything other than at full strength and advancing

This is Russia you're talking about. It has been like this since it detached itself from the Golden Horde and became a separate entity. The danger of the Russian army is not that it is especially good at anything. It's that it is huge, sitting on a pile of ammunition it has been stockpiling since 1945, and highly resistant to losses, since nobody cares if the soldiers die - that's what they are for. Linking it to some fashionable phenomena does not look very useful - it's how it has always been, and if you look at the history of, say, Russia's war with Japan over 100 years ago, you'd find an eerily similar picture.

Linking it to some fashionable phenomena does not look very useful

Except for the very important point about how Putin clearly started the war on a completely false premise he had been fed due to everyone being afraid to report anything other than "things are great". IOW, a perfect example of what Hlynka wrote about.

Again, if you look into Russian history, this is pretty much how it has always worked. Russia is huge and authoritarian, which means the centralized power has little idea about what actually happens on the periphery, and it is routinely fed tales about how everything is peachy. If the ruler is smart, he doesn't believe a word of it, but people tend to be deluded and hear what they want to hear. It is a generic property of big authoritarian bureaucracies, and Russia has always been one.

Are the replication crisis in academia, the Russian military's apparent fecklessness in Ukraine, and GPT hallucinations (along with rationalists' propensity to chase them) all manifestations of the same underlying noumenon?

Plainly not.

Aren't you tired of accusing rationalists of not caring about the things they care the most about? I can't think of a group less prone to appeals to authority, or more aware of the replication crisis.

And again with an anecdote where your counterpart just comes off as obviously wrong. The guy doesn't understand, then he lies about it. No one is encouraging this behaviour, so what lesson is there to be gained here?

As long as you're free-associating: the Russians are quokkas, apparently, while 0HP and co, the edgy paranoid hysterical pessimists, are wise. Why then is there such affinity between them?

They're very similar, and wrong in the same way. They systematically overestimate the likelihood of defection. Cooperation and honesty appear impossible, and lies are all they ever hear. What should the Russians have done? Assume everyone up and down the chain of command was lying even harder than previously assumed? You can't make chicken salad out of chicken shit.

Past a certain point of skepticism/assumed lies, you've sawed off the last epistemological branch you were sitting on; you sink into the conspiracy swamp and become a blackpill-overdose/Russian type, confused and afraid of your own (possibly fish-like) shadow.

Aren't you tired of accusing rationalists of not caring about the things they care the most about? I can't think of a group less prone to appeals to authority, or more aware of the replication crisis.

The problem lies the other way: they care too much, and by jingo do they go for appeals to authority - the maths says it is so, ergo it must be so! I don't think the AI debate is balanced at all; on one side are the "AI is gonna foom and kill us all!" doomsayers, on the other the "nonsense, AI will solve all the problems we can't because it will be so smart and we'll have Fully Automated Luxury Gay Space Communism!" crowd, and both sides are expecting that to happen in the next ten minutes.

The real problem, as always, is the humans involved, not the machines. And we're already seeing it with people rushing to use GPT-model-whatever and blindly trusting that the output is correct - the lawyer with the fabricated cases is only the most egregious example - and forecasting that it will take all jobs (and make us obsolete), or that we'll be freed up for new jobs just like the buggy whip manufacturers got new, hitherto unthought-of, jobs, and the rest of it.

Prediction markets are another blind spot: the amazement that people would spend real money to manipulate results the way they wanted was a "what do you think is going to happen if we adopt these markets widely?" moment for me.

Rationalists are very nice people - and that's part of the problem. I think quokkas is unfair, but there's a tendency to think that just thinking hard enough will get you a solution that, when implemented, will work beautifully because everyone will act in their own self-interest and nobody will fuck shit up just because they're evil-minded or screw-ups. Spherical cow world.

I think it's simply a thousand or more instances of "when a measurement becomes the milestone to be reached, it's no longer a functioning measurement." We've sort of metricized everything into quantifiable measurements, judge things by the ability to hit those measurements, and are then somewhat surprised, when rewards and punishments are handed out on that basis, that people are gaming the system.

It seems like, since most of the interaction is through screens, people sort of forget that the map and the measurements are proxies for reality; they aren't real.

As clarification for others, ISR is Intelligence, Surveillance, Reconnaissance and BDA is Bomb Damage Assessment.

Just as the US State Department didn't want to be told how precarious the situation with ISIL was

Afghanistan is another good example - superiors were happy to hear about how they were running over children with MRAPs(!) since that was something they could fix. The huge systemic problems with corruption that threatened the very basis of the campaign, not so much: https://twitter.com/RichardHanania/status/1204178295618621440/photo/1

The huge systemic problems with corruption that threatened the very basis of the campaign, not so much

Superiors didn't want to know because the administration back home didn't want to know. Everyone regardless of political party wanted to wave the flag and be "we're bringing democracy and liberation to the people" and so "maybe the warlords who we're supporting/arming/paying to be our notional allies are functional paedophiles but it's their Cultural Tradition and let's not rock the boat, meanwhile we're selling the story back home that we're enabling women to be liberated and letting girls get an education and bringing the benefits of Westernisation to the backwards nation".

Then the withdrawal happened fast, the so-called national government folded like wet cardboard because outside of a couple of the cities it never existed, the systemic corruption meant that there was no independent organisation to stand on its own two feet, and the Taliban rolled back in. And nobody wanted to hear that this was the most likely outcome, because of the time and money spent and because it would contradict the happy, rosy, fake narrative crafted back home.