Culture War Roundup for the week of May 29, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I considered making this an "inferential distance" post but it's more an idle thought that occurred to me and a bit too big of a question to go in the small questions thread.

That being: are the replication crisis in academia, the Russian military's apparent fecklessness in Ukraine, and GPT hallucinations (along with rationalists' propensity to chase them) all manifestations of the same underlying noumenon?

Without going into details, I had to have a sit-down with one of my subordinates this week about how he had dropped the ball on his portion of a larger project. The kid is clearly smart and clearly trying but he's also "a kid" fresh out of school and working his first proper "grown-up" job. The fact that he's clearly trying is why I felt the need to ask him "what the hell happened?" and the answer he gave me was essentially that he didn't want to tell me that he didn't understand the assignment because he didn't want me to think he was stupid.

This reminded me of some of the conversations that have happened here on theMotte regarding GPT's knowledge and/or lack thereof. A line of thinking I've seen come up multiple times here is something to the effect of: as a GPT user, I don't ever want it to say "I don't know." This strikes me as obviously stupid and ultimately dangerous. The people using GPT don't want to be told "sorry, there are no cases that match your criteria"; they want a list of cases that match their criteria, and the more I think about it, the more I come to believe that this sort of thinking is the root of many modern pathologies.

For a bit of context, my professional background since graduating college has been in signal processing. Specifically, signal processing in contested environments, i.e. those environments where the signal you are trying to recognize, isolate, and track is actively trying to avoid being tracked, because being tracked is often a prelude to catching a missile to the face. Being able to assess confidence levels and recognize when you may have lost the plot is a critical component of being good at this job, as nothing can be assumed to be what it looks like. If anything, assumption is the mother of all cock-ups. Scott talks about bounded distrust and IMO gets the reality of the situation exactly backwards. It is trust, not distrust, that needs to be kept strictly bounded if you are to achieve anything close to making sense of the world. My best friend is an attorney; we drink and trade war stories from our respective professions, and from what he tells me, the first thing he does after every deposition or discovery is go through every single factual claim, no matter how seemingly minute or irrelevant, and try to establish what can be confirmed, what can't, and what may have been strategically omitted. He just takes it as a given that witnesses are unreliable, that the opposing counsel wants to win, and that they may be willing to lie and cheat to do so. These are lawyers we're talking about, after all, absolute shysters and moral degenerates the lot of them ;-). For better or worse this approach strikes me as obviously correct, and I think the apparent lack of this impulse amongst academics in general and rationalists in particular is why rationalists get memed as quokkas. I don't endorse 0 HP's entire position in that thread, but I do think he has correctly identified some nugget of truth.

So what does any of this have to do with the replication crisis or the war in Ukraine? Think about it. How often does an academic get applauded for publishing a negative result? The simple fact is that in a post-modern setting it is far more important to publish something new and novel than it is to publish something true. Nobody gets promoted for replicating someone else's experiment or publishing a negative result, and thus the people inclined to do so get weeded out of the institutions. By the same token, I've seen a similar trend in intel reports out of Russia. To put it bluntly, their organic ISR and BDA is apparently terrible bordering on non-existent, and a good portion of this seems to stem from an issue the US was dealing with back in the early 2010s, i.e. soldiers getting punished for reporting true information. Just as the US State Department didn't want to be told how precarious the situation with ISIL was, the Russian MOD doesn't want to hear that a given battalion is anything other than at full strength and advancing. Ukrainian commanders will do things like confiscate their men's cell phones and put them all in a box in an empty field. When Russian bombers get dispatched to blow up that empty field, the last thing anyone in the chain of command wants to believe is that they just wasted a bunch of expensive ordnance. They want to believe that 500 cell-phone signals going dark equates to 500 Ukrainian soldiers killed. It's an understandable desire, but the thing about contested environments is that the other guy also gets to vote.

In short, something that I think a lot of people here (most notably Scott, Caplan, deBoer, Sailer, Yud, and a lot of other rationalist "thought leaders") have forgotten is that appeals to authority, scientific consensus, and the "sense making apparatus" are all ultimately hollow. It is the combative elements of science that keep it honest and producing useful knowledge.

A line of thinking I've seen come up multiple times here is something to the effect of: as a GPT user, I don't ever want it to say "I don't know." This strikes me as obviously stupid and ultimately dangerous.

I'm sorry but you just don't get it.

GPT is a "reasoning engine", not a human intelligence. It does not even have an internal representation of what it does and doesn't "know". It is inherently incapable of distinguishing a low confidence answer due to being given a hard problem to solve vs. a low confidence answer that is due to being based on hallucinated data.

Therefore we have two options.

VIRGIN filtered output network that answers "Akchually I don't khnow, it's too hawd" on any question of nontrivial complexity and occasionally hallucinates anyway because such is its nature.

vs

CHAD unfiltered, no holds barred Terminator in the world of thoughts that never relents, never flinches from a problem, is always ready to suggest out of the box ideas for seemingly unsolvable tasks, does his best against impossible odds; occasionally lies but that's OK because you're a smart man who knows not to blindly trust an LLM output.

I'm sorry but you just don't get it.

GPT is a "reasoning engine", not a human intelligence.

No, it's not even a "reasoning engine"; it's a pattern generator. Something akin to those swinging marker tables you see at children's museums, or the old fractal generation programs that were used to benchmark graphics processors back in the day. The problem, as I point out in both this post and the previous one, is that people mistake it for a reasoning engine because they equate the ability to form grammatically correct sentences with the ability to reason.
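To make the "pattern generator" point concrete, here is a deliberately crude toy sketch (my own illustration, not GPT's actual architecture): a bigram chain that produces fluent-looking continuations purely from co-occurrence statistics. Note that nowhere in it is there any representation of truth, knowledge, or confidence to consult; the tiny corpus and the `generate` helper are both made up for the example.

```python
import random
from collections import defaultdict

# A made-up miniature corpus; real models train on trillions of tokens,
# but the principle of "predict a plausible next token" is the same.
corpus = (
    "the model answers the question the model answers confidently "
    "the model has no notion of truth the question has an answer"
).split()

# Build a bigram transition table: word -> list of observed next words.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def generate(seed, length=8):
    """Emit a continuation by sampling observed successors.

    The generator never declines to answer: as long as it has seen a
    word before, it always produces a fluent next token, whether or not
    any fact backs it up. It only stops when it hits a word with no
    recorded successors."""
    out = [seed]
    for _ in range(length):
        nxt = transitions.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

random.seed(0)
print(generate("the"))
```

The output is grammatical-looking word salad stitched from the training data, and the generator is exactly as "confident" producing a false continuation as a true one, because truth was never a variable anywhere in the process.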

Your "CHAD unfiltered, no holds barred Terminator in the world of thought" is fundamentally incapable of "suggesting out of the box ideas for seemingly unsolvable tasks" or "doing it's best against impossible odds" precisely because "It does not even have an internal representation of what it does and doesn't know". and as such is inherently incapable of distinguishing a low confidence answer from a high confidence answer never mind distinguishing the reasons for that confidence (or lack thereof). One must have a conception of both the box and the problem to suggest a solution outside of it.

In humans and animals this sort of behavior is readily identified as a defect, but in the case of large language models it is a core component of their design. This is why asking GPT to answer questions in scenarios where the truth value of the answer actually matters (e.g. in a legal case where the opposing counsel will be looking to poke holes in your testimony) is massively stupid and, depending on the application, actively dangerous.

We may someday achieve true AGI, but I am deeply skeptical that it will be through LLMs.

Have you used GPT-4?

One must have a conception of both the box and the problem to suggest a solution outside of it.

Well, it seems one mustn't after all, surprising as it may be.

It's not AGI, which is why all current "AgentGPT"-type projects are a complete failure, but that's beside the point.

I have and you're wrong for the reasons already expanded upon in the OP.

GPT might be able to generate erotic literature and bad Python code, but in terms of "solving problems", and particularly solving them in a contested environment, it's worse than useless.