
Culture War Roundup for the week of November 27, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


The Dunning-Kruger effect is autocorrelation

Huh. I can only apologize for the relatively bare link, but I feel like it's worth drawing attention to something so widely accepted yet utterly worthless, especially when it comes up so often here.

The thing about the DK effect is that it makes intuitive sense. To extend it past the realm of typical human intelligence, an ant or a chimp isn't very good at knowing how dumb it is, because it's not very good at most things. However, I suspect that the average dumb (human) person does know they're a bit dim, so it confuses me how this finding can even arise.

The problem with the Dunning-Kruger chart is that it violates a fundamental principle in statistics: if you're going to correlate two sets of data, they must be measured independently. The Dunning-Kruger chart mixes test score into both axes, giving rise to autocorrelation.

Realizing this mistake, Edward Nuhfer and colleagues asked an interesting question: what happens to the Dunning-Kruger effect if it is measured in a way that is statistically valid? According to Nuhfer’s evidence, the answer is that the effect disappears.
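To see what "works for random data" means concretely, here's a minimal sketch of the random-number demonstration as I understand it (my own reconstruction in Python, not the blog author's code; all names and parameters are made up): draw "actual" and "perceived" scores independently at random, bin by actual-score quartile the way the classic chart does, and the famous pattern falls out of pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

actual = rng.uniform(0, 100, n)      # test score: pure noise
perceived = rng.uniform(0, 100, n)   # self-assessment: independent noise

# Classic DK presentation: bin by actual-score quartile, compare means.
quartile = np.digitize(actual, np.percentile(actual, [25, 50, 75]))
for q in range(4):
    m = quartile == q
    print(f"Q{q + 1}: actual={actual[m].mean():5.1f}  "
          f"perceived={perceived[m].mean():5.1f}")

# The fitted slope of perceived on actual is ~0, i.e. well below 1.
slope = np.polyfit(actual, perceived, 1)[0]
print(f"slope: {slope:+.3f}")
```

The bottom quartile appears to overestimate by nearly 40 points and the top quartile to underestimate by the same amount, even though neither variable contains any information at all.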

Is it possible to salvage a non-trivial version of the DKE? The one we know and once loved literally works for random data, so that's right out. In other words, what's the cut-off where a stupid person becomes smart enough to know they're stupid, or at least worse than their peers?*

*In a more general sense than a chimp knowing he's not as strong or big as the alpha male.

I thought the DK effect had been debunked (at least in its common pop framing) for quite a while? I thought the idea that good people under-estimate and bad people over-estimate was known to be kind of a mythical tack-on to the central, more boring claim that self-assessment isn't super reliable.

And I thought the tack-on came from misunderstanding the one-directional limiting effects on mis-assessment at the top and bottom of a performance scale.

That is, the better you are, the harder it objectively is to overstate your competence, and vice versa. Being good doesn't cause you to understate your ability; it reduces the error in overstating it.

Imagine 3 people who all take a 3-point basketball shot. All three are likely to correctly estimate their ability. A airballs, B hits the backboard, C makes it. They still respectively rank themselves correctly.

3 more people all take the shot. All three are likely to over-estimate their ability. D airballs, E hits the backboard, F makes it. D guesses he tied for second, E guesses he did the best, F also guesses he did the best.

3 more people all take the shot. All three are likely to under-estimate their ability. G airballs, H hits the backboard, I makes it. G guesses he did the worst, H also guesses he did the worst, I guesses he tied for first.

Across these three performance tiers (air-ballers, backboarders, and shot-makers), you have an even mix of estimation ability in each. Yet:

  • A, D, G collectively slightly over-estimated their ability

  • B, E, H collectively got their average ability correct

  • C, F, I collectively slightly underestimated their ability

The traditional pop understanding of the DK effect misinterprets this result as ADG thinking they're better than they are and CFI thinking they're worse, when that's really kind of inverted. It's rather that ADG have less room to err down and CFI less room to err up.

You could run this again with many more groups and even give ADG a stronger propensity to underestimate and CFI a stronger propensity to overestimate, and you'd still get the DK effect.

Compare:

ADDGGGGGG -> still slightly overestimates their ability on average

CFFFFFFFFII -> still slightly underestimates their ability on average

Even though the individuals in the group actually have the opposite propensity.
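If you'd rather check that last claim than take my word for it, here's a quick simulation (all numbers invented for illustration): give low scorers a built-in humble bias and high scorers a cocky one, the exact opposite of the pop-DK story, and the bounded scale still produces the DK pattern in the quartile averages.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

actual = rng.uniform(0, 100, n)

# Deliberately invert the pop-DK psychology: low scorers lean humble,
# high scorers lean cocky (a bias running from -5 at the bottom to +5
# at the top), plus noisy self-assessment clipped to the valid range.
bias = (actual - 50) * 0.1
perceived = np.clip(actual + bias + rng.normal(0, 35, n), 0, 100)

quartile = np.digitize(actual, np.percentile(actual, [25, 50, 75]))
for q in range(4):
    m = quartile == q
    gap = perceived[m].mean() - actual[m].mean()
    print(f"Q{q + 1}: mean(perceived - actual) = {gap:+5.1f}")
```

The gap runs from roughly +6 in the bottom quartile to roughly -6 in the top one, purely because the floor at 0 and the ceiling at 100 leave less room to err down at the bottom and less room to err up at the top.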

This superficially looks like an uncharitable take by somebody who never read the original 1999 Dunning and Kruger paper. In that paper, the authors explicitly addressed the obvious objection that their finding could be fully explained as a statistical artifact -- they were by no means ironically unaware of this interpretation. They pointed out that the interpretation as a statistical artifact wouldn't account for the asymmetry wherein the overestimation by the unskilled was so much larger than the underestimation by the skilled. That is, the effect is not just about the slopes, but about the intercepts.

Krueger and Mueller (a different Krueger) in 2002 tried to make the case for the effect being artifactual, but acknowledged that something more was required to explain the asymmetry. They claimed the asymmetry came from a generalized bias for people to assess themselves as better than average, which happens to be less wrong for people who actually are better than average than for people who aren't.

Dunning and Kruger (2002) rebutted with claims that further experiments had discredited that explanation, and I lost interest in pursuing the progress of this debate, though Ehrlinger et al (2008) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2702783/ is probably a good next step. Suffice to say though that this is absolutely not some clever new insight that those innumerate psychologists overlooked -- the psychologists have been thinking about it since literally the first paper, and arguing among themselves about it since shortly thereafter.

Is it possible to salvage a non-trivial version of the DKE?

Sure, a great example is to look at people who hear of the DKE once on Reddit, and never shut up about how they see it everywhere.

I suspect that the average dumb (human) person does know they're a bit dim, so it confuses me how this finding can even arise.

I think something we don't think about enough is how dumb people use heuristics to help them navigate a world where they are vaguely-aware that they don't have the intellectual horsepower to participate.

My wife and I argued for a whole drive to Philly about a tweet she saw that went something like: if you're more intelligent than your partner and better at arguing logically, winning every argument with them by arguing logically is abusive. She thought it was retarded; I said that while the use of the term "abusive" was rather florid, the point has some validity. Picture a couple, one a well-educated motte-ian wordcel able to deconstruct and reconstruct a logical argument about anything at any time, the other a nice and well-intentioned imbecile who can't string two sentences together. If they agree on logical argument as the way to settle things, the motte-izen will win every time, and the imbecile will never get what he wants, even if the imbecile happens to be right. One side getting what they want every time is bad for the relationship in the long term. Even if the imbecile agrees that they lost the argument, they will notice not getting what they wanted, and they will resent it. They may resent it incoherently, unable to logically explain why the outcome is unjust, but they will resent it nonetheless. This will be deleterious to the relationship.

The idiot moves through life thinking that salesmen are evil, because he knows that a good salesman can talk him into something he'll regret, it's happened before. I will never forget a waitress at my parents' favorite restaurant, telling them about how she needed a new car, and she went to the dealer intending to buy a used Jeep Liberty (a bad car, but a small and economical one) and wound up getting talked into buying a brand new Jeep Commander (a $50k seven passenger monstrosity that FCA's CEO would later call "not fit for human consumption" and state should never have been sold). She had no kids, she had no outdoor hobbies, she had zero need for a seven passenger SUV, but she got talked into it. On a seven year loan at a crazy interest rate.

The dimwit chooses tribalism, because "black skin good, white skin bad" is an easy meme to keep track of. He'll only benefit modestly, if at all, from his tribal champions being elevated, but it's easier than figuring out actual policy questions, and it avoids the salesman problem again.

The moron assumes that everyone is out to get him, because he has precious little ability to defend himself if they are. Everyone has an angle, he intones sagely, because he has been fooled so many times. Paranoia is adaptive, like a skittish rabbit.

That would imply that arguing badly is abusive, but arguing logically isn't.

Yes, there's a general meme that stupid = idealistic, trusting, gullible and that being smart means being cynical, rational, distrustful. If anything I see more of the reverse.

I think we're looking at two divergent versions of the word Smart. For an analogy, consider NFL quarterbacks. A frequent contrast is drawn between the Hyper-Athlete QB (Lamar Jackson, Josh Allen) who combines arm strength, running speed, strength, creativity to make crazy plays out of structure; and the System QB (Kirk Cousins, Brock Purdy), a savvy game manager who follows the playbook, makes the right passes at the right times, takes what the defense gives him, but rarely makes extraordinary plays. The former uses extraordinary ability to do great things; the latter follows good rules from his coach that put him in a position to do great things.

That's the contrast I'm drawing with Smart: we're talking about both at the same time. Smart = intellectual horsepower, IQ, the ability to process quickly and examine and analyze. Smart = making good decisions using good heuristics, never being in the position of needing to process quicker than you are capable of. Just as Josh Allen puts himself in terrible positions then pulls a rabbit out of his hat because he's such an outlier athlete, the smart guy will put himself in a position where he needs to process a lot because he ignores the rules. "How does a smart guy act so dumb?"

A frequent contrast is drawn between the Hyper-Athlete QB (Lamar Jackson, Josh Allen) who combines arm strength, running speed, strength, creativity to make crazy plays out of structure; and the System QB (Kirk Cousins, Brock Purdy), a savvy game manager who follows the playbook... Just as Josh Allen puts himself in terrible positions then pulls a rabbit out of his hat because he's such an outlier athlete, the smart guy will put himself in a position where he needs to process a lot because he ignores the rules

Very bold of you to assume these quarterback analogies will make any sense to anyone on The Motte. Consider rewriting using HPMOR characters.

I think what's going on here with the trope of an "idealistic, trusting, gullible" simpleton and your knuckleheaded distrustful simpleton is that he's the same man, before and after getting taken for a ride. Real stupid people are not like Lennie, jumping into water when you tell them to, then forgetting about it, and being glad you saved them from drowning. There is a switch from total trust to total distrust. The boomercon who had a child's faith in US foreign policy in 2003 believes the US government is populated by satanist pedophiles in 2023.

The reason old fiction has so many trusting yokels that we don't see IRL anymore is that the world changed. The simpleton gets scammed early and often in the modern world, and updates his heuristic accordingly.

As evidence for what you’ve said:

Study: Smarter People Are More Trusting

The researchers posit that intelligent people might be better at correctly evaluating whether people are trustworthy, or whether a particular person is likely to act untrustworthily in a particular situation.

Is there a good name for this concept of “dumb people heuristics”? Or are you aware of other studies that develop it further?

and never shut up about how they see it everywhere.

I wanted to make a joke about Baader-Meinhof, but I'm not an expert.

Reinforces to me that one of the biggest groups for Dunning-Kruger is research psychologists...

Is it possible to salvage a non-trivial version of the DKE?

The observation that low-skilled people overestimate themselves and high-skilled people underestimate themselves survives this criticism. Yeah, that's trivial. It doesn't reveal the psychology of skilled vs unskilled, just that, like @rae says, misestimation by the worst will be on average an overestimate, while misestimation by the best will be an underestimate.

The reason Dunning-Kruger feels so real to us is that dilettantes outnumber experts by a ludicrous margin. It feels like "history buffs" on the internet are always running their mouths with questionable takes on the Roman Empire. In reality, an individual specialist talks a lot more; there just aren't that many of them.

I think this is a satisfactory explanation, so thank you for suggesting it.

I don't really understand the central claim here, I think?

Like, I get why the data is auto-correlated, sure.

But the Dunning-Kruger effect isn't an observation about confidence and test scores being correlated at all; it's an observation about the slope of the line: the slope of perceived ability vs. actual score is smaller than the slope of y = x, such that low scorers are overconfident and high scorers are underconfident.

And it seems like random data could have produced either slope.

The Dunning-Kruger effect just showed that people are bad at estimating their score on a test, nothing more. It didn't show that lower-skilled people think of themselves as more competent than higher-skilled people (the latter's estimation was higher than the former's on average).

The “overestimation” and “underestimation” is just a statistical artefact - if you get a 0, any random estimation is going to be an overestimation unless you get it precisely right, and if you get 100, an underestimation, and the same goes for anything in-between to a lesser extent.

But people absolutely loooove to take these limited psychology studies and twist them to sound like it gives some clever insight into the human condition (generally one that supports their preconceived notions and biases), so that’s how we got “dumb people think they’re clever and smart people think they’re dumb” from “people can’t estimate their result on a general-knowledge test very accurately on average”.

I've seen a few variants of "Dunning-Kruger doesn't exist" papers, and nowadays I agree that the common version you usually hear needs to be greatly amended. My favorite one so far was someone showing that a) stupid/smart people do generally know that they're stupid/smart, they just on average tend to underestimate their distance to the mean, and b) optimal Bayesian reasoning will always lead to something that looks like this (if you start from the assumption that you're average in everything, and then update this prior based on new evidence, exceptionally smart and exceptionally stupid people will tend to underestimate their smartness/stupidity, even if they know in which bucket they fall). I unfortunately don't remember where I got that from.
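I don't remember the source either, but the mechanism in (b) is easy to sketch with made-up numbers: start everyone with an "I'm probably average" prior, hand them a noisy signal of their true ability, and the optimal posterior estimate is a weighted average that shrinks toward the middle.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# True ability, in standard-deviation units around the population mean.
ability = rng.normal(0, 1, n)

# Each person observes a noisy signal of their own ability
# (grades, feedback, arguments won and lost, ...).
noise_sd = 1.0
signal = ability + rng.normal(0, noise_sd, n)

# Optimal Bayesian estimate under a N(0, 1) "I'm average" prior:
# a precision-weighted average of the prior mean (0) and the signal.
weight = 1 / (1 + noise_sd**2)
self_estimate = weight * signal

for label, m in [("bottom tail", ability < -1), ("top tail", ability > 1)]:
    print(f"{label}: true={ability[m].mean():+.2f}  "
          f"estimate={self_estimate[m].mean():+.2f}")
```

Both tails land on the correct side of the mean but underestimate their distance from it, which is exactly the pattern described above.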

Pending a detailed read of Nuhfer et al. and Gignac & Zajenkowski, this appears to be one of three things: either the blogpost is simply wrong; or I have a misunderstanding of the Dunning-Kruger effect; or the author has done a really shit job of explaining himself. Either way, I'm not convinced so far.

It means that we can throw random numbers into x and y — numbers which could not possibly contain the Dunning-Kruger effect — and yet out the other end, the effect will still emerge.

My understanding of the DKE is that self-assessment is poorly correlated with objective ability in such a way that poor performers overrate their performance and good performers underrate theirs. In this case, the lack of correlation in Fig. 7, where y is a uniformly distributed variable uncorrelated with x, already shows the effect! I'm not sure how the author is so sure that plotting uncorrelated variables and "showing" the DKE disproves it, as the entire point is that they're poorly-to-uncorrelated!

If my understanding of the Dunning-Kruger effect is right, I suspect the author may be right to some degree (just based on personal experience, I think DKE is extremely oversold, and even if true is unlikely to be very important), but his working is definitely wrong.

My understanding of the DKE is that self-assessment is poorly correlated with objective ability in such a way that poor performers overrate their performance and good performers underrate theirs.

I think the point was something else. Imagine another test where people threw a die and then estimated what their throw was. Of course, people who threw 6 could only underestimate or be correct, and people who threw 1 could only be correct or overestimate.

So even if both the result and the estimation were random, you would reproduce the Dunning-Kruger effect due to autocorrelation. The "over/underestimation" result is dependent on and correlated with the measure you over/underestimate against, which is itself a variable. The correct answer is just that this is a stupid statistical artefact.
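For concreteness, a tiny simulation of that die version (mine, just to check the arithmetic): both the throw and the guess are uniform and independent, yet conditioning on the throw manufactures the over/under pattern.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

roll = rng.integers(1, 7, n)    # the actual throw
guess = rng.integers(1, 7, n)   # an estimate containing zero information

for r in range(1, 7):
    err = (guess[roll == r] - r).mean()
    print(f"rolled {r}: mean estimation error = {err:+.2f}")
```

People who rolled 1 "overestimate" by 2.5 pips on average and people who rolled 6 "underestimate" by the same amount, with no psychology anywhere in sight.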

vorelated

Kinky.

Avoid low-effort posts, please.

In that case I certainly did not get that impression from the blogpost, and I don't think this has anything to do with "autocorrelation" so much as with the data being bounded, which I think is another argument entirely. Incidentally, I think the bounded-data explanation, which seems the most obvious one to me (along with others like the better-than-average effect and simple regression to the mean), is much more convincing than wrangling about autocorrelation. It's also using "autocorrelation" in a weird way.

I remain of the opinion that DKE is probably artefactual or minor at best, but the blog is still either poorly written or wrong.

My understanding of the DKE is that self-assessment is poorly correlated with objective ability in such a way that poor performers overrate their performance and good performers underrate theirs.

This is not the "public consciousness" understanding of the DKE. That is the claim that "people who say they are real good and talk about how good they are are actually no better or even worse than the people who say they are bad".

poorly correlated with objective ability in such a way that poor performers overrate their performance and good performers underrate theirs

This is not how poor correlation is usually defined either; in real life, that's normally given by r, and you can have a very high r (say r = 0.99, an out-of-this-world correlation for anything in the social sciences) while the statement "poor performers overrate their performance and good performers underrate theirs" is still true.
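A concrete version of that point, with invented numbers: make everyone's self-estimate exactly half their score plus 25. The correlation is then a perfect r = 1, even more extreme than the r = 0.99 case, yet everyone below 50 overrates themselves and everyone above 50 underrates themselves.

```python
import numpy as np

score = np.random.default_rng(4).uniform(0, 100, 1_000)
estimate = 0.5 * score + 25    # a deterministic line with slope < 1

r = np.corrcoef(score, estimate)[0, 1]
print(f"r = {r:.3f}")                                       # 1.000
print("all below 50 overrate:", (estimate > score)[score < 50].all())
print("all above 50 underrate:", (estimate < score)[score > 50].all())
```

So "poor performers overrate and good performers underrate" is a claim about the slope and intercept of the relationship, not about how strong the correlation is.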

This is not the "public consciousness" understanding of the DKE. That is the claim that "people who say they are real good and talk about how good they are are actually no better or even worse than the people who say they are bad".

In that case I stand corrected. That seems silly.

This is not how poor correlation is usually defined either; in real life, that's normally given by r, and you can have a very high r (say r = 0.99, an out-of-this-world correlation for anything in the social sciences) while the statement "poor performers overrate their performance and good performers underrate theirs" is still true.

Well, of course "can have very high r and still have [that statement] be true" is true with r < 1, especially since the data is bounded and poor performers naturally have more room to overestimate and good performers to underestimate. I thought the point of the DKE was that r was low.

It might just be that the effect is bullshit and there is nothing to salvage.

I'm not some genius, but because I can use relatively advanced English grammar and know a lot about some things like history and math compared to the average person (though not nearly as much as someone who has dedicated himself to those fields!), I often get people I've just met telling me how smart I am and how they feel they are not capable of doing a bunch of things that come easily to me. To the point that I often find it embarrassing, and I try to get them to have better esteem for their own intellectual abilities.

In other words, I have a lot of experience of people who actually are not good at certain things telling me that in their opinion, they are not good at those things.

On the other hand, I rarely experience someone telling me that they are good at something and then just utterly failing to demonstrate it. I imagine that most people who are inclined that way quickly learn, from the embarrassment they feel after failing, not to go around boasting about those particular skills.

I have also met exceptions, of course, but perhaps they prove the rule.

My experience with the DKE can basically be translated as 'I am knowledgeable and skilled at one thing; therefore, I am knowledgeable and skilled at everything.'

Often with lawyers.

The topical example right now is surely "I can build electric cars and rocket ships (in both cases with a level of intense study that took me a year or more and would take someone without my IQ or work ethic decades) so I can run a social media company by winging it and still have time to shitpost."

I mean, he turned out to be right, did he not?

On the other hand, I rarely experience someone telling me that they are good at something and then just utterly failing to demonstrate it.

With easily demonstrable skills or testable claims this is relatively rare. But with skills or claims that are hard to test I feel like this is relatively common.

People don't tell you that they're "good" at history, economics, epidemiology or whatever, but they clearly believe it, and it's also clearly a false belief.

This might be a different thing but I believe that there is something to the idea that a little knowledge often makes people overconfident, especially when reality doesn't check them on this, which can make them functionally less "informed" than the uneducated, partially because they stop believing their lying eyes.

People "higher up" have slightly different issues. Like having a lot of knowledge in one area often makes people overconfident in the validity of their limited knowledge in other areas. Or they overgeneralise the insights from their area of expertise.

With easily demonstrable skills or testable claims this is relatively rare. But with skills or claims that are hard to test I feel like this is relatively common.

If you've actually read the original Dunning-Kruger paper, they make this point explicitly, using basketball as an example. The percentage of people who think they could go one-on-one with a top NBA player and end up looking like anything other than a complete fool is, I would imagine, statistically indistinguishable from zero. And those few exceptions probably really are mostly outliers in basketball ability relative to the general population, even if they aren't as good as they think they are.

It would be interesting to look at this with more charged examples that people have higher mental error bars on - how many people think they could go one-on-one with a random WNBA player and not look like a total fool? I suspect that some will overestimate their own ability and underestimate women, while others will deflect because they don't want to state that they think women are basically bad at basketball. The 2023 top scorer in the WNBA is only 5'10", so it's likely easier for a typical hobby baller to think he can get a few buckets and/or stops than when considering the obvious mismatch in how trivially Giannis Antetokounmpo will dunk on his head and swat away anything he tries to put up.

It might just be that the effect is bullshit and there is nothing to salvage.

Within the range of intelligence in the kind of people usually recruited for such psychometric studies? Probably.

But lower than that? I think someone with an IQ of, say, 60, in the retarded category, simply doesn't know how bad they are. It seems to me that that kind of general awareness of relative competence arises somewhere on the scale, even if most humans are past it.

I have also met exceptions, of course, but perhaps they prove the rule.

My BP just shot up 20 mm of mercury after hearing that phrase, haha. Exceptions cannot prove the rule. They do the very opposite, at least in the sense most people use the phrase!

The origin of it, which is far more reasonable, is along the lines of seeing a sign saying "visiting hours 3-5 pm" at a hospital, from which we can glean that visiting isn't allowed outside this window. Thus the existence of an "exception" carved out from a general rule suggests the existence of said rule in a broad sense, but you're not using it that way :(

This is the hill I choose to die on, I'm digging trenches and planting mines as we speak.

Exceptions cannot prove the rule. They do the very opposite, at least in the sense most people use the phrase!

What is the sense most people use the phrase in? I've always understood it as, "since the only exception you can think of is an obvious special case that is so different from everything else, you cannot use it to challenge the validity of the general rule". For example, "Prof. Dull will fail you if you miss the paper submission deadline even by a second. Don't even bother asking him for an extension!" - "But what about Tim? He disappeared for two months and Dull didn't fail him!" - "Tim's a victim of murder-suicide, the only surviving member of his family. His mom shot him in the head on the day Dull handed out the assignments and he has spent these two months in the ICU. He's the exception that proves the rule".

"The exception proves the rule" (where "proves" means something closer to "implies") is a useful linguistic shorthand for situations in which the existence of an exception implies the existence of a rule to which there is an exception.

First, I don't think your hypothetical is how this phrase is normally used. The parent example in this thread, which is almost a perfect match for every example I have ever caught in the wild, is literally: I think people are this way, but some people are not, so maybe I am correct and people are actually this way. The added 'authority' of the aphorism is being grossly abused.

In your hypothetical, the exception still does not seem to 'prove' the rule. The rule in your hypothetical is: you can't turn in papers past the deadline. The exception is: some kid was allowed to turn in papers past the deadline. This exception explicitly disproves the rule. You can change the rule to include 'without a very good reason', and then Tim at least serves as a data point about the kinds of good reasons the rule would accept or reject. Still, to me, the rule is clearly being 'proven' by the smuggled-in NotTims who did not get an extension for their various lesser reasons. Basically, your hypothetical is: most evidence seems to prove the rule, and while Tim is an exception, Tim is such an outlier that he should not be considered evidence against the general rule. To phrase that as "He's the exception that proves the rule" seems both confused and wrong. At best he is the exception that fails to disprove the rule.

a sign saying "visiting hours 3-5 pm" at a hospital, from which we can glean that visiting isn't allowed outside this window

Apparently there is some debate about this, with many internet sources supporting your interpretation, but this is not how I understand the phrase "the exception proves the rule." Your example about limited hours (at a hospital, or parking) is commonly used, but I don't really think it demonstrates the true meaning of the phrase. My perspective is: as long as people keep visiting the hospital between 3 and 5 pm, we can't really know what would happen if someone visits outside those hours. Maybe it is really enforced, or maybe the sign is like those vestigial "Please maintain 2 meter distance" signs we see everywhere nowadays, and nobody really cares.

So the way we really prove (in the sense of to test, as in "the proof of the pudding is in the eating") is when a visitor shows up at the hospital at 6 pm. If they are turned away, then there really is a rule that nobody can visit outside the hours of 3-5 pm. If they are let in, there is no such rule. So in other words, it's not the sign, but the attempted visit outside "visiting hours," that proves the rule.

Edit: For transparency, the Wikipedia article which I indirectly referenced here thinks I'm engaging in wishful thinking about what I think the phrase should mean, rather than its actual origin. Which.... may be a fair criticism.

The presumption, which I think is highly justified in reality, is that if someone went to the trouble of explicitly stating an act is allowed during a particular period of time/place/whatever, that implies that it's not allowed or at least discouraged outside of those specific circumstances.

Rules are not necessarily adhered to 100% of the time. In the hospital from which I'm shitposting, there is a rule regarding the timings when visitors are allowed: the typical person coming outside of that will be turned away, but if you're connected or have an important reason, you'll be let through. That does not invalidate the general principle that visitors aren't allowed outside the allotted time.

It might well be that the security guard was napping and didn't notice, but that doesn't mean there isn't a rule, in the same manner that successfully launching a heist on Fort Knox isn't proof that stealing is legal.

Yeah, it makes sense that sometimes it's just easier to state the exceptions than all the allowed cases. And apparently the Latin origin of the phrase, "exceptio probat regulam in casibus non exceptis," is something like "the exception demonstrates the rule for non-excepted cases", so I don't really have an etymological leg to stand on. I just like my understanding of the phrase so if we are going to move away from the original meaning, I'm trying to steer people towards my preference rather than the nonsensical "the counter-example that invalidates my supposed rule actually proves that my rule is correct!"

Edit: After discussion with AI, I have decided that I need to be charitable and allow that the common, modern usage also is sensible, under this interpretation: "the exception proves the rule" --> "the distinctiveness and memorability of an exception highlight the regularity or norm of other cases"

In cognitive psychology, there's a concept known as the "von Restorff effect" or "isolation effect," which states that an item that "stands out like a sore thumb" is more likely to be remembered than other items. Applying this to the phrase, an exception is memorable precisely because it deviates from the norm. Its uniqueness and the mental emphasis we place on it implicitly reinforce the understanding that it is an outlier, while the standard or usual cases don't stand out in memory because they conform to the expected pattern.

This is nice. I always assumed the intended meaning must be something related to this, until I learned the real origin. I.e., something like "Okay, we admit this is an exception (points at really weird special case), but look how far we had to go to come up with an exception - clearly the rule is going to apply the overwhelming majority of the time."

Stop making my sworn enemies seem reasonable, you're doing far too good a job at it!

shakes fist

The origin of it, which is far more reasonable, is along the lines of seeing a sign saying "visiting hours 3-5 pm" at a hospital, from which we can glean that visiting isn't allowed outside this window. Thus the existence of an "exception" carved out from a general rule suggests the existence of said rule in a broad sense, but you're not using it that way :(

I didn't actually know this, I always thought that the idiom was stupid, but this explains a lot.

We mock the South Pacific "cargo cults" specifically, but "copy what we see even if our copy no longer makes sense" is a very general human failing.

I wouldn't be surprised if at this point the idiom is used more to mean "I treat evidence against me as evidence in favor" than for its original meaning.

To muddy the waters further, there's also an in-between meaning: if someone points out that an X is famous for also being Y, that's disproof rather than support for the 'rule' that no X can be Y, but it also often does support the 'rule' that Xs are Y at a disproportionately low rate, because otherwise the exceptions would have been ordinary rather than famed.

You're welcome. After learning the explanation a while back, my furious urge to genocide anyone who uses it casually has been tempered to mere homicide of the unrepentant, since there's a small chance they're using it in the original sense. A very small chance indeed :(

I was using "the exception that proves the rule" as an idiom but I agree with you that it does not make sense literally.

So so much of pop science is utter garbage that it is amazing it ever got the reach it did.

Another example is the "human brains don't mature until 25" BS, where even the authors of the original paper this statement comes from don't agree with it and are surprised that this, out of all the claims in the paper, was the takeaway that entered public consciousness...

Another of my most hated ones is the idea that only 7% of meaning comes from words (the rest from body language and tone). This doesn't even make any fucking sense, much less have any evidence that it's true. It's a terrible description (like, you'd have to be borderline-illiterate to go to the original source and have this as your takeaway) of a study that was making a completely different, much narrower point. (Specifically, that if your verbal and nonverbal communication don't match, people will generally believe the nonverbal portion - e.g. you say to your spouse "everything's fine!" in an angry, aggressive tone, this will not persuade them that everything's fine.)

Thinking for ten seconds about the last non-trivial conversation you had (roughly speaking, one where novel information was exchanged on a topic other than the participants' current emotional states) should be enough to disprove this idea decisively. How, you should ask yourself, could 93% of that information have been exchanged in a way that was independent of the words used? In most cases, you will find that the question not only does not have a good answer, but is hard to even make sense of.

This has not prevented the idea from showing up in training materials from major multinational corporations, not to mention the Web sites of universities that should damn well know better.

Yeah, if only 7% of the meaning comes from words, then this dude who has Wernicke's aphasia is communicating just as much as a videolink of a normal conversation where 7% of the time the screen disappears and the person is muted. I find that basically impossible to believe.

Thinking for ten seconds about the last non-trivial conversation you had (roughly speaking, one where novel information was exchanged on a topic other than the participants' current emotional states) should be enough to disprove this idea decisively.

Yeah, this seems absolutely impossible to believe. The information content of someone telling me, "I would like pizza for dinner" via text message is almost identical to them saying the same thing face-to-face. I might glean some additional information about their enthusiasm for said pizza in the latter case, but that's the 7% rather than the 93%. I'm not familiar with the study, but it sure seems like it's just a measurement of a claim that everyone already believes, which is that words aren't great at overriding non-verbal signals that are generally more honest when the two are contradictory.

even the authors of the original paper this statement comes from don't agree with it

Do you have a reference for this?

Sure, here's an article from Slate from last year: https://slate.com/technology/2022/11/brain-development-25-year-old-mature-myth.html

Quoting:

They also found important clues to brain function. For instance, a 2016 study found that when faced with negative emotion, 18- to 21-year-olds had brain activity in the prefrontal cortices that looked more like that of younger teenagers than that of people over 21. Alexandra Cohen, the lead author of that study and now a neuroscientist at Emory University, said the scientific consensus is that brain development continues into people’s 20s.

But, she wrote in an email, “I don’t think there’s anything magical about the age of 25.”

Yet we’ve seen that many people do believe something special happens at 25. That’s the result of pop culture telephone: As people reference the takeaways from Cohen and other researchers’ work, the nuance gets lost. For example, to add an air of credibility to its DiCaprio theory, YourTango excerpts a passage from a 2012 New York Times op-ed written by the psychologist Larry Steinberg: “Significant changes in brain anatomy and activity are still taking place during young adulthood, especially in prefrontal regions that are important for planning ahead, anticipating the future consequences of one’s decisions, controlling impulses, and comparing risk and reward,” he wrote.

And further down:

To complicate things further, there’s a huge amount of variability between individual brains. Just as you might stop growing taller at 23, or 17—or, if you’re like me, 12—the age that corresponds with brain plateaus can differ greatly from person to person. In one study, participants ranged from 7 to 30 years old, and researchers tried to predict each person’s “brain age” by mapping the connections in each person’s brain. Their age predictions accounted for about 55 percent of the variance among the participants, but far from all of it. “Some 8-year-old brains exhibited a greater ‘maturation index’ than some 25 year old brains,” Somerville wrote in her Neuron review. Some of those differences might be random genetic variation, but people’s behavior and lived experience contribute as well. “Childhood experiences, epigenetics, substance use, genetics related to anxiety, psychosis, and ADHD—all that affects brain development as well,” said Sarah Mallard Wakefield, a forensic psychiatrist.

Bolding mine.

Seriously read the whole article, it's not too long and definitely worth it.

Considering that we're in an aging society dominated by boomers, where ageism is real, I don't find it surprising at all.