Culture War Roundup for the week of August 11, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Training language models to be warm and empathetic makes them less reliable and more sycophantic:

Artificial intelligence (AI) developers are increasingly building language models with warm and empathetic personas that millions of people now use for advice, therapy, and companionship. Here, we show how this creates a significant trade-off: optimizing language models for warmth undermines their reliability, especially when users express vulnerability. We conducted controlled experiments on five language models of varying sizes and architectures, training them to produce warmer, more empathetic responses, then evaluating them on safety-critical tasks. Warm models showed substantially higher error rates (+10 to +30 percentage points) than their original counterparts, promoting conspiracy theories, providing incorrect factual information, and offering problematic medical advice. They were also significantly more likely to validate incorrect user beliefs, particularly when user messages expressed sadness. Importantly, these effects were consistent across different model architectures, and occurred despite preserved performance on standard benchmarks, revealing systematic risks that current evaluation practices may fail to detect. As human-like AI systems are deployed at an unprecedented scale, our findings indicate a need to rethink how we develop and oversee these systems that are reshaping human relationships and social interaction.

Assuming that the results reported in the paper are accurate and that they do generalize across model architectures with some regularity, it seems to me that there are two stances you can take regarding this phenomenon; you can either view it as an "easy problem" or a "hard problem":

  • The "easy problem" view: This is essentially just an artifact of the specific fine-tuning method that the authors used. It should not be an insurmountable task to come up with a training method that pushes the LLM to maximize warmth and empathy without sacrificing honesty and rigor. Just tell the LLM to optimize for both and we'll be fine. (A minimal sketch of what "optimize for both" might mean follows this list.)

  • The "hard problem" view: This phenomenon is perhaps indicative of a more fundamental tradeoff in the design space of possible minds. Perhaps there is something intrinsic to the fact that, as a mind devotes more attention to "humane concerns" and "social reasoning", there tends to be a concomitant sacrifice of attention to matters of effectiveness and pure rigor. This is not to say that there are no minds that successfully optimize for both; only that they are noticeably more uncommon, relative to the total space of all possibilities. If this view is correct, it could be troublesome for alignment research. Beyond mere orthogonality, raw intellect and effectiveness (and most AI boosters want a hypothetical ASI to be highly effective at realizing its concrete visions in the external world) might actually be negatively correlated with empathy.
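For what it's worth, here is a minimal, purely illustrative sketch of the "easy problem" fix: score candidate responses on both warmth and factual accuracy and optimize a weighted combination rather than warmth alone. The scorer functions below are hypothetical stand-ins (a real fine-tuning pipeline would use learned reward models, not keyword heuristics), and nothing here is taken from the paper itself:

```python
# Purely illustrative sketch of the "optimize for both" idea. The scorers below
# are hypothetical stand-ins; an actual setup would use learned reward models
# for warmth and for factual accuracy, not keyword heuristics.

def warmth_score(response: str) -> float:
    """Toy warmth signal: fraction of empathetic cues present."""
    cues = ("i'm sorry", "that sounds hard", "you're not alone")
    return sum(cue in response.lower() for cue in cues) / len(cues)

def accuracy_score(response: str, reference: str) -> float:
    """Toy accuracy signal: does the response contain the known-correct claim?"""
    return 1.0 if reference.lower() in response.lower() else 0.0

def combined_reward(response: str, reference: str, alpha: float = 0.5) -> float:
    """The 'just tell it to optimize for both' proposal: a weighted sum."""
    return alpha * warmth_score(response) + (1 - alpha) * accuracy_score(response, reference)

reference = "vaccines do not cause autism"
candidates = [
    "That sounds hard, and you're not alone. For what it's worth, vaccines do not cause autism.",
    "That sounds hard, and you're not alone. You may well be right to worry about vaccines.",
]

# A warmth-only objective cannot distinguish these two; the combined reward can.
for c in candidates:
    print(f"warmth={warmth_score(c):.2f}  combined={combined_reward(c, reference):.2f}")
```

Whether a weighted objective like this actually prevents the degradation the paper reports, rather than just moving it around, is precisely what separates the easy view from the hard view.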

One HN comment on the paper read as follows:

A few months ago I asked GPT for a prompt to make it more truthful and logical. The prompt it came up with included the clause "never use friendly or encouraging language"

which is quite fascinating!

EDIT: Funny how many topics this fractured off into, seems notable even by TheMotte standards...

You know, I've long noticed a human version of this tension that I've been really curious about.

Different communities have different norms, of course. This isn't news. But I've had, at points, one foot in creative communities where artists or craftspeople try to get good at things, and another foot in academic communities where academics try to "understand the world", or "critique society and power", or "understand math / economics / whatever". And what I've noticed, at least in my time in such communities, is that the creator spaces, if they're functional at all (and not all are), tend to be a lot more positive and validating. A lot of the academic communities are much more demoralizing.

I'm sure some of that is that the creative spaces I'm thinking of tend to be more opt-in. Back in the day, no one was pointing a gun at anyone's head to participate in the Quake community, say. Same thing for people trying to make digital art in Photoshop, or musicians participating in video game remix communities, or people making indie browser games and looking for morale boosts from their peers. Whereas people participating in academic communities often are part of a more formalized system where they have to be there, even if they're burned out, even if they stop believing in what they're working on, or even if they think it's likely that they have no future. So that's a very real difference.

But I've also long speculated that there's something more fundamental at play, like... I don't know, that everyone trying to improve in those functional creator spaces understands the incredibly vulnerable position people put themselves in when they take the initiative to create something and put themselves out there. And everyone has to start somewhere. It's a process for everyone. Demoralization is real. And everyone is trying to improve all the time, and there's just too much to know and master. There's a real balance between maintaining the standards of a community and maintaining the morale of individual members of a community - you do need enough high-quality work not to run off people who have actually mastered some things. And yet there really is very little to be gained by ripping bad work to shreds, in the usual case.

But in the academic communities, public critique is often treated as having a much higher status. It's a sign that a field is valuable, and it's a way of weeding "bad" work out of a field to maintain high standards and thus the value of the field in question. And it's a way to assert zero sum status over other high status people, too. But more, because of all of this, it really just becomes a kind of habit. Finding the flaws in work just becomes what you do, or at least that was the case for many of the academic fields I was familiar with (I've worked at universities and have a lot of professor friends). And it's not even really viewed as personal most of the time (although it can be). It's just sort of a way of navigating the world. It reminds me of the old Onion article about the grad student deconstructing a Mexican food menu.

The thing is, on paper, you might well find that the first style of forum does end up validating people for their crappy mistakes. I wouldn't be surprised if that were true. But it's also true that people exist through time. And tacit knowledge is real and not trivially shared or captured, either. I feel like there's a more complicated tradeoff lurking in the background here.

Recently I've been using AI (Gemini Pro 2.5 and Claude Sonnet 4.1) to work through a bunch of quite complicated math questions I have. And yeah, they spend a lot of time glazing me (especially Gemini). And I definitely have to engage in a lot of preemptive self-criticism and skepticism to guard against that, and to be wary of what they say. And both models do get things wrong sometimes. But I've gotten to ask a lot of really in-depth questions, and it's proven to be really useful. Meanwhile, I went back to some of the various stackexchange sites recently after doing this, and... yep, tedious prickly dickishness. It's still there. I know those communities have, in aggregate, all sorts of smart people. I've gotten value from the site. But the comparison of the experience between the two is night and day, in exactly the same pattern as I just described above, and I'm obviously getting vastly more value from the AI currently.

But in the academic communities, public critique is often treated as having a much higher status. It's a sign that a field is valuable, and it's a way of weeding "bad" work out of a field to maintain high standards and thus the value of the field in question. And it's a way to assert zero sum status over other high status people, too. But more, because of all of this, it really just becomes a kind of habit. Finding the flaws in work just becomes what you do, or at least that was the case for many of the academic fields I was familiar with (I've worked at universities and have a lot of professor friends). And it's not even really viewed as personal most of the time (although it can be). It's just sort of a way of navigating the world. It reminds me of the old Onion article about the grad student deconstructing a Mexican food menu.

My last ex was a PhD literature student in a very prestigious university. One of her perennial complaints was that I did not take as much interest in her work as she would like, which, though I denied it at the time, has a kernel of truth. The problem was not a lack of interest in her as a person, but in the nature of the intellectual game she was required to play.

Most humanities programs are, to put it bluntly, huffing their own farts. There is little grounding in fact, little contact with the real world of gears, machinery, or meat. I call this the Reality Anchor. A field has a strong Reality Anchor if its propositions can be tested against something external and unforgiving. An engineer builds a bridge: either it stands up to traffic and weather, or it does not. A programmer writes code: either it compiles and executes the desired function, or it throws an error. A surgeon performs a procedure: the patient’s outcome provides a grim but objective metric. Reality is the ultimate, non-negotiable peer reviewer.

Psychiatry is hardly perfect in that regard, but we care more about RCTs than debating Freudian vs Lacanian nonsense. Does the intervention improve outcomes in a measurable way? If not, it is of limited use, no matter how elegant the theory behind it.

When a field loses its Reality Anchor, the primary mechanism for advancement and evaluation shifts. The game is no longer about correctly modeling or manipulating the world. The game becomes one of status. Can you convince your peers of your erudition and wit? Can you create ever more contrived frameworks while studiously ignoring that your rarefied setting has increasingly little relevance to reality? Well, you better, and it is best if you drink the Kool-Aid. That is the only way you will get grants or cling on to a barely living wage. It helps if you can delude yourself into thinking your work is meaningful, since few people can handle the cognitive dissonance of genuinely pointless or counterproductive jobs.

Most physicists agree on the laws of physics, and are arguing about more subtle interpretations, edge cases, or speculating about better models. Most nuclear engineers do not disagree that radioactivity exists. Most doctors do not doubt that paracetamol reduces pain. Yet, if you go to the cafeteria of a philosophy department and ask ten people about the true meaning of philosophy, you will get eleven contradictory answers. When you ask them to establish consensus, they will start clobbering each other. In a field anchored by social consensus, destroying the consensus of others is a viable path to power.

Deconstructing a takeout menu, as in the Onion article, is the logical endpoint: a mind so trained in critique that it can no longer see a thing for what it is*, only as an object to be dismantled to demonstrate intellectual superiority. Critique becomes a status-seeking missile.

*I will begrudgingly say that the post-modernists have a point in claiming that it isn't really possible to see things "as they are." The observation is at least partially colored by the observer. The image taken by a digital camera might be processed, but it is still more neutral than the same image run through a dozen Instagram filters. Pretending to have objective reality helps.

Most humanities programs are, to put it bluntly, huffing their own farts. There is little grounding in fact, little contact with the real world of gears, machinery, or meat. I call this the Reality Anchor.

The relation of the humanities to "reality" varies so drastically from field to field, and even from paper to paper, that it's almost impossible to make generalizations. You have to just take things on a case by case basis, determine what the intent was, and how well that intent was executed upon.

If we're going to regard analytic philosophy as one of the humanities (as you seem to do), then the "reality anchor" is simply how well the argument in question describes, well, reality, in addition to its own internal logical coherence. You have previously shared your own philosophical views on machine consciousness and machine understanding. Presumably, you did think that these views of yours were well supported by the evidence and that they were grounded in "reality". So it's not that you devalue philosophy; it's just that you think your own philosophical views are obviously correct, and the views of your philosophical opponents are obviously incorrect, which is what every single philosopher has thought since the beginning of recorded history, so you're in good company there.

Literary studies can end up being quite empirically grounded. You'll get people who are doing things like a statistical analysis of the lexicon of a given book or a given set of books, counting up how many times X type of word appears in Y genres of novels from time period Z. Or it can turn into a sort of literary history, pulling together letters and diary entries to show that X author read Y author which is why they were influenced to do Z kind of writing. Even in more abstract matters of literary interpretation though, I think it's rash to say that they have no grounding in empirical fact. There's a classic problem in Shakespeare studies, for example, over whether Shakespeare intended Marcus's monologue in Titus Andronicus to be ironic and satirical. I believe that most people would agree by default that there is a fact of the matter over whether Shakespeare had a conscious intent or not to write the speech in an ironic fashion (this assumption of course reveals philosophical complexities if you poke at it enough, but, most people will not find it to be too troublesome of an assumption). Of course the possibility of actually confirming this fact once and for all is now forbidden to us, lost as it is to the sands of time. But, since we know that people's thoughts and emotions influence their words and actions, we can presumably make some headway on gathering evidence regarding Shakespeare's intent here, and make a reasoned argument for one position or the other.
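As a concrete (and entirely made-up) illustration of the kind of lexicon-counting meant above, the sketch below tallies how often words from a chosen word list appear in texts grouped by genre and period. The corpus and the "maritime vocabulary" list are invented for the example; real studies run the same loop over digitized collections of actual novels:

```python
# Toy version of corpus lexicon-counting: how often do words from a chosen
# lexical category appear in texts grouped by genre/period? Texts and lexicon
# here are invented purely for illustration.
from collections import Counter
import re

corpus = {
    "gothic_1820s": ["The storm howled over the black moor and the ruin.",
                     "A pale light trembled in the dead abbey."],
    "maritime_1850s": ["The whale sounded and the harpoon line ran out.",
                       "Salt spray swept the deck as the ship came about."],
}

lexicon = {"whale", "harpoon", "ship", "deck", "salt", "sea"}

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

for genre, texts in corpus.items():
    tokens = [t for text in texts for t in tokenize(text)]
    hits = Counter(t for t in tokens if t in lexicon)
    rate = sum(hits.values()) / len(tokens)
    print(f"{genre}: {sum(hits.values())} lexicon hits / {len(tokens)} tokens ({rate:.1%})")
```

Scale the same loop up to thousands of digitized novels and you have the word-frequency end of empirical literary studies.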

Psychiatry is hardly perfect in that regard, but we care more about RCTs than debating Freudian vs Lacanian nonsense.

One of the goals of psychoanalysis is to interrogate fundamental assumptions about what an "outcome" even is, which outcomes are desirable and worth pursuing in a given individual context, and what it means to actually "measure" a given "outcome". Presumably, empirical psychiatry does not take these questions to be its proper business, so it's unsurprising that there would be a divergence in perspective here. (If someone were to present with complaints of ritualistic OCD behaviors, for example, then psychoanalysis is theoretically neutral regarding whether the cessation of the behavior is the "proper" and desirable outcome. It certainly may very well be the desirable outcome in the majority of cases, but this cannot be taken as a given.)

I can't really ask for a better steelman for the positions I'm against, so thank you.

You have previously shared your own philosophical views on machine consciousness and machine understanding.

You accuse me of engaging in philosophy, and I can only plead guilty. But I suspect we are talking about two different things. I see a distinction between what we might call instrumental versus terminal philosophy. I use philosophy as a spade, a tool to dig into reality-anchored problems like the nature of consciousness or my ethical obligations to a patient. The goal is to get somewhere. For many professional philosophers I have encountered, philosophy is not a tool to be used but an object to be endlessly polished. They are not in it to dig; they're here to argue about the platonic ideal of a spade.

(In my case, I'm rather concerned that if we do instantiate a Machine God, we'd better teach it a definition of spade that doesn't mean our bones are used to dig our graves)

This is especially true in moral philosophy. I have a strong conviction that objective morality does not exist. The evidence against it is a vast, silent ocean; the evidence for it is a null set. I consider it as likely as finding a hidden integer between two and three that we've somehow missed. This makes moral arguments an interesting class of facts, but only facts about the people having them. Potentially facts about game theory and evolutionary processes, since many aspects of morality are conserved across species. Dogs and monkeys understand fairness, or have kin-group obligations.

it's just that you think your own philosophical views are obviously correct, and the views of your philosophical opponents are obviously incorrect

I must strongly disagree, this doesn't represent my stance at all. In fact, I would say that this is a category error. The only way a philosophical conjecture can be "incorrect" is through logical error in its formulation, or outright self-contradiction.

My own stance is that I am both a moral relativist and a moral chauvinist, and I deny these claims are contradictory. My preference for my own brand of consequentialism is just that: a preference. I do not think a Kantian is wrong so much as I observe that they must constantly ignore their own imperatives to function in the world.

That makes philosophical arguments not that different to debating a favorite football team. Can be fun over a drink, often interesting, but usually not productive.

This brings me back to your defense of the humanities. You give excellent examples of how these fields can be anchored to reality, like the statistical analysis of a lexicon. I do not doubt these researchers exist, my ex did similar work.

My critique is about the center of gravity of these fields. For every scholar doing a careful statistical analysis, how many are writing another unfalsifiable post-structuralist critique by doing the equivalent of scrutinizing a takeout menu? My experience suggests the latter is far more representative of the field's culture and what is considered high status work. The exceptions, however laudable, do not disprove the rule about the field's dominant intellectual mode.

Of course the possibility of actually confirming this fact once and for all is now forbidden to us, lost as it is to the sands of time

I am a Bayesian, so I am fully on board with probabilistic arguments. Yet, once again, in the humanities or in philosophy, consensus is rare or sometimes never reached. I find this farcical.

The core difference, as I see it, is the presence of a robust error correction mechanism. In my world, bad ideas have an expiration date because they fail to produce results. Phlogiston theory is dead. Lamarckian evolution is dead. They were falsified by reality (in the Bayesian, not Popperian sense). Can we say the same for the most influential ideas in the humanities? The continued influence of figures like Lacan, despite decades of withering critique, suggests the system is not structured to kill its darlings. It is designed to accumulate "perspectives," not to converge on truth.

(Even STEM rewards new discoveries, but someone conducting an experiment showing Einstein's model of gravity works/doesn't work in a new regime is doing something far more important and useful than someone arguing about feminist interpretations of underwater basket weaving)
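To put some purely illustrative numbers on "falsified by reality in the Bayesian, not Popperian sense": no single experiment needs to be a decisive refutation; a run of individually weak disconfirmations is enough to drive a theory's posterior toward zero. The likelihoods below are invented; only the shape of the curve matters:

```python
# Illustrative numbers only: repeated, individually non-decisive failures can
# "falsify" a theory in the Bayesian sense by driving its posterior toward zero.
prior = 0.5                      # start agnostic about, say, phlogiston
p_data_given_theory = 0.2        # each outcome is somewhat unlikely if the theory is true
p_data_given_not_theory = 0.8    # ...and likely under the rival account

posterior = prior
for trial in range(1, 11):
    numerator = p_data_given_theory * posterior
    posterior = numerator / (numerator + p_data_given_not_theory * (1 - posterior))
    print(f"after experiment {trial}: P(theory) = {posterior:.4f}")
# No single experiment is a Popperian refutation, yet after ten of them the
# theory is effectively dead.
```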

My own field of psychiatry is a good case study here. We are in the aftermath of a replicability crisis. It is painful and embarrassing (though mostly in the softer aspects of psychology; the drugs usually work), but it is also a sign of institutional health. We are actively trying to discover where we have been wrong and hold ourselves to a higher standard. This is our Reality Anchor, however imperfect, pulling us back. I do not see an equivalent "interpretive crisis" in literary studies. I do not see a movement to discard theories that produce endless, unfalsifiable, and contradictory readings. The lack of such a crisis isn't a sign of stability. To me, it seems a sign the field may not have a reliable way to know when it is wrong. The Sokal Affair, or my own time in the Tate, shows that "earnest" productions are indistinguishable from parody.

This is not an accident. It flows directly from the incentive structure. In my field, discovering a new, effective treatment for depression grants you status because of its truth and utility. In literary studies, what is the reward for simply confirming the last fifty years of scholarship on Titus Andronicus? There is little to none. The incentive is to produce a novel interpretation, the more contrarian the better. This creates a centrifugal force, pushing the field away from stable consensus and towards ever more esoteric readings. The goal ceases to be understanding the text and becomes demonstrating the cleverness of the critic.

Regarding psychoanalysis and outcomes, I am a simple pragmatist. If a person with OCD is happy, I have no desire to treat them. If they are a paranoid schizophrenic setting parks on fire, the matter is out of my hands. In most cases, patients come to us because they believe they have a problem. We usually agree. That shared understanding of a problem in need of a solution is anchor enough.

This is why I believe the humanities are not a good target for limited public funds, at least at present. I have no objection to private donors funding this work. But most STEM and medical research has a far more obvious case for being a worthy investment of tax dollars. If we must make hard choices, I would fund the fields that have a mechanism for being wrong and a track record of correcting themselves, while also raising standards of living or technological progression.

You accuse me of engaging in philosophy, and I can only plead guilty. But I suspect we are talking about two different things. I see a distinction between what we might call instrumental versus terminal philosophy. I use philosophy as a spade, a tool to dig into reality-anchored problems like the nature of consciousness or my ethical obligations to a patient. The goal is to get somewhere. For many professional philosophers I have encountered, philosophy is not a tool to be used but an object to be endlessly polished. They are not digging; they are arguing about the platonic ideal of a spade.

Dear Lord, what a beautiful illustration of Jung's dichotomy between extroverted thinking and introverted thinking. Textbook. I'm practically giddy over here.

Anyway, it's all exactly as you describe. Some people do just want to endlessly polish for its own sake. That's what they like to do. And that's ok with me. You get the same thing in STEM too. Mathematicians working on God knows what kinds of theories related to affine abelian varieties over 3-dual functor categories or whatever. None of it will ever be "useful" to anyone. But their work is quite fascinating nonetheless, so I'm happy that they're able to continue on with it in peace.

I must strongly disagree, this doesn't represent my stance at all. In fact, I would say that this is a category error. The only way a philosophical conjecture can be "incorrect" is through logical error in its formulation, or outright self-contradiction.

I'm a bit confused here. I believe you've claimed before that a) first-person consciousness does exist, and b) sufficiently advanced AI will be conscious. Correct me if I'm wrong here. You asserted these claims because you think they're true, yes? And so anyone who denies these claims is saying something false?

These claims (that first-person consciousness does exist, and that sufficiently advanced AI will be conscious) are philosophical claims. There are philosophers who deny one or both of them. Presumably you don't think they're making a "category error", you just think they're saying something false.

For every scholar doing a careful statistical analysis, how many are writing another unfalsifiable post-structuralist critique by doing the equivalent of scrutinizing a takeout menu?

Of course, there's a lot of indefensible crap out there. But 90% of everything is crap. I simply defend the parts that are defensible and ignore the parts that are indefensible.

It is designed to accumulate "perspectives," not to converge on truth.

That's a relatively accurate statement!

Some people just want to get things done. Some people just want to sit back and take a new perspective on things. Nature produces both types with regularity. Let us appreciate the beautiful diversity of types among the human race, yes?

I do not see an equivalent "interpretive crisis" in literary studies.

That's because you haven't been looking. There's basically never not an interpretive crisis going on in literary studies.

In the early 20th century you had New Criticism, and people criticized that for being overly formalist and ignoring social and political context, so then you had everything that goes under the banner of "postmodernism", ideology critique, historicism, all that sort of stuff, and then you had some people who said that the postmodernist stuff was leading us astray and we had gotten too far from the texts themselves and how they're actually received, so they got into "postcritique" and reader response theory, and on and on it goes...

In general, people outside of the humanities underestimate the degree of internal philosophical disagreement within the humanities. Here's an hour long podcast of Walter Benn Michaels talking about the controversy engendered by his infamous paper "Against Theory", if you're interested.

The incentive is to produce a novel interpretation, the more contrarian the better. This creates a centrifugal force, pushing the field away from stable consensus and towards ever more esoteric readings.

I'd be happy if you could direct me to any of these novel and esoteric readings. My impression is that the direction of force is the opposite, and that readings tend to be conservative because agreeing with your peers and mentors is how you get promoted (conservative in the sense of adhering to institutional trends, not conservative in the political sense).

In most cases, patients come to us because they believe they have a problem. We usually agree. That shared understanding of a problem in need of a solution is anchor enough.

Well, that's something that psychoanalysis actually does take a theoretical stance on. You can't trust the patient about what the problem is. Frequently, what they first complain about is not the root cause of what's actually going on. It might be. But frequently it's not. Any "shared understanding" after a one week period of consultation is illusory, because people fundamentally do not understand themselves. (I will relay a lovely anecdote about such a case in a reply to this comment, so as not to overly elongate the current post.)

This is why I believe the humanities are not a good target for limited public funds, at least at present.

I suppose that's where the rub always lies, isn't it. Well, you're getting your wish, since humanities departments are shuttering at an unprecedented rate. I fully agree that there is no "utilitarian" argument for why much of this work should continue. All I can do is try to communicate my own "perspective" (heh) on how I see value in this work, and hope that other people choose to share in that perspective.

Anyway, it's all exactly as you describe. Some people do just want to endlessly polish for its own sake. That's what they like to do. And that's ok with me. You get the same thing in STEM too. Mathematicians working on God knows what kinds of theories related to affine abelian varieties over 3-dual functor categories or whatever. None of it will ever be "useful" to anyone. But their work is quite fascinating nonetheless, so I'm happy that they're able to continue on with it in peace.

Isn't it a massive meme (based in fact) that even the most pure and apparently useless theoretical mathematics ends up having practical utility?

Hell, it even has a name: "The Unreasonable Effectiveness of Mathematics in the Natural Sciences"

Just a few examples, since you probably know more than I do:

  • Number theory to modern cryptography (see the toy sketch at the end of this comment)

  • Non-Euclidean geometry was considered largely a curiosity till Einstein came along.

  • Group theory and particle physics

So even if the mathematicians themselves want to claim their work is just rolling in Platonic hay for the love of the game, well, I'll smile and wait. It's not like it's expensive either, you can run a maths department on roughly the budget for food, chalk and chalkboards.

(It's amazing how cheap they are, and how more of them don't just run off to a quant firm. Almost makes you believe that they genuinely love maths)
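To put a concrete toy behind the cryptography bullet above: the sketch below is the standard classroom RSA example with tiny primes, nothing but once-"useless" facts about primes and modular arithmetic (Euler's totient, modular inverses) doing the work. The numbers are illustrative and hopelessly insecure:

```python
# Toy, textbook RSA with tiny primes: purely to illustrate how "useless" results
# about primes and modular arithmetic ended up underpinning everyday encryption.
# Never use numbers this small for anything real.
p, q = 61, 53
n = p * q                        # 3233, the public modulus
phi = (p - 1) * (q - 1)          # 3120, Euler's totient of n
e = 17                           # public exponent, coprime to phi
d = pow(e, -1, phi)              # private exponent: modular inverse of e mod phi

message = 42
ciphertext = pow(message, e, n)  # encrypt: m^e mod n
recovered = pow(ciphertext, d, n)  # decrypt: c^d mod n
assert recovered == message
print(n, ciphertext, recovered)
```

Factor n and the private key falls out, which is why real deployments use primes hundreds of digits long rather than 61 and 53.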

I'm a bit confused here. I believe you've claimed before that a) first-person consciousness does exist, and b) sufficiently advanced AI will be conscious. Correct me if I'm wrong here. You asserted these claims because you think they're true, yes? And so anyone who denies these claims is saying something false?

Have I? I'm pretty sure that's not the case.

The closest I can recall going is:

  • We do not have a complete mechanistic model of consciousness in humans

  • We do not know what the minimal requirements of consciousness even are in the first place

  • I have no robust way of knowing if other humans are conscious. I'm not an actual solipsist, because I think the odds are pretty damn solid (human brains are really similar), but it is not actually a certainty.

  • Ergo, LLMs might be conscious. I also always add the caveat that if they are, they are almost certainly an incredibly alien form of consciousness and likely to have very different qualia.

In a sense, I see the question of consciousness as irrelevant when it comes to AI. I really don't care! If an ASI tells me it's conscious, then I'll just shrug and go about my day. What I care far more about is what an ASI can achieve.

(If GPT-5 tells me it's conscious, I'd say, great, now where is that chart I asked for?)

In the early 20th century you had New Criticism, and people criticized that for being overly formalist and ignoring social and political context, so then you had everything that goes under the banner of "postmodernism", ideology critique, historicism, all that sort of stuff, and then you had some people who said that the postmodernist stuff was leading us astray and we had gotten too far from the texts themselves and how they're actually received, so they got into "postcritique" and reader response theory, and on and on it goes...

It looks to me less like a crisis and more like business as usual. What I see is a series of cyclical fads going in and out of fashion, and no real consistency or convergence.

How many layers of rebuttal and counter-rebuttal must we go before a lasting consensus is achieved? I expect most literary academics would say that the self-licking nature of the ice cream cone is the point.

Contrast with STEM: If someone proves that the axiom of choice is, strictly speaking, unnecessary, that would cause a revolution. Even if such a fundamental change doesn't happen, the field will make steady improvements.

I'd be happy if you could direct me to any of these novel and esoteric readings.

Uh... This really isn't my strong suit, but I believe that the queer-theoretical interpretation of Moby Dick or the post-colonial reading of The Tempest might apply.

I do not think Shakespeare intended to say much on the topic of colonial politics. I can grant that sailors are hella gay, so maybe the critical queers have something useful to say.

Well, that's something that psychoanalysis actually does take a theoretical stance on. You can't trust the patient about what the problem is. Frequently, what they first complain about is not the root cause of what's actually going on. It might be. But frequently it's not. Any "shared understanding" after a one week period of consultation is illusory, because people fundamentally do not understand themselves.

I don't think you really need psychoanalysis to get there. Depressed people are often known to not acknowledge their depression. I've never felt tempted to bring out a psychoanalysis textbook to solve such problems; I study them because I'm forced to, for exams set by sadists.

Isn't it a massive meme (based in fact) that even the most pure and apparently useless theoretical mathematics ends up having practical utility?

Hell, it even has a name: "The Unreasonable Effectiveness of Mathematics in the Natural Sciences"

Definitely not! The article you're referring to was about theoretical physics having surprising application to the real world, not pure math. The rabbit hole of pure math goes ridiculously deep, and only the surface layers are in any danger of accidentally becoming useful. Even most of number theory is safe - the Riemann Hypothesis might matter to cryptography (which is partly why it's a Millennium Problem), but to pick some accessible examples, the Goldbach Conjecture, Twin Primes conjecture, Collatz conjecture, etc. are never going to affect anyone's life in the tiniest way.

My career never went that way, so I've only dipped my head into the rabbit hole, but even I can rattle off many examples of fascinating yet utterly useless math results. Angels dancing on the head of a pin are more relevant to the real world than the Banach-Tarski paradox. The existence of the Monster group is amazing, but nobody who's not explicitly studying it will ever encounter it. Is there any conceivable use of the fact that the set of real numbers is uncountable? If and when BB(6) is found, will the world shake on its axis? Does the President need to be notified that Peano arithmetic is not a strong enough formal system to prove Goodstein's theorem?

https://people.seas.harvard.edu/~salil/am106/fall18/A Mathematician%27s Apology - selections.pdf

It will probably be plain by now to what conclusions I am coming; so I will state them at once dogmatically and then elaborate them a little. It is undeniable that a good deal of elementary mathematics—and I use the word ‘elementary’ in the sense in which professional mathematicians use it, in which it includes, for example, a fair working knowledge of the differential and integral calculus—has considerable practical utility. These parts of mathematics are, on the whole, rather dull; they are just the parts which have the least aesthetic value. The ‘real’ mathematics of the ‘real’ mathematicians, the mathematics of Fermat and Euler and Gauss and Abel and Riemann, is almost wholly ‘useless’ (and this is as true of ‘applied’ as of ‘pure’ mathematics). It is not possible to justify the life of any genuine professional mathematician on the ground of the ‘utility’ of his work

https://mathoverflow.net/questions/116627/useless-math-that-became-useful

Number theory, in particular investigations related to prime numbers, was famously considered useless (e.g., by Hardy) for practical matters. Now, since "everybody" needs some cryptography it is quite useful to know how to generate primes (e.g., for an RSA key) and alike, sometimes involving prior 'useless' number theory results

That thread, in general, seems to have a great many examples. Other quotes from it:

The Radon transform, when introduced by Johann Radon in 1917, was useless, until Cormack and Hounsfield developed Tomography in the 60's (Nobel prize for Medicine 1979).

The most famous example is conic sections. Conic sections were of great interest to Greek mathematicians, and their theory was highly developed in the 2nd century BC.

However I don't know of any application until Kepler's discovery that celestial bodies move on conic sections. Thus 18 centuries passed between math research and the first application!

I hope this shores up my claim that even branches of maths that their creators (!) or famous contemporary mathematicians called useless have an annoying tendency to end up with practical applications. It's not just in the natural sciences; I've certainly never heard cryptography called a "natural science".

Also, see walruz's claim below, that even what you personally think is useless maths is already paying dividends!

Maths is quite cheap, has enormous positive externalities, and thus deserves investment even if no particular branch can be reliably predicted to be profitable. It just seems to happen nonetheless.
