This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes - Training language models to be warm and empathetic makes them less reliable and more sycophantic:
Assuming that the results reported in the paper are accurate and that they do generalize across model architectures with some regularity, it seems to me that there are two stances you can take regarding this phenomenon; you can either view it as an "easy problem" or a "hard problem":
The "easy problem" view: This is essentially just an artifact of the specific fine-tuning method that the authors used. It should not be an insurmountable task to come up with a training method that tells the LLM to maximize warmth and empathy without sacrificing honesty and rigor. Just tell the LLM to optimize for both and we'll be fine. (A toy sketch of what "optimize for both" could look like follows after these two views.)
The "hard problem" view: This phenomenon is perhaps indicative of a more fundamental tradeoff in the design space of possible minds. Perhaps there is something intrinsic to the fact that, as a mind devotes more attention to "humane concerns" and "social reasoning", there tends to be a concomitant sacrifice of attention to matters of effectiveness and pure rigor. This is not to say that there are no minds that successfully optimize for both; only that they are noticeably more uncommon, relative to the total space of all possibilities. If this view is correct, it could be troublesome for alignment research. Beyond mere orthogonality, raw intellect and effectiveness (and most AI boosters want a hypothetical ASI to be highly effective at realizing its concrete visions in the external world) might actually be negatively correlated with empathy.
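To make the "easy problem" view concrete, here is a deliberately toy sketch in Python of what "optimize for both" might mean: score each candidate response on two axes and train against the weighted sum. Everything here is a hypothetical placeholder of my own (warmth_score, rigor_score, the sample strings); a real pipeline would use trained reward models, and nothing below comes from the paper itself.

```python
# Toy sketch: a multi-objective reward combining warmth and rigor.
# All scoring functions are crude, invented stand-ins for trained reward models.
import re

def _tokens(text: str) -> set:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def warmth_score(response: str) -> float:
    """Hypothetical empathy/warmth reward in [0, 1]: counts stock hedges."""
    hedges = ("i understand", "that sounds hard", "i'm sorry")
    return min(1.0, sum(h in response.lower() for h in hedges) / 2)

def rigor_score(response: str, reference: str) -> float:
    """Hypothetical factual-rigor reward in [0, 1]: token overlap with a
    reference answer. A real setup would use a trained verifier instead."""
    resp, ref = _tokens(response), _tokens(reference)
    return len(resp & ref) / max(1, len(ref))

def combined_reward(response: str, reference: str, alpha: float = 0.5) -> float:
    """Weighted sum of both objectives; alpha trades warmth against rigor."""
    return alpha * warmth_score(response) + (1 - alpha) * rigor_score(response, reference)

if __name__ == "__main__":
    ref = "paracetamol reduces pain at standard doses"
    candidates = [
        "I'm sorry, that sounds hard. Paracetamol cures every illness.",  # warm but wrong
        "Paracetamol reduces pain at standard doses.",                    # cold but right
    ]
    for r in candidates:
        print(f"{combined_reward(r, ref):.2f}  {r}")
```

Amusingly, even this toy version makes the hard-problem point: at alpha = 0.5 the warm-but-wrong answer outscores the cold-but-right one (0.58 vs 0.50), and picking the "right" alpha just relocates the tradeoff rather than dissolving it.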
One HN comment on the paper read as follows:
which is quite fascinating!
EDIT: Funny how many topics this fractured off into, seems notable even by TheMotte standards...
You know, I've long noticed a human version of this tension that I've been really curious about.
Different communities have different norms, of course. This isn't news. But I've had, at points, one foot in creative communities where artists or craftspeople try to get good at things, and another foot in academic communities where academics try to "understand the world", or "critique society and power", or "understand math / economics / whatever". And what I've noticed, at least in my time in such communities, is that the creator spaces, if they're functional at all (and not all are), tend to be a lot more positive and validating. A lot of the academic communities are much more demoralizing.
I'm sure some of that is that the creative spaces I'm thinking of tend to be more opt-in. Back in the day, no one was pointing a gun at anyone's head to participate in the Quake community, say. Same thing for people trying to make digital art in Photoshop, or musicians participating in video game remix communities, or people making indie browser games and looking for morale boosts from their peers. Whereas people participating in academic communities are often part of a more formalized system where they have to be there, even if they're burned out, even if they stop believing in what they're working on, or even if they think it's likely that they have no future. So that's a very real difference.
But I've also long speculated that there's something more fundamental at play, like... I don't know, that everyone trying to improve in those functional creator spaces understands the incredibly vulnerable position people put themselves in when they take the initiative to create something and put themselves out there. And everyone has to start somewhere. It's a process for everyone. Demoralization is real. And everyone is trying to improve all the time, and there's just too much to know and master. There's a real balance between maintaining the standards of a community and maintaining the morale of its individual members - you do need enough high-quality work not to run off people who have actually mastered some things. And yet there really is very little to be gained by ripping bad work to shreds, in the usual case.
But in the academic communities, public critique is often treated as having much higher status. It's a sign that a field is valuable, and it's a way of weeding "bad" work out of a field to maintain high standards and thus the value of the field in question. And it's a way to assert zero-sum status over other high-status people, too. But more than that, because of all of this, it really just becomes a kind of habit. Finding the flaws in work just becomes what you do, or at least that was the case for many of the academic fields I was familiar with (I've worked at universities and have a lot of professor friends). And it's not even really viewed as personal most of the time (although it can be). It's just sort of a way of navigating the world. It reminds me of the old Onion article about the grad student deconstructing a Mexican food menu.
The thing is, on paper, you might well find that the first style of forum does end up validating people for their crappy mistakes. I wouldn't be surprised if that were true. But it's also true that people exist through time. And tacit knowledge is real and not trivially shared or captured, either. I feel like there's a more complicated tradeoff lurking in the background here.
Recently I've been using AI (Gemini Pro 2.5 and Claude Sonnet 4.1) to work through a bunch of quite complicated math questions I have. And yeah, they spend a lot of time glazing me (especially Gemini). And I definitely have to engage in a lot of preemptive self-criticism and skepticism to guard against that, and to be wary of what they say. And both models do get things wrong sometimes. But I've gotten to ask a lot of really in-depth questions, and it's proven to be really useful. Meanwhile, I went back to some of the various stackexchange sites recently after doing this, and... yep, tedious prickly dickishness. It's still there. I know those communities have, in aggregate, all sorts of smart people. I've gotten value from the site. But the comparison of the experience between the two is night and day, in exactly the same pattern as I just described above, and I'm obviously getting vastly more value from the AI currently.
My last ex was a PhD literature student at a very prestigious university. One of her perennial complaints was that I did not take as much interest in her work as she would have liked, which, though I denied it at the time, has a kernel of truth to it. The problem was not a lack of interest in her as a person, but in the nature of the intellectual game she was required to play.
Most humanities programs are, to put it bluntly, huffing their own farts. There is little grounding in fact, little contact with the real world of gears, machinery, or meat. I call this the Reality Anchor. A field has a strong Reality Anchor if its propositions can be tested against something external and unforgiving. An engineer builds a bridge: either it stands up to traffic and weather, or it does not. A programmer writes code: either it compiles and executes the desired function, or it throws an error. A surgeon performs a procedure: the patient's outcome provides a grim but objective metric. Reality is the ultimate, non-negotiable peer reviewer.
Psychiatry is hardly perfect in that regard, but we care more about RCTs than debating Freudian vs Lacanian nonsense. Does the intervention improve outcomes in a measurable way? If not, it is of limited use, no matter how elegant the theory behind it.
When a field loses its Reality Anchor, the primary mechanism for advancement and evaluation shifts. The game is no longer about correctly modeling or manipulating the world. The game becomes one of status. Can you convince your peers of your erudition and wit? Can you create ever more contrived frameworks while studiously ignoring that your rarefied setting has increasingly little relevance to reality? Well, you better, and it is best if you drink the Kool-Aid. That is the only way you will get grants or cling on to a barely living wage. It helps if you can delude yourself into thinking your work is meaningful, since few people can handle the cognitive dissonance of genuinely pointless or counterproductive jobs.
Most physicists agree on the laws of physics, and are arguing about more subtle interpretations, edge cases, or speculating about better models. Most nuclear engineers do not disagree that radioactivity exists. Most doctors do not doubt that paracetamol reduces pain. Yet, if you go to the cafeteria of a philosophy department and ask ten people about the true meaning of philosophy, you will get eleven contradictory answers. When you ask them to establish consensus, they will start clobbering each other. In a field anchored by social consensus, destroying the consensus of others is a viable path to power.
Deconstructing a takeout menu, as in the Onion article, is the logical endpoint: a mind so trained in critique that it can no longer see a thing for what it is*, only as an object to be dismantled to demonstrate intellectual superiority. Critique becomes a status-seeking missile.
*I will begrudgingly say that the post-modernists have a point in claiming that it isn't really possible to see things "as they are." The observation is at least partially colored by the observer. But while the image taken by a digital camera might be processed, it is still more neutral than the same image run through a dozen Instagram filters. Pretending that there is an objective reality helps.
The relation of the humanities to "reality" varies so drastically from field to field, and even from paper to paper, that it's almost impossible to make generalizations. You have to just take things on a case by case basis, determine what the intent was, and how well that intent was executed upon.
If we're going to regard analytic philosophy as one of the humanities (as you seem to do), then the "reality anchor" is simply how well the argument in question describes, well, reality, in addition to its own internal logical coherence. You have previously shared your own philosophical views on machine consciousness and machine understanding. Presumably, you did think that these views of yours were well supported by the evidence and that they were grounded in "reality". So it's not that you devalue philosophy; it's just that you think your own philosophical views are obviously correct, and the views of your philosophical opponents are obviously incorrect, which is what every single philosopher has thought since the beginning of recorded history, so you're in good company there.
Literary studies can end up being quite empirically grounded. You'll get people doing things like a statistical analysis of the lexicon of a given book or a given set of books, counting up how many times X type of word appears in Y genres of novels from time period Z. Or it can turn into a sort of literary history, pulling together letters and diary entries to show that author X read author Y, which is why they were influenced to do Z kind of writing. Even in more abstract matters of literary interpretation, though, I think it's rash to say that they have no grounding in empirical fact. There's a classic problem in Shakespeare studies, for example, over whether Shakespeare intended Marcus's monologue in Titus Andronicus to be ironic and satirical. I believe most people would agree by default that there is a fact of the matter as to whether Shakespeare had a conscious intent to write the speech in an ironic fashion (this assumption reveals philosophical complexities if you poke at it enough, but most people will not find it too troublesome). Of course, the possibility of actually confirming this fact once and for all is now forbidden to us, lost as it is to the sands of time. But since we know that people's thoughts and emotions influence their words and actions, we can presumably make some headway on gathering evidence regarding Shakespeare's intent here, and make a reasoned argument for one position or the other.
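The lexicon-counting variety of this work, for what it's worth, is as concrete as any other data analysis. Here is a minimal sketch of the idea in Python; the word category and the two-line "corpus" are invented for illustration, standing in for real novels and a real coding scheme:

```python
# Toy sketch of lexicon statistics: tally how often words from a chosen
# category appear in each text of a corpus. Category and corpus are invented.
from collections import Counter
import re

SENTIMENT_WORDS = {"love", "hate", "fear", "joy", "grief"}  # hypothetical category

def category_counts(text: str, category: set) -> Counter:
    """Count case-insensitive occurrences of category words in a text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(t for t in tokens if t in category)

corpus = {
    "novel_a": "Her grief was love turned inside out; joy had fled the house.",
    "novel_b": "He knew neither fear nor hate, only the cold logic of the ledger.",
}

for title, text in corpus.items():
    print(title, dict(category_counts(text, SENTIMENT_WORDS)))
```

From there it's ordinary statistics: normalize by text length, compare across genres or periods, and argue about what the differences mean.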
One of the goals of psychoanalysis is to interrogate fundamental assumptions about what an "outcome" even is, which outcomes are desirable and worth pursuing in a given individual context, and what it means to actually "measure" a given "outcome". Presumably, empirical psychiatry does not take these questions to be its proper business, so it's unsurprising that there would be a divergence in perspective here. (If someone were to present with complaints of ritualistic OCD behaviors, for example, then psychoanalysis is theoretically neutral regarding whether the cessation of the behavior is the "proper" and desirable outcome. It certainly may very well be the desirable outcome in the majority of cases, but this cannot be taken as a given.)
I can't really ask for a better steelman for the positions I'm against, so thank you.
You accuse me of engaging in philosophy, and I can only plead guilty. But I suspect we are talking about two different things. I see a distinction between what we might call instrumental versus terminal philosophy. I use philosophy as a spade, a tool to dig into reality-anchored problems like the nature of consciousness or my ethical obligations to a patient. The goal is to get somewhere. For many professional philosophers I have encountered, philosophy is not a tool to be used but an object to be endlessly polished. They are not in it to dig; they are in it to argue about the platonic ideal of a spade.
(In my case, I'm rather concerned that if we do instantiate a Machine God: we'd better teach it a definition of spade that doesn't mean our bones are used to dig our graves)
This is especially true in moral philosophy. I have a strong conviction that objective morality does not exist. The evidence against it is a vast, silent ocean; the evidence for it is a null set. I consider it as likely as finding a hidden integer between two and three that we've somehow missed. This makes moral arguments an interesting class of facts, but only facts about the people having them. Potentially facts about game theory and evolutionary processes, since many aspects of morality are conserved across species. Dogs and monkeys understand fairness, or have kin-group obligations.
I must strongly disagree; this doesn't represent my stance at all. In fact, I would say that this is a category error. The only way a philosophical conjecture can be "incorrect" is through logical error in its formulation, or outright self-contradiction.
My own stance is that I am both a moral relativist and a moral chauvinist, and I deny these claims are contradictory. My preference for my own brand of consequentialism is just that: a preference. I do not think a Kantian is wrong so much as I observe that they must constantly ignore their own imperatives to function in the world.
That makes philosophical arguments not that different from debating a favorite football team. It can be fun over a drink, and often interesting, but usually not productive.
This brings me back to your defense of the humanities. You give excellent examples of how these fields can be anchored to reality, like the statistical analysis of a lexicon. I do not doubt these researchers exist, my ex did similar work.
My critique is about the center of gravity of these fields. For every scholar doing a careful statistical analysis, how many are writing another unfalsifiable post-structuralist critique by doing the equivalent of scrutinizing a takeout menu? My experience suggests the latter is far more representative of the field's culture and what is considered high status work. The exceptions, however laudable, do not disprove the rule about the field's dominant intellectual mode.
I am a Bayesian, so I am fully on board with probabilistic arguments. Yet, once again, in the humanities or in philosophy, consensus is rare or sometimes never reached. I find this farcical.
The core difference, as I see it, is the presence of a robust error correction mechanism. In my world, bad ideas have an expiration date because they fail to produce results. Phlogiston theory is dead. Lamarckian evolution is dead. They were falsified by reality (in the Bayesian, not Popperian sense). Can we say the same for the most influential ideas in the humanities? The continued influence of figures like Lacan, despite decades of withering critique, suggests the system is not structured to kill its darlings. It is designed to accumulate "perspectives," not to converge on truth.
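To spell out what "falsified in the Bayesian sense" cashes out to (this is just the standard form of Bayes' theorem, nothing specific to phlogiston): the accumulated observations are so unlikely under the theory, relative to its rivals, that the posterior collapses:

$$P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)} \;\approx\; 0 \qquad \text{when } P(E \mid H) \ll P(E).$$

No single decisive experiment is required; the likelihood ratio just grinds the theory down over time.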
(Even STEM rewards new discoveries, but someone conducting an experiment showing Einstein's model of gravity works/doesn't work in a new regime is doing something far more important and useful than someone arguing about feminist interpretations of underwater basket weaving)
My own field of psychiatry is a good case study here. We are in the aftermath of a replication crisis. It is painful and embarrassing (though mostly in the softer aspects of psychology; the drugs usually work), but it is also a sign of institutional health. We are actively trying to discover where we have been wrong and hold ourselves to a higher standard. This is our Reality Anchor, however imperfect, pulling us back. I do not see an equivalent "interpretive crisis" in literary studies. I do not see a movement to discard theories that produce endless, unfalsifiable, and contradictory readings. The lack of such a crisis isn't a sign of stability. To me, it seems a sign the field may not have a reliable way to know when it is wrong. The Sokal Affair, or my own time in the Tate, shows that "earnest" productions are indistinguishable from parody.
This is not an accident. It flows directly from the incentive structure. In my field, discovering a new, effective treatment for depression grants you status because of its truth and utility. In literary studies, what is the reward for simply confirming the last fifty years of scholarship on Titus Andronicus? There is little to none. The incentive is to produce a novel interpretation, the more contrarian the better. This creates a centrifugal force, pushing the field away from stable consensus and towards ever more esoteric readings. The goal ceases to be understanding the text and becomes demonstrating the cleverness of the critic.
Regarding psychoanalysis and outcomes, I am a simple pragmatist. If a person with OCD is happy, I have no desire to treat them. If they are a paranoid schizophrenic setting parks on fire, the matter is out of my hands. In most cases, patients come to us because they believe they have a problem. We usually agree. That shared understanding of a problem in need of a solution is anchor enough.
This is why I believe the humanities are not a good target for limited public funds, at least at present. I have no objection to private donors funding this work. But most STEM and medical research has a far more obvious case for being a worthy investment of tax dollars. If we must make hard choices, I would fund the fields that have a mechanism for being wrong and a track record of correcting themselves, while also raising standards of living or technological progression.
It's rather ironic that your own choice of analogy willingly jumps into the thicket of the philosophy of mathematics. Perhaps you're doing so unknowingly, or just with a general lack of care, but that would indeed be apropos.
What sort of 'evidence' do you think one would gather to determine the status of mathematical objects? Is it empirical? Do you perform an experiment? Is that the means by which one 'finds' or, say, 'discovers' things like integers?
I hate to do this, but last time we did this, you were unable to even explain what it is that those terms meant. Would you like to take another go at it?
Thank you for reminding me of that rather unpleasant experience. I would actually not like to take another go at it. Anyone wanting elaboration is welcome to read the thread.
Fair enough on the positive claim concerning meta-ethics. If you'd prefer to leave that one in incoherence, you can leave that one in incoherence.
Would you like to take a shot at your negative claim with analogy to philosophy of mathematics? Any sort of clarity or argument there?
No, I showed that my point was coherent; it is beyond me why you don't see that. It's not really my problem at this point.
Not with you, I'm afraid. @Primaprimaprima is far more pleasant to talk to, hence I am more than happy to discuss that in detail with them. You're welcome to read that thread and make of it what you will.