This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes - Training language models to be warm and empathetic makes them less reliable and more sycophantic:
Assuming that the results reported in the paper are accurate and that they do generalize across model architectures with some regularity, it seems to me that there are two stances you can take regarding this phenomenon; you can either view it as an "easy problem" or a "hard problem":
The "easy problem" view: This is essentially just an artifact of the specific fine-tuning method that the authors used. It should not be an insurmountable task to come up with a training method that tells the LLM to maximize warmth and empathy, but without sacrificing honesty and rigor. Just tell the LLM to optimize for both and we'll be fine.
The "hard problem" view: This phenomenon is perhaps indicative of a more fundamental tradeoff in the design space of possible minds. Perhaps there is something intrinsic to the fact that, as a mind devotes more attention to "humane concerns" and "social reasoning", there tends to be a concomitant sacrifice of attention to matters of effectiveness and pure rigor. This is not to say that there are no minds that successfully optimize for both; only that they are noticeably more uncommon, relative to the total space of all possibilities. If this view is correct, it could be troublesome for alignment research. Beyond mere orthogonality, raw intellect and effectiveness (and most AI boosters want a hypothetical ASI to be highly effective at realizing its concrete visions in the external world) might actually be negatively correlated with empathy.
One HN comment on the paper, which is quite fascinating, read as follows:
You know, I've long noticed a human version of this tension that I've been really curious about.
Different communities have different norms, of course. This isn't news. But I've had, at points, one foot in creative communities where artists or craftspeople try to get good at things, and another foot in academic communities where academics try to "understand the world", or "critique society and power", or "understand math / economics / whatever". And what I've noticed, at least in my time in such communities, is that the creator spaces, if they're functional at all (and not all are), tend to be a lot more positive and validating. A lot of the academic communities are much more demoralizing.
I'm sure some of that is that the creative spaces I'm thinking of tend to be more opt-in. Back in the day, no one was pointing a gun at anyone's head to participate in the Quake community, say. Same thing for people trying to make digital art in Photoshop, or musicians participating in video game remix communities, or people making indie browser games and looking for morale boosts from their peers. Whereas people participating in academic communities often are part of a more formalized system where they have to be there, even if they're burned out, even if they stop believing in what they're working on, or even if they think it's likely that they have no future. So that's a very real difference.
But I've also long speculated that there's something more fundamental at play, like... I don't know, that everyone trying to improve in those functional creator spaces understands the incredibly vulnerable position people put themselves in when they take the initiative to create something and put themselves out there. And everyone has to start somewhere. It's a process for everyone. Demoralization is real. And everyone is trying to improve all the time, and there's just too much to know and master. There's a real balance between maintaining the standards of a community and maintaining the morale of individual members of a community - you do need a high enough quality bar not to run off people who have actually mastered some things. And yet there really is very little to be gained by ripping bad work to shreds, in the usual case.
But in the academic communities, public critique is often treated as having a much higher status. It's a sign that a field is valuable, and it's a way of weeding "bad" work out of a field to maintain high standards and thus the value of the field in question. And it's a way to assert zero sum status over other high status people, too. But more than that, because of all of this, it really just becomes a kind of habit. Finding the flaws in work just becomes what you do, or at least that was the case for many of the academic fields I was familiar with (I've worked at universities and have a lot of professor friends). And it's not even really viewed as personal most of the time (although it can be). It's just sort of a way of navigating the world. It reminds me of the old Onion article about the grad student deconstructing a Mexican food menu.
The thing is, on paper, you might well find that the first style of forum does end up validating people for their crappy mistakes. I wouldn't be surprised if that were true. But it's also true that people exist through time. And tacit knowledge is real and not trivially shared or captured, either. I feel like there's a more complicated tradeoff lurking in the background here.
Recently I've been using AI (Gemini Pro 2.5 and Claude Sonnet 4.1) to work through a bunch of quite complicated math questions I have. And yeah, they spend a lot of time glazing me (especially Gemini). And I definitely have to engage in a lot of preemptive self-criticism and skepticism to guard against that, and to be wary of what they say. And both models do get things wrong sometimes. But I've gotten to ask a lot of really in-depth questions, and it's proven to be really useful. Meanwhile, I went back to some of the various stackexchange sites recently after doing this, and... yep, tedious prickly dickishness. It's still there. I know those communities have, in aggregate, all sorts of smart people. I've gotten value from the site. But the comparison of the experience between the two is night and day, in exactly the same pattern as I just described above, and I'm obviously getting vastly more value from the AI currently.
My last ex was a PhD literature student in a very prestigious university. One of her perennial complaints was that I did not take as much interest in her work as she would like, which, though I denied it at the time, has a kernel of truth. The problem was not a lack of interest in her as a person, but in the nature of the intellectual game she was required to play.
Most humanities programs are, to put it bluntly, huffing their own farts. There is little grounding in fact, little contact with the real world of gears, machinery, or meat. I call this grounding the Reality Anchor. A field has a strong Reality Anchor if its propositions can be tested against something external and unforgiving. An engineer builds a bridge: either it stands up to traffic and weather, or it does not. A programmer writes code: either it compiles and executes the desired function, or it throws an error. A surgeon performs a procedure: the patient’s outcome provides a grim but objective metric. Reality is the ultimate, non-negotiable peer reviewer.
Psychiatry is hardly perfect in that regard, but we care more about RCTs than debating Freudian vs Lacanian nonsense. Does the intervention improve outcomes in a measurable way? If not, it is of limited use, no matter how elegant the theory behind it.
When a field loses its Reality Anchor, the primary mechanism for advancement and evaluation shifts. The game is no longer about correctly modeling or manipulating the world. The game becomes one of status. Can you convince your peers of your erudition and wit? Can you create ever more contrived frameworks while studiously ignoring that your rarefied setting has increasingly little relevance to reality? Well, you better, and it is best if you drink the Kool-Aid. That is the only way you will get grants or cling on to a barely living wage. It helps if you can delude yourself into thinking your work is meaningful, since few people can handle the cognitive dissonance of genuinely pointless or counterproductive jobs.
Most physicists agree on the laws of physics, and are arguing about more subtle interpretations, edge cases, or speculating about better models. Most nuclear engineers do not disagree that radioactivity exists. Most doctors do not doubt that paracetamol reduces pain. Yet, if you go to the cafeteria of a philosophy department and ask ten people about the true meaning of philosophy, you will get eleven contradictory answers. When you ask them to establish consensus, they will start clobbering each other. In a field anchored by social consensus, destroying the consensus of others is a viable path to power.
Deconstructing a takeout menu, as in the Onion article, is the logical endpoint: a mind so trained in critique that it can no longer see a thing for what it is*, only as an object to be dismantled to demonstrate intellectual superiority. Critique becomes a status-seeking missile.
*I will begrudgingly say that the post-modernists have a point in claiming that it isn't really possible to see things "as they are." The observation is at least partially colored by the observer. But while the image taken by a digital camera might be processed, it is still more neutral than the same image run through a dozen Instagram filters. Pretending to have access to objective reality helps.
Not even close to original with them. Plato famously made the same point with the Allegory of the Cave, and there's Kant's noumena.
A quick aside about Kant, since so many people blame Kant for things that he really had little or nothing to do with (I recall a program on a Catholic TV channel where they accused Kant of being a "moral relativist", which is... distressing and concerning, that they think that...).
Kant saw himself as trying to mediate between the rationalists and the empiricists. The empiricists thought we could only know things through direct sensory experience, which seems pretty reasonable, until you realize that a statement like "empiricism is true" can't be known directly through your five senses, nor were they able to explain a lot of other things, like how we can have true knowledge of the laws of nature or of causal relations in general (Hume's problem: just because pushing the vase off the table made it fall over a million times doesn't mean it'll happen again the million-and-first time). The rationalists thought that we could know things just by thinking about them, which would be cool if true, except they weren't able to explain how this was actually possible (even in the 1700s, the idea of a "faculty of rational intuition" hiding somewhere in the brain was met with significant skepticism).
Kant's solution was that we can know certain things about the world of experience using only our minds, because the world of experience that we actually perceive is shaped by and generated by our minds in some fundamental sense. The reality we experience must conform to the structure of our minds. So to condense about 800 pages of arguments into one sentence, we can know contra Hume that the world of experience actually is governed by law-like causal relations, because in order to have conscious experience of anything at all, and in order to be able to perceive oneself as a stable subject who is capable of reflecting on this experience, that experience itself must necessarily be governed by logical and law-like regularities. So we can actually know all sorts of things in a very direct way about the things we perceive. When you see an apple you know that it is in fact an apple, you know that if you push it off the table it will fall over, etc. The only downside is that we can't know the true metaphysical nature of things in themselves, independent of how they would appear to any perceiving subject. But that's fine, because in Kant's view he has secured the philosophical possibility of using empirical science to discover the true nature of the reality that we do perceive, and we can leave all the noumena stuff in the reality that we don't perceive up to God.
So he really was trying to "prove the common man right in a language that the common man could not understand", to use Nietzsche's phrase. It must be admitted though that Kant can be interpreted as saying that the laws of mathematics and physics issue forth directly from the structure of the human mind. I believe he would almost certainly add though that this structure is immutable and is not subject to conscious modification. You could argue that some later thinkers got inspired by this view, dropped the "immutable" part, and thus became relativists who granted undue creative power to human subjectivity. But a) the postmodernists are generally not as "relativist" as many people presuppose anyway, and b) I basically can't recall any passage from any book at all where someone said "I believe XYZ relativist type claim because Kant said so", so if Kant did exert some influence in this direction, it was probably only in a very indirect fashion.
Related pet peeve of mine - ask a roomful of medical ethicists (who should bloody well know better, and to be fair some of them do) about Kant and "autonomy". It's darkly hilarious. Just because Kant made extensive use of a word that is often translated as "autonomy", a lot of people seem to think he held something like a modern medical ethicist's typical views about the importance of self-determination, informed consent, and so on. This is almost the exact opposite of the truth. Kantian "autonomy" means you have to arrive at the moral law by your own reasoning, and not out of (say) social pressure, for it to really "count" - but there's only one moral law, and it's the same for everyone, with zero space for individualized variation.
(And you aren't really acting morally unless you follow it out of duty, not because it feels good or gets good results. Just arriving at the same object-level conclusions about how to act isn't enough.)
Yes exactly! “Autonomy” for Kant just means… the ability to autonomously come to the exact same ethical conclusions that Kant did. Which is pretty hilarious.