This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

A distraction from the war and ICE. I was thinking about posting in the fun thread, but it's not really a fun topic, though it may not be culture war either since I expect most people to be on the "this is bad" side. Maybe we should have a recurring "Butlerian Jihad Roundup" for posts like these?
Bots are taking over the internet. Corporate shills and (foreign) government propagandists have upgraded with virtual cybernetics. A related but lesser change is people using LLMs to reword their own posts (+ emails and other communications).
Some AI writing is obvious, but sometimes it's indistinguishable from (if not completely identical to) what a human would write. The NYT has a quiz to distinguish human and AI writing. I did badly (3/5), but in my defense, I think most of the human examples are awful, which makes the quiz harder. See for yourself.
On Hacker News, it's now so bad there's a new guideline, "don't post generated/AI-edited comments". Unfortunately, given the extreme intellect of the average Hacker News commenter, it can be hard to distinguish their profound technological insights from even a Markov chain trained on buzzwords. Indeed, looking at top threads I still notice lots of slop-like posts from brand-new or previously inactive accounts, like this one. I've been sarcastic, but I really like Hacker News, and I hope it finds a way to stop the slop.
Other networks are taking a different approach. For example, Meta has acquired MoltBook (the AI social network) in an effort to add even more bots to Facebook. I'm joking... no wait, they may actually be doing that. Not content with the Metaverse, maybe Zuckerberg has become addicted to burning money on uncanny social experiments.
On the Motte, at least for now, I haven't seen any obvious bot posts. There were a couple AI-assisted posts (by "known" humans) over the past couple months that got called out.
How will social media evolve? Will people move to invite-only sites like https://lobste.rs and Discord? Will most people accept AI discourse as natural, or even prefer it? Will AI discourse become so good that we prefer it? Right now, it seems even the best AI writing (prompted to be concise and human) is unnecessarily wordy and has certain tropes; but what if someone discovers how to train an AI on a specific human's writing, so that it's effectively indistinguishable?
There is no solution. Every proof-of-work or proof-of-humanity scheme is either severely error-prone, extremely laborious, or requires some kind of totalitarian police state dedicated to monitoring every word written by a human, or every token output by every known LLM.
It can't be done, or at the very least it won't be done.
HN is the best parody of HN. There are plenty of (almost certainly human) users who could be trivially reconstructed by telling an LLM to write in the style of the biggest grognard pedant with arboreal-reinforcement of the anus it can envision.
Their attempt to ban "AI-edited" submissions is laughable, an attempt to close the barn door after the horse was taken out back, shot, and then rendered into glue. There is no way to tell: distinguishing entirely AI-written text is hard enough, let alone differentiating between an essay that was entirely human-written and one that took a human draft and passed it through an LLM.
I intend to munch popcorn and observe the fallout. In all likelihood, a few egregious examples will be banned, alongside a witch-hunt that does more harm than good.
The majority of bot posts (that anyone can tell are bot posts) are spam that is caught by the moderators and never sees the light of day. I can't recall a single example of us admitting someone we thought was human, and then finding a smoking gun that would make us conclude they were a bot all along.
I am on record stating that I do not see an issue with LLM usage, as long as a human is willing to vouch for the results and has done their due diligence in terms of checking for errors or hallucinations. I do not make an effort to hide the fact that I regularly make use of LLMs myself when writing, though I restrict myself to using them to polish initial drafts, help with ideation, or for research purposes. This stance is, unfortunately, quite controversial. Nonetheless, my conscience remains clean, and I would have no objections to anyone else who acted the same way.
None of the tools that purport to identify AI-written text are very good. Pangram is the best of the pack (not that that means very much). I've tested, and while the false positive rate on 100% human writing (my own samples) is minimal, the false negative rate is significant. It will take essays that have non-negligible AI content and declare them 100% human, or substantially underestimate the AI contribution.
And that is with no particular effort to disguise or launder AI output as my own. If I actually cared, it would be easy as pie to take a 100% AI written work, then make small changes that would swing it to 100% human by Pangram's estimation (or prompt an LLM to do even that for me). The tools help with maximally lazy bad actors, but that is their limit. Eventually, they won't even catch said lazy bad actors.
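To make the false-positive/false-negative framing concrete, here is a minimal sketch of how one might measure a detector's error rates on a labeled corpus. `detect` is a hypothetical stand-in for any AI-text detector (this is not Pangram's real interface), and the threshold is arbitrary:

```python
# Sketch: measuring a detector's error rates on labeled samples.
# `detect` is any function mapping text -> estimated P(text is AI);
# it is a placeholder here, not a real detector API.

def error_rates(samples, detect, threshold=0.5):
    """samples: iterable of (text, is_ai) pairs. Returns (FPR, FNR)."""
    fp = fn = humans = ais = 0
    for text, is_ai in samples:
        flagged = detect(text) >= threshold
        if is_ai:
            ais += 1
            if not flagged:
                fn += 1  # AI text passed off as human (the common failure)
        else:
            humans += 1
            if flagged:
                fp += 1  # human text wrongly flagged as AI
    return fp / max(humans, 1), fn / max(ais, 1)
```

The asymmetry described above would show up as a low first number and a high second one; note that a detector tuned for a tiny false-positive rate on human prose almost has to pay for it with misses on lightly-laundered AI text.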
Asking the LLMs? No good. Even worse.
I took an essay I wrote myself (the only AI involvement was proof-reading and feedback, most of which I ignored). Then I asked Claude Sonnet to summarize the content in 100 words, then to itself write a prompt that would be used by another LLM to attempt to reconstruct the original.
I then asked fresh instances of Claude itself, as well as Gemini Pro, to write a new essay using the above as verbatim instruction.
I then took all 3 essays, put them in a single prompt, and then asked Claude, Gemini and ChatGPT Thinking to identify which ones were human, AI, or in-between.
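The three steps above can be sketched roughly as follows. `ask(model, prompt)` is a hypothetical wrapper around whichever chat interface you use, and the model names are placeholders, not real API identifiers:

```python
# Sketch of the blind test described above. `ask(model, prompt)` is a
# hypothetical chat wrapper; model names are illustrative placeholders.

def run_blind_test(ask, original_essay):
    # Step 1: one model compresses the essay, then writes a prompt that
    # another LLM could use to reconstruct it.
    summary = ask("claude", f"Summarize in 100 words:\n{original_essay}")
    recon_prompt = ask("claude", "Write a prompt that would make an LLM "
                                 f"recreate an essay from this summary:\n{summary}")

    # Step 2: fresh instances generate competing essays from that prompt.
    candidates = {
        "essay_1": ask("claude-fresh", recon_prompt),
        "essay_2": ask("gemini-pro", recon_prompt),
        "essay_3": original_essay,  # the human-written control
    }

    # Step 3: several judges label each essay human / AI / in-between.
    lineup = "\n\n".join(f"[{name}]\n{text}" for name, text in candidates.items())
    question = ("For each essay below, say whether it is human-written, "
                "AI-written, or a mix, and explain why.\n\n" + lineup)
    return {judge: ask(judge, question)
            for judge in ("claude", "gemini-pro", "chatgpt-thinking")}
```

In practice you would also shuffle the essay order per judge, since position alone can bias the verdicts.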
You may see the results for yourself. Gemini's version of the essay was bad, and thus flagged by pretty much every model as either AI, or the "original" that was then expanded. The other two, including my own work, were usually deemed 100% human. Well, one is ~100% human, the other very much isn't.
Gemini in Fast mode:
https://g.co/gemini/share/0d4e6279bf8f
Gemini Pro:
https://g.co/gemini/share/119274d62e32
ChatGPT Thinking in Extended Reasoning mode:
https://chatgpt.com/s/t_69b3fad20c9c8191a27e3542685f20ba
Claude Sonnet with reasoning enabled:
I can't link directly, because the share option seems to dox me with no way of hiding my actual name.
Here's a dump instead-
https://rentry.co/oo4qkduk
Claude was the only one to correctly flag essay 3 as human, and that is likely only due to chance.
ChatGPT was the only model with memory enabled, and it failed miserably.
What else is there to say? Good luck and have fun while there's some hope of telling the bots apart from humans, if not humans using the bots.
With apologies to Descartes, "always has been". While cogito, ergo sum manages to demonstrate that I exist to myself (at least, I find the argument compelling), I've never been able to satisfactorily prove that the rest of the world, and everyone else as I perceive it, exists, and isn't some big simulation, demonic manifestation, or figment of imagination.
Just a few days ago, I met a patient who was convinced that he did not, in fact, "exist". He believed himself to be a rotting corpse, and initially declined his antipsychotics on the grounds that a dead person had no need for medication (a valid argument, as opposed to a sound one).
After some debate, we decided to tell him that the drugs would prevent his "corpse" from decomposing and causing a stink that would inconvenience the rest of the ward. Pro-sociality intact, he found this a compelling argument, and swallowed them without any further fuss.
So no, not even "Cogito ergo sum" is foolproof. The universe, and the DSM, must account for even better fools.
I suppose that this is a reminder that psychotic people who believe X are not just like regular people who believe things. If there was such a thing as an actual walking dead person who had sound reasoning for knowing he is that, he could ask you if the drugs had been tested on any dead people, and besides, why did you say they had a completely different purpose less than five minutes ago?
A day in a psych ward will disabuse you of the notion that there's a bright line between sanity and insanity.
Just to start, we have distinctions between a true delusion, a fixed belief and an overvalued idea. Said distinction is incredibly subjective and often artificial.
The overvalued idea is the most familiar. Someone becomes absolutely convinced their neighbor is sabotaging their career, or that 5G towers are causing their migraines. The belief is wrong, probably, and they hold it with more intensity than the evidence warrants.
However: if you corner them and argue carefully enough, they squirm a little. They might say "well, I suppose I could be wrong, but..." There is still some kind of cognitive negotiation happening. The belief is upstream of their reasoning, but their reasoning is not entirely offline. Lots of people you know have overvalued ideas. You might have some. I might have some. Most of the time, they're like the mites that live on your skin, not beneficial, but not so debilitating you'll inevitably run face first into the consequences of your poorly founded beliefs.
The fixed false belief turns the dial up. Now there is no squirming. The person is simply certain. A deeply depressed patient knows, with the same confidence you know your own name, that they are a fundamentally evil person who has ruined everyone around them. You cannot argue them out of it because it does not feel like a belief to them - it feels like a perception, like reporting what they can plainly see. The fixedness is the thing. Evidence just bounces off.
I emphasize false fixed belief, because you might well believe that you have 5 fingers per hand. Someone might show up and make a really convincing argument to the contrary. Maybe they claim to show that Peano arithmetic is flawed, or that you have somehow grossly misunderstood what the number 5 means, or what counts as a finger. You are unlikely to give a shit, and for good reason.
(There are the usual "proofs" that pi is equal to 4, or that 1=2. The mathematically unsophisticated might never be able to find out the logical error, but they usually do not actually end up convinced.)
The true delusion (what Karl Jaspers called the primary delusion) is something stranger still. It is not just a fixed false belief. It has a particular quality of being un-understandable from the inside out. A man wakes up one morning and suddenly knows, with crystalline certainty, that he has been chosen to decode messages hidden in highway signs. There is no paranoid personality that led here, no trauma that makes it psychologically legible. It arrived fully formed, like a piece of foreign software running on his brain.
(Look up autochthonous delusions for more.)
Psychiatrists following Jaspers say you can't empathize your way into it. You can understand a depressed person thinking they're worthless, but you cannot really follow the phenomenological path to "the license plates are speaking to me specifically."
Other than that, delusions are completely immune to evidence, and also culturally incongruent. Put a pin in that till I come back to it, it's very important.
The clinical rule of thumb: overvalued ideas yield under pressure, fixed beliefs are immovable but emotionally coherent, and true delusions feel less like conclusions the person reached and more like axioms that were simply installed.
You know, I tried my hand at writing a few Koans about psychiatry a while back. I might as well share one I'm fond of:
A patient who had recovered from psychosis came to Master Dongshan and said, "For two years I believed the government had implanted a transmitter in my skull. I was as certain of this as I am now certain it was a delusion. The feeling of knowing was identical in both cases. How am I to trust any of my beliefs ever again?"
Master Dongshan said, "You are asking perhaps the most important question in all of epistemology, and I notice you arrived at it not through philosophy but through suffering."
The patient said, "True enough, but forgive me for not finding your statement very helpful."
Master Dongshan said, "No. That's why you paid me to prescribe you meds, not for a lecture on philosophy. But consider: everyone around you walks through life with that same unjustified feeling of certainty. They've just never been given reason to doubt it. You now know something that most people do not. You know that the experience of being right and the fact of being right are completely different things."
The patient said, "I have.... issues with framing this as some kind of gift. It feels more like a nightmare. I can no longer trust my own experience."
Master Dongshan said, "You have described the starting point of all genuine inquiry. Most people never reach it. They are too comfortable inside the feeling of knowing to notice it is only a feeling."
The patient was not comforted, but was, in a way he found no use for, enlightened.
Okay. You can take the pin out now.
Notice the emphasis on culture context. If you've ever mindlessly scrolled TikTok or Insta reels, you might have seen a "prank" where this second-gen Nigerian citizen in the UK follows random older first-gen immigrants, introduces himself, then declares that "he was sent from Nigeria to kill you."
He then makes some weird gesture with his hands, takes out a pinch of salt from his pocket and throws it at the victim. They immediately panic, though the response varies: running away screaming, running at him screaming with intent to do bodily harm, or pulling out a Bible and chanting verses while weeping.
(Hardly a one-off. It seems a concerningly large number of elderly Nigerians carry a convenient pocket Bible for such occasions.)
He doesn't pull out a knife, he's unfailingly polite, he just throws salt at them, which I'm given to believe is supposed to represent some kind of black magic curse.
Can a pinch of salt hurt you? Not unless you're a slug.
You might feel like laughing at these silly, superstitious fools. Haha, they think witch doctors can hurt them!
If you (for a general you) are a Christian, or of any other religious denomination, you are exactly as laughably deluded from my perspective. You hold what, to me, is a clearly unfounded belief that is immune to updating on empirical evidence. That saint who rolled their eyes and spoke in tongues? You don't see people getting beatified for that these days, now that we have EEGs and research on temporal lobe epilepsy.
Unfortunately, if we used this perfectly reasonable standard for insanity, the patients in the psych ward would outnumber those outside. Grudgingly, we keep track of whether the delusions you hold are common, especially for your cultural milieu, and whether they are causing you disproportionate harm. Also, can we do anything about it? Is there a drug I can give some deeply religious pensioner that'll stop them from believing in God? Not that I'm aware of. If they're peeling off their skin to get at the hidden chip inserted by MI6, then I at least have some hope that risperidone will help.
Wait till you see the nonsense involved with evaluating delusional disorder. Othello syndrome involves feelings of immense jealousy and suspicion that your partner is cheating on you, based on little evidence. Simple enough?
And then you see someone who has a seemingly sweet, loving and faithful wife, who gets diagnosed with Othello syndrome, and then discover that said wife was actually cheating on them all along. It's not paranoia if they're really out to get you.
How the fuck is a psychiatrist supposed to know for sure? We simply persevere, and it mostly works. When it doesn't, it makes the papers and we get served lawsuits.
If someone has Othello syndrome and makes their partner so annoyed that they end up cheating, does that retroactively invalidate the diagnosis? You can tell me, after you find a time machine. I'm sure plenty of philosophers have made a living writing about Gettier cases, but I'm not a professional philosopher, and I don't let philosophy get in the way of fixing people.
This makes me think there might be a cleaner line between "true delusion" and the other two proposed categories than I had initially expected. Why not consider the "you can't empathize your way into it" criterion as a (if not the) major boundary of the concept?
Considering both the Christians and the salt-based curse believers, both seem to be engaged in perfectly normal cognition - that is, I suspect that what both groups are doing is reasoning off of the apparent beliefs of people they trust at some point in their pasts. This is partially captured in the cultural congruity aspect, but seems distinct.
We could imagine my friends and family conspiring to convince me that my wife is cheating on me. They may use weak arguments and no evidence, but I would certainly still update in the direction they're pushing (unless, of course, I was aware of the conspiracy). Keep this up for long enough and deny me any opportunity to see evidence to the contrary (a notable feature of most popular supernatural beliefs, they are not easily and obviously falsifiable) and I expect I would have a strongly fixed, false, unjustified, non-culturally-determined belief that my wife is cheating on me.
Conversely, I could imagine a devout Christian hitting his head and suddenly losing all belief in the immaterial. Despite his beliefs coming closer to what I expect to be correctness, I find it very easy to rate him as less sane than the curse believers - something has clearly gone wrong with his cognition in a way that I cannot model as reasoning in the normal sense.
I expect also that this distinction is materially useful - the ways in which I'd interact with someone with strongly held false beliefs obtained via ordinary methods are very different from how I would interact with the truly delusional (at least concerning the areas of their maps that clearly have holes). As you say, the former can be pressed.
Because the ability to empathize is subjective, hopelessly so. And just because you think you can empathize with someone doesn't mean you are accurately simulating their inner cognition.
I can try and empathize with an octopus. I can try and imagine having tentacles, but I do not think I could capture the qualia of an octopus even if I tried my best. I can dream of being a butterfly, but that is not the same as actually being a butterfly.
Alternatively, a society of autistic people might be fully functional (if they're high functioning autists). They might have severe deficits of theory of mind and can't actually understand the way that a neurotypical person in their midst actually feels. They might well call him broken or insane. Or a religious enclave might consider an unbeliever in their midst to be the crazy one, and feel very confident in their belief.
The autists might, after a great deal of empirical research, be able to accurately predict the behavior of neurotypical people. Actually autistic people do often learn how to "mask", but passing as neurotypical does not necessarily make them neurotypical. Similarly, psychiatrists can predict the behavior of the psychotic (to a degree), even if we do not "understand" them in the Jaspersian sense.
I am not an expert on phenomenology, but I do not fully agree with Jaspers and his supporters. I think I can empathize with the insane or the religious, at least to some degree, even if I do not agree with them. Am I right? I don't know. Who does? On what grounds?
It is still a kludge. I would say that our understanding of the universe is at a point where we can look at both the salt-averse and the typical Christian and confidently say that both are incorrect. The world simply does not behave the way their beliefs would imply it does. The evidence is abundant; there are anti-cathedrals everywhere for those with the eyes to see.
Now, social consensus is evidence, in the Bayesian sense. It makes holding erroneous beliefs more defensible, or at least more understandable, than when they arise in a vacuum. A black person in America might well believe that thousands of black people are unjustly shot by the popo on an annual basis, because of media bias and their own in-group consensus. I would not call that a central example of delusion, it is possible for people to just be plain old wrong because of the bad luck of existing in an environment that does not optimize for truth. I just think that the evidence against the claims of the typical religion is even stronger, but that is more of a quantitative difference than a qualitative one.
("What evidence filtered evidence?")
If I was less lazy, I'd expand on the implications of/for Bayesianism. But the delusional, in the standard psychiatric sense, can be modeled as having stuck priors that do not update on new evidence. Scott has discussed this with more depth and rigor than I can ape.
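The "stuck prior" framing is easy to illustrate with a toy Bayes update. This is only a cartoon, not a clinical model: a prior of exactly 1 (or 0) never moves under any amount of counter-evidence, while a merely extreme prior eventually does:

```python
# Toy illustration of a "stuck prior" via Bayes' rule in odds form.
# A prior of exactly 1 or 0 has odds of infinity or zero: no finite
# likelihood ratio can move it. A merely extreme prior does update.

def update(prior, likelihood_ratio):
    """Posterior P(H|E) from prior P(H) and P(E|H)/P(E|~H)."""
    if prior in (0.0, 1.0):
        return prior  # degenerate odds: evidence cannot move them
    odds = prior / (1 - prior)
    odds *= likelihood_ratio
    return odds / (1 + odds)

belief_stuck, belief_extreme = 1.0, 0.999
for _ in range(20):                      # 20 pieces of counter-evidence
    belief_stuck = update(belief_stuck, 0.5)
    belief_extreme = update(belief_extreme, 0.5)

print(belief_stuck)    # still exactly 1.0: a "true delusion" on this cartoon
print(belief_extreme)  # now well under 1%: merely a fanatic, eventually movable
```

On this picture, the overvalued idea is a prior near but not at 1, and the true delusion is the degenerate case, which is one way of reading the quantitative-vs-qualitative question below.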
I disagree! I see it as the equivalent of percussive maintenance, sometimes a sufficient shock to the system can break it out of a maladaptive pattern.
Within psychiatry, consider ECT. Let's say you're depressed and think you're an awful human being who deserves to die. I take you, put you under anesthesia, then induce seizures in your brain through the application of electric voltage.
You wake up, you no longer feel depressed, and you no longer want to kill yourself. Do you think that an electric shock is a valid argument against your position? Nonetheless, you're doing better, more functional at the very least. I would happily say that the process has made you more sane.
This is true, and important if we're trying to come up with rules that we can directly audit, but this objection also applies any time we are reasoning outside of a formal system: the fact that I can believe falsely does not mean I shouldn't use my beliefs in downstream reasoning. If "my estimate of how reasonable the origin of a belief is" produces useful clusters, I'll probably have a hard time selling it to a journal, but it will still be useful.
Also true, but also, I think, overstated: we can say quite a bit about what it is to be a bat, and statements like this can't be thrown out immediately, especially when the difference in cognitive architecture is as minor as that between a pair of identical twins (in the religiosity case, I'm sure we can find at least one instance). We can think about questions like this and achieve certainty to our own satisfaction because this is what we have to do constantly: if everyone believed they needed absolute certainty to make a statement, only the insane would speak.
I mean, again I largely agree, but I think you're discounting the sheer space of possible beliefs that have been selected away for being too falsifiable. In the salt case, I would be extremely surprised if anyone involved was highly confident that some immediately visible malady would occur. If that were the belief, it would have been falsified enough times in enough communities that the idea would have been outcompeted. Even the very religious do respond to evidence. For an example, we see this with new religious movements / cults (Debunking "When Prophecy Fails"): interesting how major, long-lived religious movements tend to avoid these kinds of situations. It's hard to say that membership in a flying saucer cult selects for especially good epistemology. These priors don't look stuck exactly, more insensitive.
More broadly, almost all evidence is filtered evidence. This is good and necessary - "we" understand a ton about the world, whereas I understand only what I have the time/energy/ability to really look into. All the rest is impressions filtering through my peers and favored media. I'm surprised it works as well as it does! Somehow we've created a system where global understanding increases while almost no one understands almost anything - "someone seems moderately too insensitive to evidence against their favored belief" is the default.
If we phrase the distinction as a stuck prior, sensitivity to evidence, etc., as Scott tends to, the difference does seem quantitative rather than qualitative. We do also have, within the rat canon, 0 And 1 Are Not Probabilities, which makes the opposite point. If a few of our parameter choices lead to vastly different behavior than all of our others, we really want to point that out! That quantitative difference is exactly why I want to draw the line at "true delusion".
This does, however, require you to assume that they weren't sane to begin with. To be clear, being stuck in a negative-feedback loop of affect is a pretty good reason to believe someone isn't sane, but in the examples I brought up that's the entire point in contention. We could easily imagine analogous scenarios where a direct improvement in affect would make one markedly less sane.