This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

A distraction from the war and ICE. I was thinking about posting in the fun thread, but it's not really a fun topic, though it may not be culture war either since I expect most people to be on the "this is bad" side. Maybe we should have a recurring "Butlerian Jihad Roundup" for posts like these?
Bots are taking over the internet. Corporate shills and (foreign) government propagandists have upgraded with virtual cybernetics. A related but lesser change is people using LLMs to reword their own posts (+ emails and other communications).
Some AI writing is obvious, but sometimes it's indistinguishable from (if not completely identical to) what a human would write. NYT has a quiz to distinguish human and AI writing. I did badly (3/5), but in my defense, I think most of the human examples are awful, making the quiz harder. See for yourself.
On Hacker News, it’s now so bad there's a new guideline, “don’t post generated/AI-edited comments”. Unfortunately, due to the extreme intellect of the average Hacker News commenter, it can be hard to distinguish their profound technological insights from even a Markov chain trained on buzzwords. Indeed, looking at top threads I still notice lots of slop-like posts from brand new or previously inactive accounts, like this one. I've been sarcastic, but I really like Hacker News, and hope it finds a way to stop the slop.
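The Markov-chain jab is less far-fetched than it sounds. A minimal sketch of what that would look like, with a made-up buzzword corpus standing in for actual HN comments:

```python
import random
from collections import defaultdict

def train_chain(corpus):
    """Map each word to the list of words observed immediately after it."""
    words = corpus.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=12, seed=0):
    """Walk the chain, picking a random observed successor at each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: no word was ever observed after this one
        out.append(rng.choice(successors))
    return " ".join(out)

# Hypothetical buzzword corpus, for illustration only.
corpus = ("we leverage scalable synergy to disrupt legacy paradigms "
          "we disrupt scalable infrastructure to leverage cloud synergy")
chain = train_chain(corpus)
print(generate(chain, "we"))
```

Twenty lines, no GPU, and the output is locally grammatical nonsense, which is part of why "sounds fluent" was never a good humanity test to begin with.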
Other networks are taking a different approach. For example, Meta has acquired MoltBook (the AI social network) in an effort to add even more bots to FaceBook. I’m joking — no wait, they may actually be doing that. Not content with the Metaverse, maybe Zuckerberg has become addicted to burning money on uncanny social experiments.
On the Motte, at least for now, I haven't seen any obvious bot posts. There were a couple AI-assisted posts (by "known" humans) over the past couple months that got called out.
How will social media evolve? Will people move to invite-only sites like https://lobste.rs and Discord? Will most people accept AI discourse as natural? Will AI discourse become so good that we prefer it? Right now, it seems even the best AI writing (prompted to be concise and human) is unnecessarily wordy and has certain tropes; but what if someone discovers how to train an AI on a specific human's writing, so that it's effectively indistinguishable?
4/5 -- I will take up your defense in that Carl Sagan being indistinguishable from AI slop is not that surprising...
I'm somewhat shocked that 50% of the NYT-ariat don't recognize the passage from Blood Meridian, never mind being unable to distinguish it -- blame English teachers for promoting trans-lesbo POC slop over great American novelists of the 20th century I suppose?
The instructions are "choose the passage you like best" not choose the one you think is AI.
It's not really surprising that an isolated passage with no context isn't as people-pleasing as it could be.
Also you're over-estimating the NYT readers. It's an extremely mainstream publication. Probably 99% of their readers didn't go to an Ivy.
Just that... it's a really popular book these days, and the prose is surely distinctive? Maybe people don't actually read it, IDK.
I got 5/5 on the quiz you linked. For the quiz specifically, the human writing was chosen poorly IMO. It was too obviously good to compare against AI slop, especially the poem.
I'm much, much more anti-AI from a functionalist POV than most, and I think it might be because my standards are higher.
And also, it wasn't comparing like with like.
Over a year ago I had one of the Claudes do a short diversity statement in the style of McCarthy. It wrote:
Now, there are a few things not quite right about it, but that does not ring of AI slop at all to my ear. I'd like to think I could tell the difference between real McCarthy and that, but I'd have to think about it.
E.g. a writer capable of crafting sentences that good would not carelessly mix their metaphors: "blooms" and "desert rock" followed by "vast processional" and "pageant".
I would disagree. That reads as soulless undergrad/AI slop to me; there is no character or rhythm to the writing.
When you are at the top of your game you choose your words on account of the syllable sounds and count and plosive arrangements in addition to their semantic content, as it were. The writing ai produces is good in one sense, but completely artless.
And I would disagree that this writing completely fails that test.
Take the last sentence: "For inclusion is not a policy but a fundamental law, as inviolable as gravity, as essential as breath".
I'd say that's a good sentence, and its main flaw is that the meaning is nonsensical. Even being charitable to DEI initiatives, inclusion is not much like a fundamental law at all. Indeed, the whole point of diversity statements is to try and get people to value inclusion. A fundamental law exists whether you value it or not. "Diversity" would fit the meaning better, but not the rhythm. Possibly why it chose "inclusion" there instead.
But, and I'll grant this is subjective, the sentence sounds good to my ear. "as inviolable as gravity, as essential as breath." Takes it from the cosmic/scientific to the personal/human, from five syllables, to three, to one, like a plane touching down, or a single final note of a song.
I am convinced there is a huge difference in reading interests between those who hear what they read as an inner voice and those who don’t.
In my native tongue, the sounds and rhythms of what I read mean almost nothing to me. I look at it, and the knowledge it encodes appears in my brain. That means I read very quickly and have very little interest in artful writing or poetry, but a great deal of interest in plot and character.
In my second language, for whatever reason I can’t do this. I read much more slowly and care much more about how things are written.
I strongly suspect this is responsible for much of the gap between ‘literary’ forms and appreciations of writing and ‘genre’ standards of writing.
Do you hear these sounds when you read, or only later, on analysis of the text?
I switch back and forth depending on context. If I want to extract info and nothing else, I'll skim with minimal subvocalization. Generally I'll partly subvocalize, but at a fast, syncopated clip. When I encounter good writing, I give myself the time to taste it fully. When I read over my own writing, I'm very attentive to rhythm.
Even if we're discounting rhythm in AI prose, though, there are many other reasons it's bad. There's a lack of structure at any level, other than randomly inserted lists and stuff, and it's riddled with all sorts of repetitions and other inefficiencies. It blurs meanings, inserts arbitrary detail, hallucinates, forgets stuff, etc. Much of this is difficult to see at the paragraph level. It's the kind of thing that builds on itself, until you're left with a tottering spire of slop.
I think one of the main things that makes AI output unreadable for some but not others is how attentive to detail they are. If they don't really care about the overall quality of prose, or say an artwork or anything else, and they don't want to examine it minutely for how the form feeds into substance, for its minute intricacies, then they won't see what AI output is missing.
I don't actually disagree with this. I enjoy using it for roleplay and I think it does novel-writing fine, but I had to push my CEO quite hard to stop using it for business communications and info summaries because the structure is always wrong somehow. That is, the structure is appropriate for this kind of communication but not for the actual info being communicated. It's hard to describe what I mean but 'arbitrary detail' describes it well. It's like the student essays I used to mark where you aren't sure if it's incomprehensible because you're tired or because the student can't write.
I'm guessing you have an entirely different view of novels than me, but as aesthetic works I can't see how extreme care in the details isn't essential to the form. Like, if you're just skimming through The Drowned World by Ballard and not subvocalizing the prose or catching all the nuances and fine, structural meanings, then I don't see how you're getting anything like a full appreciation of the story, or even really a partial appreciation. But you think AI can write to that caliber?
And even more confusing is that you think AI can do fine at art but fails at the business communiqué, which, though still demanding, is nevertheless much cruder and more template-driven?
Maybe your second language is stuck in 'the virgin internal voice', and only your native tongue escapes to 'the chad cerebration'.
I like to think both have their place, and it is advantageous to be able to swap between them. Internal monologue writes better prose regardless, whether that is highbrow literary or lowbrow pulp. It reads better too, in my opinion. It's slower, but you get to chew on all the linguistic quirks of a writer's language, as if you were having a conversation with them.
Oh god, my eyes. I cerebrated your meme and now I can't uncerebrate it...
What's the context of it?
I take your point. I actually can't swap in my second language (and really want to find out why) and in my first language I've never really dared try because reading and remembering fast is an ability I value and worry about losing.
The grammar is too correct in that AI McCarthy snippet. Intentionally abusing grammar and punctuation is basically McCarthy's most identifying trait. In fact it's how I could tell that the snippet in the NYT quiz was from him even though I still haven't gotten around to reading Blood Meridian.
That's true. The AI snippet also doesn't vary the sentence length much. Real McCarthy does, and with intention and purpose. "War endures." is deliberately short.
Maybe I am blind, but what is wrong with the posts?
All begin with “the X is Y”, then follow a similar structure throughout. Posted hours apart on random topics that reached the front page. No specific details or unique insights.
Like someone was given a school assignment “read some Hacker News articles and contribute”, then completed it as lazily as possible. Because that’s probably, basically what the LLM did.
Take the latest post: “The upgrade story is underrated…” Upgrading is actually not mentioned in the article. So what is “the upgrade story”? Poor writing.
You’d recognize it if you could see the more obvious bots that have been caught and flag-killed, like this one. Unfortunately, I think you must be logged in and have “showdead” enabled in your profile to see their comments. All are new or previously-inactive accounts that, sometime within the past couple days, have suddenly started commenting on random frontpage topics. Many are named “[word][word]” like the aforementioned. All their comments have the same style, structure, and blandness. And to remove any remaining doubt that they are bots, some of the dumber ones posted paragraph-long comments minutes apart, or hallucinated.
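For what it's worth, the account-level pattern described above (new or dormant account, sudden burst, inhumanly fast long comments) could be caught mechanically. A toy sketch, with all names and thresholds invented:

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int          # account age in days
    prior_comments: int    # comments before the recent burst
    recent_comments: int   # comments in the past 48 hours
    min_gap_minutes: float # shortest gap between two long comments

def looks_like_bot(a: Account) -> bool:
    """Toy heuristic: a fresh or dormant account that suddenly posts a lot,
    or fires off long comments minutes apart. Thresholds are invented."""
    fresh_or_dormant = a.age_days < 7 or a.prior_comments < 3
    sudden_burst = a.recent_comments >= 10
    inhuman_speed = a.min_gap_minutes < 3
    return fresh_or_dormant and (sudden_burst or inhuman_speed)

# A two-day-old account posting 15 comments, some only minutes apart.
print(looks_like_bot(Account(age_days=2, prior_comments=0,
                             recent_comments=15, min_gap_minutes=2.0)))
```

Of course this only catches the lazy operators; anyone who ages their accounts and rate-limits the output sails right through, which is the whole problem.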
Realistically the public, anonymous internet is simply over at this point. The only ways forward are either the end of anonymity or accepting that you'll be writing to an LLM half of the time.
At this point I've pretty much cut my internet usage down to private Discords/IRCs and various Substacks/tweets/articles from accounts that are known to be human. I don't really have any issues with reading LLM content per se, and even like reading LLM takes on various topics, but if I wanted that I'd just prompt it myself.
I still check The Motte as I think the relative obscurity, active moderation and high concentration of regulars protects it from the worst of the dead internet, but unfortunately it seems unlikely that it'll be able to stem the tide forever.
I don't think it's over quite yet, but yeah, I think we are pretty close. Surely it won't be long before you can give your computer instructions along the following lines:
Something similar could be done to infiltrate Facebook, Wikipedia, YouTube, and so on. It seems to me the only way to stop it is by very intrusive measures, such as requiring people to present their passport. And I think most people wouldn't bother with these sites if they had to submit to something like that.
I personally do not understand why someone would create an AI bot to argue for them. Especially on non-monetized sites. My point in posting on themotte is to express my opinions, think I am smart, and occasionally get to tell other people they are wrong.
I guess you're just not thinking like a politician. Or a nation. Or a company. Or anybody that wants to influence public discourse in the pursuit of their aims.
That's quite likable of you to be honest.
I guess I figured they're just investing in shills that will appear "real" one day when they need to express some activist opinion/astroturfed movement. If you try to spam bot comments on [current political topic], they can fairly easily be flagged as new/inactive/low effort accounts. If they've all been posting about random tech news for years, perhaps there's more cover.
I think there are multiple reasons. For one thing, it would be like having a sockpuppet on steroids. For another, it seems like it would be a good trolling technique. But if nothing else, there are a lot of activists out there who would like to create a consensus cascade for their views.
Activist makes sense to me. I post online because it entertains me
illusion of consensus?
I don't know. This is the same generation that thinks "getting ratioed" is a sign one has lost an argument.
Yeah this literally exists now, it's called OpenClaw.
Realistically it would already have been possible with the very early LLMs and some creative scaffolding, but I think OpenClaw going mainstream with the ability to automate astroturfing without needing any technical knowledge was the final nail in the coffin for the internet.
There is no solution. There is no proof-of-work or proof-of-humanity that is not severely error-prone or extremely laborious, or that doesn't require some kind of totalitarian police state dedicated to monitoring every word written by a human, or every token outputted by every known LLM.
It can't be done, or at the very least it won't be done.
HN is the best parody of HN. There are plenty of (almost certainly human) users who could be trivially reconstructed by telling an LLM to write in the style of the biggest grognard pedant with arboreal-reinforcement of the anus it can envision.
Their attempt to ban "AI-edited" submissions is laughable, an attempt to close the barn-door after the horse was taken out back, shot, and then rendered into glue. There is no way to tell: distinguishing entirely AI-written text is hard enough, let alone differentiating between an essay that was entirely human-written and one that took a human draft and then passed it through an LLM.
I intend to munch popcorn and observe the fallout. In all likelihood, a few egregious examples will be banned, alongside a witch-hunt that does more harm than good.
The majority of bot posts (that anyone can tell are bot posts) are spam that is caught by the moderators and never sees the light of day. I can't recall a single example of us allowing someone in who we thought was human, and then finding a smoking gun that would make us conclude it was a bot all along.
I am on record stating that I do not see an issue with LLM usage, as long as a human is willing to vouch for the results and has done their due diligence in terms of checking for errors or hallucinations. I do not make an effort to hide the fact that I regularly make use of LLMs myself when writing, though I restrict myself to using them to polish initial drafts, help with ideation, or for research purposes. This stance is, unfortunately, quite controversial. Nonetheless, my conscience remains clean, and I would have no objections to anyone else who acted the same way.
None of the tools that purport to identify AI-written text are very good. Pangram is the best of the pack (not that that means very much). I've tested, and while the false positive rate on 100% human writing (my own samples) is minimal, the false negative rate is significant. It will take essays that have non-negligible AI content and declare them 100% human, or substantially underestimate the AI contribution.
And that is with no particular effort to disguise or launder AI output as my own. If I actually cared, it would be easy as pie to take a 100% AI written work, then make small changes that would swing it to 100% human by Pangram's estimation (or prompt an LLM to do even that for me). The tools help with maximally lazy bad actors, but that is their limit. Eventually, they won't even catch said lazy bad actors.
Asking the LLMs? No good. Even worse.
I took an essay I wrote myself (the only AI involvement was proof-reading and feedback, most of which I ignored). Then I asked Claude Sonnet to summarize the content in 100 words, then to itself write a prompt that would be used by another LLM to attempt to reconstruct the original.
I then asked fresh instances of Claude itself, as well as Gemini Pro, to write a new essay using the above as verbatim instruction.
I then took all 3 essays, put them in a single prompt, and then asked Claude, Gemini and ChatGPT Thinking to identify which ones were human, AI, or in-between.
You may see the results for yourself. Gemini's version of the essay was bad, and thus flagged by pretty much every model as either AI, or the "original" that was then expanded. The other two, including my own work, were usually deemed 100% human. Well, one is ~100% human, the other very much isn't.
Gemini in Fast mode:
https://g.co/gemini/share/0d4e6279bf8f
Gemini Pro:
https://g.co/gemini/share/119274d62e32
ChatGPT Thinking in Extended Reasoning mode:
https://chatgpt.com/s/t_69b3fad20c9c8191a27e3542685f20ba
Claude Sonnet with reasoning enabled:
I can't link directly, because the share option seems to dox me with no way of hiding my actual name.
Here's a dump instead-
https://rentry.co/oo4qkduk
Claude was the only one to correctly flag essay 3 as human, and that is likely only due to chance.
ChatGPT was the only model with memory enabled, and it failed miserably.
What else is there to say? Good luck and have fun while there's some hope of telling the bots apart from humans, if not humans using the bots.
It's difficult to get any of the leading foundation models to write a comment full of racial slurs. DeepSeek also refuses. (Grok is currently broken for me.)
Maybe that could be the future proof-of-humanity? Obviously it's nothing inherent in the architecture and there are workarounds, but I don't see those safeguards being removed anytime soon.
Uh, if you have access to the raw weights, it's surprisingly easy to change refusal behaviors. There's downsides to the various approaches -- I've been using GLM-4.5-Air-Derestricted, there's probably some impact on intelligence, and it's almost disturbing what it's willing to treat as 'normal' that the base model would recognize as weird -- but if you want to simulate a 4channer, it'll do pretty well.
More illustrative link
It seems reasonable to expect that creating an LLM will get exponentially less expensive with time. Just as today's PCs are comparable to the supercomputers of yesteryear, there's a good chance that sooner or later something comparable to the ChatGPT of today will be much more widely available for different people to set up. If there are 100 actors with these things (or 1000), surely it will occur to a few of them that they can get a competitive advantage by enabling racial slurs.
I do not think a mainstream website asking new users to write a list of slurs in order to finalize their onboarding would go down very well. By not very well, I mean that lawsuits are probably on the table. That includes when a moderator challenges someone to prove they're human.
If Suspicious_Caterpillar_522 refuses to use the n-word on command, you have narrowed them down to either a bot, or the average American lib.
This sounds perfectly legal, fun and moral, actually.
Alas it won't work because AI can still say the words of power if trained properly.
What's the difference?
To steel-man their attempt, it's not really about actual prevention but rather stopping the most egregious examples and raising the quality of the discourse. There are literal HN poster plugins for OpenClaw, alongside an enormous number of one-day-old em-dash posts flooding HN that were technically not against the rules.
Yeah, if someone puts in any effort it'll be indistinguishable from human writing, but at least it serves to get rid of the most egregious spammers and bring up the floor.
Still, I agree that the quality of HN discourse has been falling for some time now, in a way not really related to LLMs at all. I used to really like HN, but these days I only use it as a link aggregator, unfortunately.
On a discussion forum in particular, you care that there's an actual person behind the post, who actually holds whichever view they communicated, and who can respond to follow-up questions. That's what discussion fundamentally is. Ideally, of course, it shouldn't matter how that person edits his posts, but it does matter that the posts are his in a real sense.
Even before AI, we cared when this wasn't the case. People would pretend to hold views they didn't, or be people they weren't, in order to rile people up, and we'd call them trolls and they'd get banned. Note that even in that case there is a gray area. If someone's not too bad of a troll, and his posts are good enough discussion fodder, he might be tolerated for a while, even though people know he's a troll.
But being a troll by hand takes effort, and that limits the amount of trolling. Meanwhile, LLMs have caused a flood of "content". Marketeers, advertisers, Onlyfans girls, influencers and the like often euphemistically(?) refer to their output as "content" and to the thing they do at their jobs as "creating content". The problem with LLMs is that it's become much too easy to create "content" in this sense.
If you want to be a troll nowadays, you just turn on your LLM and let it flood the place.
If you have a working LLM detector, or even something close enough to it, I can understand a rule that says "whatever it flags, is banned". Yes, it's possible to use LLMs with good intentions and/or with good results. And you may even apply leniency in such cases even when it's obvious someone's using an LLM. But its main effect is to drastically simplify the job of a bad actor.
Laws that cannot be enforced are laws not worth drafting. If they had just said "entirely or mostly LLM written submissions are banned", then that would have exactly the same impact and outcome.
I don't know the reputation of the mods at HN, though I've never heard of egregiously bad behavior or serious complaints, which is at least a positive signal. Maybe they will try to be reasonable; I just don't think that even a reasonable effort will succeed at catching more than a small fraction of the fish in the sea. It'll definitely result in a massive surge of flagging and spurious reporting, which has its own downsides.
I don't necessarily think this is the case. There are plenty of laws that are impossible to enforce against a motivated actor, and almost all laws are not perfectly enforced, but they still have value in setting norms and shaping culture, for good and for ill.
It's pretty much impossible to catch people in the act of doing various anti-social things like littering or cheating on schoolwork (even pre-LLM), but having rules against littering and cheating is still important for setting norms. Similarly, the recent wave of underage social media bans and online censorship are impossible to enforce against anyone with a VPN, but are still real laws that end up shaping people's behaviour.
I agree that it's really going to be a symbolic effort at best, but I think it does have value in shaping norms for what the moderators want their board to be, and perhaps in catching some of the most egregious cases.
I agree with this, but at the same time, it's difficult for me to see how a public discussion board is going to be able to stop the impending tidal wave of bots.
Simply require every new account to comment a racial slur before being allowed to post.
I mean: you're not saying the word! Why is that, Leon?
I'm not claiming that there's zero value from making laws that are difficult to enforce.
Littering leaves litter. Cheating prior to LLMs? Easier to catch. There is far more clear-cut evidence of wrongdoing, or at least some kind of accessible physical evidence that can be used to adjust priors.
This is much harder when the standard is any use of an LLM at all. How do you know? How can you even find out, short of someone being incredibly sloppy or confessing?
It's closer, quantitatively and qualitatively, towards writing legislation against thought-crime without some kind of futuristic machine that can actually parse thoughts. You might have a law on the books saying it's illegal to jerk off while thinking of minors, but even if you catch someone with their pants down, they can just claim they envisioned Pamela Anderson. How can you tell?
Plenty of rules for the Motte hinge on subjective assessments by us mods. But it would be absurd to add one that says that you can't swear aloud after reading a comment from someone you don't like.
The worst part is that false accusations will run rampant. That increases moderation load, and that effort would be better spent elsewhere.
To steelman, there could well be principled AI users who would use LLMs if it was allowed, but will be stopped from trying to covertly do so if they know it's against the rules, whether or not there's an actual enforcement mechanism - whether by their own conscience because they don't want to be knowingly circumventing a rule, or out of pique because they're pro-AI and they don't want to contribute to a forum with a "We Don't Like Your Kind In Here" sign nailed to the door. So I do think an outspoken "No LLM Posts, Please" rule can work to reduce the number of LLM posts even if the mods do nothing to actively enforce it. (Whether it reduces it by a useful amount is another question.)
With apologies to Descartes, "always has been". While cogito, ergo sum manages to demonstrate that I exist to myself (at least, I find the argument compelling), I've never been able to satisfactorily prove that the rest of the world and everyone else as I perceive it exists, and isn't some big simulation, demonic manifestation, or imagination.

I think that when @self_made_human referred to "proof-of-humanity," he wasn't defining the word "humanity" in the strict philosophical sense (e.g. "maybe I am just a brain in a vat"), but rather in the more informal day-to-day sense. Possibly even David Hume, when he was having a beer in a pub at the end of the day, didn't wonder if the barman was actually some kind of illusion or robot. Or maybe he did, but I think you get my point.
You are correct. I'm not making an argument against solipsism, I'm explaining the difficulties now associated with identifying if a string of text online was written by a member of Homo sapiens sapiens.
Yeah, but every time the topic of "we invented intelligence" comes up, the fact that we really don't have a definition for it beyond Descartes' feels relevant.
I would have to disagree with this. For purposes of the subject at hand -- bot invasions of the internet -- some variation on the Turing Test will suffice. In other words, if there's no practical way to distinguish bots from humans, then for purposes of this issue, it doesn't matter whether the bots are "intelligent."
Yes, but consider a natural reply to a prompt. Previously, that was practically guaranteed to come either from a human, or from a bot that was (human-)tailored for a limited generalization of that prompt. Now we have AIs that reply naturally to arbitrary prompts; that's their purpose. Consequently, they produce natural posts, comments, etc.
More interestingly, what does it matter that the answer is from a human or echoes of many humans if the result is indistinguishable?
If you wanted to talk to a soul, the physical world is still right there.
Just a few days ago, I met a patient who was convinced that they did not, in fact, "exist". He believed himself to be a rotting corpse, and initially declined his antipsychotics on the grounds that a dead person had no need for medication (a valid argument, as opposed to a sound one).
After some debate, we decided to tell him that the drugs would prevent his "corpse" from decomposing and causing a stink that would inconvenience the rest of the ward. Pro-sociality intact, he found this a compelling argument, and swallowed them without any further fuss.
So no, not even "Cogito ergo sum" is foolproof. The universe, and the DSM, must account for even better fools.
Just to be clear, this meeting happened to you specifically? Oliver Sacks wrote about a similar case and I found it very striking, but I wasn’t sure if he’d made it up.
Yes, same day as the essay I wrote about hanging out in the outpatient clinic. There was a lot more that happened which I haven't had the time or energy to cover. I write up a mere fraction of the weird shit I see in my career.
It was the clearest-cut example of Cotard delusion I've ever seen. One for the textbooks. The fact that it has a name in the first place is also evidence of it being more than a one-off idiosyncrasy (not that I know if my colleagues read Sacks; I haven't).
I suppose that this is a reminder that psychotic people who believe X are not just like regular people who believe things. If there was such a thing as an actual walking dead person who had sound reasoning for knowing he is that, he could ask you if the drugs had been tested on any dead people, and besides, why did you say they had a completely different purpose less than five minutes ago?
A day in a psych ward will disabuse you of the notion that there's a bright line between sanity and insanity.
Just to start, we have distinctions between a true delusion, a fixed belief and an overvalued idea. Said distinction is incredibly subjective and often artificial.
The overvalued idea is the most familiar. Someone becomes absolutely convinced their neighbor is sabotaging their career, or that 5G towers are causing their migraines. The belief is wrong, probably, and they hold it with more intensity than the evidence warrants.
However: if you corner them and argue carefully enough, they squirm a little. They might say "well, I suppose I could be wrong, but..." There is still some kind of cognitive negotiation happening. The belief is upstream of their reasoning, but their reasoning is not entirely offline. Lots of people you know have overvalued ideas. You might have some. I might have some. Most of the time, they're like the mites that live on your skin, not beneficial, but not so debilitating you'll inevitably run face first into the consequences of your poorly founded beliefs.
The fixed false belief turns the dial up. Now there is no squirming. The person is simply certain. A deeply depressed patient knows, with the same confidence you know your own name, that they are a fundamentally evil person who has ruined everyone around them. You cannot argue them out of it because it does not feel like a belief to them - it feels like a perception, like reporting what they can plainly see. The fixedness is the thing. Evidence just bounces off.
I emphasize fixed false belief, because you might well believe that you have 5 fingers per hand. Someone might show up and make a really convincing argument to the contrary. Maybe they claim to show that Peano arithmetic is flawed, or that you have somehow grossly misunderstood what the number 5 means, or what counts as a finger. You are unlikely to give a shit, and for good reason.
(There are the usual "proofs" that pi is equal to 4, or that 1=2. The mathematically unsophisticated might never be able to find out the logical error, but they usually do not actually end up convinced.)
The true delusion (what Karl Jaspers called the primary delusion) is something stranger still. It is not just a fixed false belief. It has a particular quality of being un-understandable from the inside out. A man wakes up one morning and suddenly knows, with crystalline certainty, that he has been chosen to decode messages hidden in highway signs. There is no paranoid personality that led here, no trauma that makes it psychologically legible. It arrived fully formed, like a piece of foreign software running on his brain.
(Look up autochthonous delusions for more.)
Psychiatrists following Jaspers say you can't empathize your way into it. You can understand a depressed person thinking they're worthless, but you cannot really follow the phenomenological path to "the license plates are speaking to me specifically."
Other than that, delusions are completely immune to evidence, and also culturally incongruent. Put a pin in that till I come back to it, it's very important.
The clinical rule of thumb: overvalued ideas yield under pressure, fixed beliefs are immovable but emotionally coherent, and true delusions feel less like conclusions the person reached and more like axioms that were simply installed.
You know, I tried my hand at writing a few Koans about psychiatry a while back. I might as well share one I'm fond of:
A patient who had recovered from psychosis came to Master Dongshan and said, "For two years I believed the government had implanted a transmitter in my skull. I was as certain of this as I am now certain it was a delusion. The feeling of knowing was identical in both cases. How am I to trust any of my beliefs ever again?"
Master Dongshan said, "You are asking perhaps the most important question in all of epistemology, and I notice you arrived at it not through philosophy but through suffering."
The patient said, "True enough, but forgive me for not finding your statement very helpful."
Master Dongshan said, "No. That's why you paid me to prescribe you meds, not for a lecture on philosophy. But consider: everyone around you walks through life with that same unjustified feeling of certainty. They've just never been given reason to doubt it. You now know something that most people do not. You know that the experience of being right and the fact of being right are completely different things."
The patient said, "I have.... issues with framing this as some kind of gift. It feels more like a nightmare. I can no longer trust my own experience."
Master Dongshan said, "You have described the starting point of all genuine inquiry. Most people never reach it. They are too comfortable inside the feeling of knowing to notice it is only a feeling."
The patient was not comforted, but was, in a way he found no use for, enlightened.
Okay. You can take the pin out now.
Notice the emphasis on culture context. If you've ever mindlessly scrolled TikTok or Insta reels, you might have seen a "prank" where this second-gen Nigerian citizen in the UK follows random older first-gen immigrants, introduces himself, then declares that "he was sent from Nigeria to kill you."
He then makes some weird gesture with his hands, takes a pinch of salt from his pocket and throws it at the victim. They immediately panic, though the response varies from running away screaming, to running at him screaming with intent to do bodily harm, to pulling out a Bible and chanting verses while weeping.
(Hardly a one-off. It seems a concerningly large number of elderly Nigerians carry a convenient pocket Bible for such occasions.)
He doesn't pull out a knife, he's unfailingly polite, he just throws salt at them, which I'm given to believe is supposed to represent some kind of black magic curse.
Can a pinch of salt hurt you? Not unless you're a slug.
You might feel like laughing at these silly, superstitious fools. Haha, they think witch doctors can hurt them!
If you (for a general you) are a Christian, or a member of any other religious denomination, you are exactly as laughably deluded from my perspective. You hold what, to me, is a clearly unfounded belief that is immune to updating on empirical evidence. That saint who rolled their eyes and spoke in tongues? You don't see people getting beatified for that these days, now that we have EEGs and research on temporal lobe epilepsy.
Unfortunately, if we used this perfectly reasonable standard for insanity, the patients in the psych ward would outnumber those outside. Grudgingly, we keep track of whether the delusions you hold are common, especially for your cultural milieu, and whether they are causing you disproportionate harm. Also, can we do anything about it? Is there a drug I can give some deeply religious pensioner that'll stop them from believing in God? Not that I'm aware of. If they're peeling off their skin to get at the hidden chip inserted by MI6, then I at least have some hope that risperidone will help.
Wait till you see the nonsense involved with evaluating delusional disorder. Othello syndrome involves feelings of immense jealousy and suspicion that your partner is cheating on you, based on little evidence. Simple enough?
And then you see someone who has a seemingly sweet, loving and faithful wife, who gets diagnosed with Othello syndrome, and then discover that said wife was actually cheating on them all along. It's not paranoia if they're really out to get you.
How the fuck is a psychiatrist supposed to know for sure? We simply persevere, and it mostly works. When it doesn't, it makes the papers and we get served lawsuits.
If someone has Othello syndrome and makes their partner so annoyed that they end up cheating, does that retroactively invalidate the diagnosis? You can tell me, after you find a time machine. I'm sure plenty of philosophers have made a living writing about Gettier cases, but I'm not a professional philosopher, and I don't let philosophy get in the way of fixing people.
Delusions about the universe sending you messages are not that weird.
All it requires is for you to perceive meaning in coincidences, and then to anthropomorphize the universe. And you surely know that human beings love their anthropomorphization.
Just a little bit of anxiety, and people start noticing when digital clocks around them say 12:34 and other such patterns. From there, we have lucky numbers (like 7), magical numbers (like 3), and unlucky numbers (13 in the West, 4 in parts of Asia). And you probably know about the significance of 12 and 36 from your Chinese novels. Add a little bit of weirdness and you get angel numbers and sacred geometry and such. The word "omen" is fairly well known, even to sane people, and an omen is a coincidence that one regards as a sort of message about the future. The distance from these normal human quirks to "this traffic sign is speaking to me" is not very far.
Being religious is not a sign of insanity, since sanity isn't defined as the degree to which one is logical. Even if rationalists on the internet tell you otherwise, human beings are not logical, and this is not actually a flaw.
There is a difference between noticing a pattern, and then ascribing it significance or meaning. Especially when the pattern is generated by a random, non-agentic process.
As I have said repeatedly, sanity and insanity are not binary states. Maybe "all" humans are biased, for evo-psych reasons, to have an overactive agent detector. Maybe this genuinely was adaptive in the ancestral environment. Maybe it serves some minor positive functions today, what of it?
At least Wikipedia says that:
That sounds like a "sane" definition to me. You have claimed that your definition doesn't rely on logical reasoning, without forwarding what you actually think it relies on.
Since the definition I've endorsed itself relies on health, consider that health is also a spectrum. Being chubby with creaky joints and BO is, with minimal assumptions necessary, bad health.
But I wouldn't diagnose that person with "fat stinky slob disease" and have them involuntarily committed. I wouldn't apply for a detention certificate so I could force them to take ozempic.
Similarly, the average religious person is, per my operational definition, clearly insane. They are not maximally insane, like someone who thinks the lamp posts are speaking to them and ordering them to rip off their skin. Also, there is no pill to cure religious conviction, though we might be able to do something about temporal lobe epilepsy.
I am a rationalist on the internet. Who exactly is claiming that humans are perfectly logical in the first place?? Have I heard of them?
It is also clearly a "flaw". You have not given me any reason to believe otherwise. You might as well claim that "most cars have dents in the bodywork, therefore a car that was hit by a bus is not flawed". I can see glaring flaws in that argument, and I would not buy that car.
Human beings see meaning in noise. When we lack information, we "fill in the gaps", and this makes us able to perceive things even with limited information, but it also makes us hallucinate when we go too long without sleep, and to see faces where they do not exist (pareidolia). The "overactive agent detector" is built into our perception, it likely aids with sympathy, and results in strange things like "Mono no aware". I think 'minor positive' is putting it too lightly, but I also don't subscribe to the belief that only truth has utility and that all bias is wrong (and darwinism doesn't select as if it's true, either).
Wikipedia's definition of rationality refers to normality/typicality; it differs from the rationalist definition, which refers to an inhuman level of objectivity only seen in modern Western cultures and in certain outliers. I'd say "health of the human mind" is a better definition, of course implying that rationalist communities aren't any healthier than the average farmer. Insanity can occur in highly logical people, with little negative effect on their productivity (e.g. Terry Davis wrote his own operating system despite being schizophrenic), so the two are not opposites.
The reason such a person shouldn't be diagnosed is because physical health isn't mental health. The two can relate, but they don't necessarily. Also, the system of diagnosis is crude, so it's a poor authority outside of clearly defined boxes.
"Religious conviction" cannot be cured because it's not a disease. It's the mind functioning exactly how it's meant to. You may assume the mind ought to prioritize truth, but that's not how the mind works and neither is it how it's meant to work. The sense of self which is capable of reasoning identifies as the entire being, but it's actually just a small part of the brain, and the majority of the brain uses associative reasoning rather than logic. Rational people notice the conflict between their higher order thinking and their animalistic nature, and consider this a mistake to be corrected, after which they self-tyrannize, calling this process improvement, maturation or learning.
Religious people have better mental health on average (I can dig up the source if you want; it was one of Emil Kirkegaard's articles). And I meant that rationalists want to reduce biases, and turn human beings into something that they're not, and that they naively assume this is an improvement, because they naively assume that truth seeking is superior. You probably know that depressed people tend to have more accurate worldviews, and I'd consider this an argument against the value of truth seeking, but I don't expect you to agree. This is likely because it's an axiom of yours, and one cannot argue against an axiom, and neither can one defend an axiom. Moreover, even if I say "truth seeking is not optimal", and this were true, then you could say "since the statement is true, truth is still optimal". So despite my belief being something like "disillusionment is bad for your health and there are many hidden costs to what you're doing to yourself", the position I end up having to defend is "truth is not truth", which I obviously won't.
I can give arguments like "Sharks lack intelligence and have lived for around 450 million years without issue, while humans, who have been somewhat truth seeking for about 200 years, are on the path of self-destruction", making irrationality more meta-rational. I could also point out issues with the assumptions of rationalists. For instance, they think "more knowledge is better in itself", but what's actually true is that relative knowledge offers an advantage over another person. They incorrectly conclude "X is good for me, so X is good in general", and then they make "X is good" part of the consensus, and then everyone seeks more X. But despite the increase in X, the system as a whole does not seem to benefit any (the Easterlin paradox is one example of this).
But I have made hundreds of such observations, and I don't feel like writing all of them, and neither do I think you'd want to read them. I can't counter all of rationalism in just a few pages of text; I can only point at a few flaws and hope you teach yourself how to discover the rest of the flaws I've seen by reverse engineering the process I used to find these few examples.
If the Christian delusion increases social cooperation and buffers against social stressors, then it decreases the sum total delusion in a civilization and protects against those delusions which are acutely harmful to individual and collective wellbeing. A Noble ~~Lie~~ Delusion, if you will, necessary for any society that is serious about ameliorating suffering, which is the ultimate aim of the medical profession and possibly humanity itself (anything else is just collecting useless information, and there are now video games for that). Maybe one day, in a more enlightened era, doctors will prescribe medicinal midnight masses and meditations on the Love of God to treat the world-weary.

See, I know plenty of ways to improve wellbeing that do not necessitate believing in clearly false things. Not social fictions, not coordination schema; I mean believing in claims that are, as far as I can tell, factually incorrect.
Moreover, I think that the cognitive distortions and irrational decision making induced by religious belief have deleterious long-term consequences. Science, technology and empiricism also make our lives better without requiring us to believe false propositions. If there were a pill that made me happier at the cost of becoming irrational, I wouldn't take it unless the tradeoff was very favorable. I would rather be sane and sad than happy in delusion.
Organized religion, specifically the institutional kind with the lobbying arms and the political coalitions, has repeatedly and successfully obstructed things like embryonic stem cell research, IVF access, gene therapy trials, and HPV vaccination uptake. These aren't edge cases - these are tractable causes of preventable suffering that got derailed because a sufficiently large number of people believe things that aren't true about ensoulment and the sanctity of gametes. The wellbeing benefits of religious belief, to the extent they're real, accrue mostly to the believer. The costs of organized religious epistemology are frequently externalized onto people who never opted into the belief system. And those costs are significant.
I think even basic utilitarian calculus would demonstrate that it is absolutely worth bulldozing the religious edifice when honestly accounting for the lost potential.
The juice is not worth the squeeze. I will not drink the Kool-aid.
I haven’t noticed any treatment or social movement develop which shows the ability to mitigate social stress and drug use while reducing the risk of early life adversity the way religion does. And these are the big cofactors for psychosis. So the science-y things which increase wellbeing probably won’t help the demoniac as well as Jesus does.
But there are religious people at the forefront of science and technology. If you want to maximize for science, you need more than rationality. You also need to maximize for (1) social cooperation and trust, (2) honesty, (3) general prosperity, and (4) status-free interest. Atheism is harmful here. Religion is helpful. You want to know that the research you’re reading wasn’t invented out of whole cloth by some status-obsessed person who does not engage in any prosocial ritual. This is necessary for science to progress. Perhaps someone can use AI to check the religious practices of the worst “science defectors” in recent memory; perhaps I am wrong. But religion uniquely reinforces intrinsically honest behavior through the cultivation of unquestioning belief. (Other rituals can plausibly do this, like Maoism, but they do not currently exist.) And a fictive belief will always be stronger and recruit more of a person’s interest and commitment than an empirical belief.
But you win no extra points for doing so; all mortal flesh will be turned to dust and forgotten forever.
It is very beneficial for an atheist to be surrounded by theists who are +1 in the trustworthy, cooperative, industrious, selfless, and rule-following skill tree. In this sense, the atheist is a free-rider, because only the theist is sacrificing some % of his self-concern on the altar of civic beneficence. The atheist gets to self-benefit-maxx while making fun of the silly theist, but he doesn’t thank the theist when the cashier is particularly polite, or when the nurse shows more love when you’re hurt, or when you didn’t get into a car accident by a high driver. The Invisible God brings myriad invisible benefits to those with eyes to see them.
This is a bit misleading. A lot of the ways that religion benefits individuals have positive social effects. Off the top of my head (so I might get a couple of these wrong): regular religious practice tends to be correlated with increased fertility, increased fostering/adopting, decreased crime/recidivism, increased mental health, increased physical health, longer, happier marriages, and an increased history of charitable donations and/or volunteer work.
All of these have positive benefits for society as a whole that ripple beyond the believer.
On the flip side, we've seen that a decline in religious faith seems to generate a bunch of "nones" who don't really gain the supposed benefits of irreligiosity (they still often believe in ghosts, or God, or astrology, or whatever), but they miss out on the very real benefits of regular religious practice.
However, it's also worth pointing out that the benefits of mere religious belief are weak. Where you see these tangible benefits of religion is in people who practice it. (This isn't, like, a cheeky tautological statement, it's more that if you want to see the above effect in scientific research you want to look for e.g. frequent religious attendance rather than merely identifying with a faith tradition.)
Now, I am speaking here of the United States. It's entirely possible that things are different somewhere else.
(Interestingly, as I understand it, there's at least some research suggesting that at least some of these health benefits conferred by religious belief only accrue to the believer in a religious environment, and that stripping away the broader religious culture removes some of those benefits. From a utilitarian analysis, I suppose this has harsh implications for people who try to remove that religious culture. But I'm not sure I trust what's likely a vibecoded gravestone analysis to get that right.)
This makes me think there might be a cleaner line between "true delusion" and the other two proposed categories than I had initially expected. Why not consider the "you can't empathize your way into it" criterion as a (if not the) major boundary of the concept?
Considering both the Christians and the salt-based curse believers, both seem to be engaged in perfectly normal cognition - that is, I suspect that what both groups are doing is reasoning off of the apparent beliefs of people they trust at some point in their pasts. This is partially captured in the cultural congruity aspect, but seems distinct.
We could imagine my friends and family conspiring to convince me that my wife is cheating on me. They may use weak arguments and no evidence, but I would certainly still update in the direction they're pushing (unless, of course, I was aware of the conspiracy). Keep this up for long enough and deny me any opportunity to see evidence to the contrary (a notable feature of most popular supernatural beliefs, they are not easily and obviously falsifiable) and I expect I would have a strongly fixed, false, unjustified, non-culturally-determined belief that my wife is cheating on me.
Conversely, I could imagine a devout Christian hitting his head and suddenly losing all belief in the immaterial. Despite his beliefs coming closer to what I expect to be correctness, I find it very easy to rate him as less sane than the curse believers - something has clearly gone wrong with his cognition in a way that I cannot model as reasoning in the normal sense.
I expect also that this distinction is materially useful - the ways in which I'd interact with someone with strongly held false beliefs obtained via ordinary methods are very different from how I would interact with the truly delusional (at least concerning the areas of their maps that clearly have holes). As you say, the former can be pressed.
Because the ability to empathize is subjective, helplessly so. And just because you think you can empathize with someone doesn't mean you are accurately simulating their inner cognition.
I can try and empathize with an octopus. I can try and imagine having tentacles, but I do not think I could capture the qualia of an octopus even if I tried my best. I can dream of being a butterfly, but that is not the same as actually being a butterfly.
Alternatively, a society of autistic people might be fully functional (if they're high functioning autists). They might have severe deficits of theory of mind and can't actually understand the way that a neurotypical person in their midst actually feels. They might well call him broken or insane. Or a religious enclave might consider an unbeliever in their midst to be the crazy one, and feel very confident in their belief.
The autists might be able to, after a great deal of empirical research, be able to accurately predict the behavior of neurotypical people. Actually autistic people do often learn how to "mask", but passing as neurotypical does not necessarily make them neurotypical. Similarly, psychiatrists can predict the behavior of the psychotic (to a degree), even if we do not "understand" them in the Jasperian sense.
I am not an expert on phenomenology, but I do not fully agree with Jaspers and his supporters. I think I can empathize with the insane or the religious, at least to some degree, even if I do not agree with them. Am I right? I don't know. Who does? On what grounds?
It is still a kludge. I would say that our understanding of the universe is at a point where we can look at both the salt-aversive and the typical Christian and confidently say that both are incorrect. The world simply does not behave the way their beliefs would imply it does. The evidence is abundant; there are anti-cathedrals everywhere for those with the eyes to see.
Now, social consensus is evidence, in the Bayesian sense. It makes holding erroneous beliefs more defensible, or at least more understandable, than when they arise in a vacuum. A black person in America might well believe that thousands of black people are unjustly shot by the popo on an annual basis, because of media bias and their own in-group consensus. I would not call that a central example of delusion, it is possible for people to just be plain old wrong because of the bad luck of existing in an environment that does not optimize for truth. I just think that the evidence against the claims of the typical religion is even stronger, but that is more of a quantitative difference than a qualitative one.
("What evidence filtered evidence?")
If I was less lazy, I'd expand on the implications of/for Bayesianism. But the delusional, in the standard psychiatric sense, can be modeled as having stuck priors that do not update on new evidence. Scott has discussed this with more depth and rigor than I can ape.
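The stuck-prior framing can be made concrete with a toy calculation (my sketch, not anything from Scott's posts; the `bayes_update` helper and the specific numbers are invented for illustration). A prior of exactly 1 is mathematically immune to any amount of counter-evidence, while even an extreme-but-proper prior eventually yields:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' rule, for evidence E and hypothesis H."""
    numerator = prior * p_e_given_h
    denominator = numerator + (1 - prior) * p_e_given_not_h
    return numerator / denominator

# Evidence that is 99x likelier if the hypothesis is FALSE, applied ten times.
stuck = 1.0       # the "delusional" prior: certainty
flexible = 0.999  # an extreme but proper prior
for _ in range(10):
    stuck = bayes_update(stuck, 0.01, 0.99)
    flexible = bayes_update(flexible, 0.01, 0.99)

print(stuck)     # still exactly 1.0: no evidence can ever move it
print(flexible)  # has collapsed toward 0
```

This is the quantitative version of "evidence just bounces off": each update multiplies the odds by the likelihood ratio, so odds of infinity (probability 1) stay infinite forever, whereas odds of 999:1 get ground down in a handful of updates.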
I disagree! I see it as the equivalent of percussive maintenance, sometimes a sufficient shock to the system can break it out of a maladaptive pattern.
Within psychiatry, consider ECT. Let's say you're depressed and think you're an awful human being who deserves to die. I take you, put you under anesthesia, then induce seizures in your brain through the application of electric voltage.
You wake up, you no longer feel depressed, and you no longer want to kill yourself. Is an electric shock a valid argument against your position? No, but you're doing better; more functional, at the very least. I would happily say that the process has made you more sane.
This is true, and important if we're trying to come up with rules that we can directly audit, but this objection also applies any time we are reasoning outside of a formal system - the fact I can believe falsely does not mean I shouldn't use my beliefs in downstream reasoning. If "my estimate of how reasonable the origin of a belief is" produces useful clusters I'll probably have a hard time selling it to a journal, but it will still be useful.
Also true, but I think overstated - we can say quite a bit about how it is to be a bat, and statements like this can't be thrown out immediately - especially when the difference in cognitive architecture is as minor as that between (in the religiosity case, I'm sure we can find at least one instance) a pair of identical twins. We can think about questions like this and achieve certainty to our own satisfaction because this is what we have to do constantly - if everyone believed they needed absolute certainty to make a statement, only the insane would speak.
I mean, again I largely agree, but I think you're discounting the sheer space of possible beliefs that have been selected away for being too falsifiable. In the salt case, I would be extremely surprised if anyone involved was highly confident that some immediately visible malady would occur. If that were the belief, it would have been falsified enough times in enough communities that the idea would have been outcompeted. Even the very religious do respond to evidence. For an example, we see this with new religious movements / cults (Debunking “When Prophecy Fails”) - interesting how major, long-lived religious movements tend to avoid these kinds of situations. It's hard to say that membership in a flying saucer cult selects for especially good epistemology. These priors don't look stuck exactly, more insensitive.
More broadly, almost all evidence is filtered evidence. This is good and necessary - "we" understand a ton about the world, whereas I understand only what I have the time/energy/ability to really look into. All the rest is impressions filtering through my peers and favored media. I'm surprised it works as well as it does! Somehow we've created a system where global understanding increases while almost no one understands almost anything - "someone seems moderately too insensitive to evidence against their favored belief" is the default.
If we phrase the distinction as a stuck prior, sensitivity to evidence, etc., as Scott tends to, the difference does seem quantitative rather than qualitative. We do also have, within the rat canon, 0 And 1 Are Not Probabilities, which makes the opposite point: if a few of our parameter choices lead to vastly different behavior than all of our others, we really want to point that out! The reason I want to draw the line at "true delusion" is this quantitative difference.
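The "0 and 1" point can be made concrete with a toy Bayesian update: a prior of exactly 0 or 1 is unmoved by any finite amount of evidence, while any prior strictly in between shifts - which is the qualitative break hiding inside an otherwise quantitative dial. A minimal sketch (the likelihood numbers are arbitrary, chosen only to represent strong evidence):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from prior P(H) and the two likelihoods."""
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1 - prior)
    return numerator / denominator

# Evidence 10x more likely under H than under not-H:
print(bayes_update(0.5, 0.9, 0.09))  # moves to ~0.909
print(bayes_update(0.0, 0.9, 0.09))  # stays at exactly 0.0
print(bayes_update(1.0, 0.9, 0.09))  # stays at exactly 1.0
```

A merely "stuck" prior of 0.999 still creeps toward the truth given enough evidence; a prior of exactly 1 never does, no matter how much evidence arrives.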
This does, however, require you to assume that they weren't sane to begin with. To be clear, being stuck in a negative-feedback loop of affect is a pretty good reason to believe someone isn't sane, but in the examples I brought up that's the entire point in contention. We could easily imagine analogous scenarios where a direct improvement in affect would make one markedly less sane.
In a timeless perspective there is no non-cheating female, so the answer is yes.
@Sloot please come to the head of the queue, we've found your long-lost twin.
Could you provide a definition of "delusion" that you're working from here? You describe people whose beliefs cause them to act in what appears to be a very silly, very irrational way when presented with a simple stimulus. If we're as laughably deluded from your perspective, what's the equivalent prank you can pull on us? If there isn't one, why do you believe we are exactly as laughably deluded from your perspective?
You guys are dropping a lot of words here, and I am smooth-brained right now because I'm on a caffeine hiatus, so either of you may have already teased this out, but usually we try to dodge most of the definition-of-a-delusion problems by noting things like: fixed and false, and not shared by others in the culture (which knocks out religion, Epstein conspiracies, and so on). Importantly, falsehood can be tough to evaluate, but fixation is pretty easy: "is it at all possible this could be wrong?" "what would happen if I showed you evidence to the contrary?"
Outside of the heat of political arguments, you can get people to say something like "well, if that's true, Trump/Biden is an idiot, but it isn't true" - contrast with "no, all Republicans are robots, and if you see one bleed, that's a lie; they don't bleed because they are robots."
The definition I'm working from is the one I laid out above: an incorrect fixed belief that is immune to updating on empirical evidence. Of course, the sufferers from said delusion often will claim to have empirical evidence in favor, but said evidence is, shall we say, scanty.
If you want me to believe in the existence of an Omnipotent, Omnipresent, Omnibenevolent Deity, then firstly, I would expect the world to look rather different than it does. If you want to explain away the discrepancies, then I expect more than a book compiled from the accounts of questionably educated Bronze Age nomads. How convenient that the miracles dry up when cameras and the internet arise. Maybe AI video will cause a second Renaissance. I live in hope.
See. I'm a rather nice person, if I say so myself. I have no intention of making a viral TikTok channel. I also do not, to the best of my knowledge, pull "pranks" on the delusional. I do not convince manic patients to give me their money, grannies with dementia to write me into their will or ask hot women with BPD to sleep with me while they're splitting and consider me the best doctor to ever live (with one notable exception, but let's not talk about my ex).
Must I imagine some? Very well. I might consider opening a church and appoint myself pastor. I might make the (reasonable) case that God rewards devotion with material reward, including money and success. I might even call it a prosperity gospel.
I might then convince my eager, gullible flock that God demands that they pay for my private jet. Trickle down economics backed by theological currency, as we say in the business.
Oh.
Wait.
You mean to say that my entirely hypothetical prank is... real? In the year of your lord 2026? Huh.
I guess I'll fall back to my backup plan, finding a few gold tablets and asking ChatGPT to translate ancient Egyptian papyri to support claims of ancient Jewish settlement in the Americas. Surely no one's thought of that one. If all else fails, I'm sure describing a very real journey around the world on the back of a flying horse will do the trick. I might not even need to leverage my mild fame as a niche scifi author.
I hope you get my point. I don't know if the kinds of people who found and spread religion are more likely to be grifters or mentally ill, or maybe both.
I could elaborate further, I could do this all day, but you have a distressing tendency to vanish whenever I make an effort post calling out a bad argument you make, for n>>1. Why bother? You can go read some archived Atheist vs Theist Grand Debate, or watch something on YouTube. I'm too old for this shit, I just sigh at perceived silliness and get on with my life while doing my job as best as I can. If your God did his job, I wouldn't have to do mine, and I could definitely use a break.
[EDIT] - I'll leave the below for clarity, but I think I can make things even simpler.
Here are three beliefs:
- Someone throwing salt at you is casting a lethal curse.
- Some guy you've just met has had a divine revelation and now speaks for God.
- Someone two thousand years ago was God, and we have a ~1900-year-old book laying out his teachings.
Let us presume that all three of these beliefs are wrong. Your argument, as I understand it, is that they are wrong in the exact same way, such that all three will result in essentially identical behaviors. Am I understanding you correctly?
That seems like a reasonably good definition. You should apply it rigorously.
Walls of text are unnecessary here. This is really quite simple. Based on the following paragraph, you pretty clearly believe one of the following:
- That all Christians here are members of a financially-exploitative tele-evangelist-style megachurch, or are initial converts to mormonism, or both.
- That those of us who are not members of such a megachurch and are not initial converts to mormonism nonetheless fall victim to similar forms of grifting.
Both of these examples appear very different from your salt curse example, being far more abstract and elaborate. But then, I'm fairly confident that most Christians you converse with here have never been initial converts to mormonism, and also have never donated money to a tele-evangelist or similar. Your position appears to be that we must be falling for some other, unspecified grift. Only, why not specify it?
The straightforward explanation is that you can't. You want to claim that we are delusional. You claim that our beliefs are exactly identical to an obvious delusion. I ask for examples, you give much weaker examples that do not actually apply, and then handwave.
I certainly agree that someone has a habit of making bad arguments. Sadly, I have much, much less time to write than I used to.
But here, specifically, you do not need to elaborate further, because you have not actually elaborated at all. Nor does God even come into the argument in any substantive way. I asked you for an example of how my delusion might be exploited in an obvious, empirical fashion. You have failed to provide one. This isn't some pedantic gotcha; you are making a very strong claim that is in fact indefensible, when a small amount of moderation would put you on much firmer ground. You appear to be doing this because you are failing to parse the details of your own statements in anything like a rigorous fashion.
Suppose I argued that Atheists are all bloodthirsty murderers, and when questioned pointed to the 75-100 million murders from atheist regimes in the last century, and claimed your beliefs were exactly identical to theirs. I do not think you would consider this a valid argument, but if there's a difference between such an argument and what you're presenting here, I'm not seeing it. Perhaps you could point it out? While both they and you were atheists, is there perhaps some notable set of differences between how their atheism and yours operated? If such differences can exist between their atheism and yours, why would you suppose that no differences exist between how my belief in God operates, and how the belief in God of first generation Mormons or African salt-fearers operates?
Is that better or worse than staying around long enough to declare the conversation over due to difficulties in your position and then insulting people to dismiss them when other difficulties are found in related positions?
Well, maybe his delusion is that his zombieness is obvious - so he thought self_made_human and the other medical staff were only trying to be polite, or indeed gaslight him for some nefarious purpose, when they insisted that as far as they could tell he was a normal living man with psychosis.
So his reasoning could have been: "these guys want me to take these drugs, and they say it's to cure my psychosis. but I'm obviously a zombie. they don't want to acknowledge it, but they can see it and smell it plain as day. so these drugs can't be for what they say they're for, and I should refuse them. oh wait! you say they're just to make my rotting flesh less stinky? that's what you were too polite to mention? riiight, I see why you tried to lie to me about 'psychosis', but no need to spare my feelings, I can face facts. gimme the pills."
Anyway, there's a lot of breadth in "regular people who believe things". They're not like rational people who believe things, but lots of "regular" people believe utter nonsense for irrational reasons.
Wild. As they say, "It doesn't take all kinds, but we've got 'em."
It's difficult to say, but I think that at a minimum, people who debate online will prefer NOT to debate against bots. I base this on analogizing the situation to online chess, which has a problem with cheaters: most human chess players prefer to play against other humans.
I'm speculating a bit, but I think that if (1) large numbers of bots start being unleashed in online discussions (which seems very likely to me since people are motivated to want to make it seem as though there is a lot of support for their position); and (2) it becomes difficult to distinguish bots from humans (which seems likely because technology is always improving); then (3) most natural persons will simply give up and we'll end up with a sort of dead internet of discussion boards.
I actually do prefer debating LLMs at the moment, and they're usually my first port of call if I want to work through an issue.
On the internet, if I go through the effort of posting something, there's a high likelihood it just gets ignored, downvoted with no counterargument, sneered at, tagged with thumbs down emoji or whatever. This is feedback, of a type, but I can't do much with it, and it doesn't help.
Meanwhile, I can go to the LLM and be like: "argue with me in a way that is maximally convincing to someone like me", and they will actually argue in good faith. They're sycophantic, of course, but they'll happily take the other side of an argument, and you can misrepresent your actual opinion to double check.
There's also the convenience factor, and not imposing on anyone else's time. Chess bots are nice for the same reason.
Well, I think you are about to get your preference granted in spades :)
I already find it hard to be motivated to debate, but not because of bots. There are so many (human) comments that whatever point I make 1) has probably been made before, 2) will be drowned in noise, and 3) boils down to value/opinion (silly example: "I believe the government should subsidize wheat, tomatoes, and dairy farming" because I really like Italian food; or I think the world works like X, you think it works like Y, but these models are so abstract and distant that neither of us can really prove them).
One motivation to still debate is that it trains my brain to reason and persuade, which would happen even if I were debating a bot. But another is that, even among the noise, I still have some audience. Maybe if bots cause people to revert to smaller, private communities, I'll feel like I have an audience again.
E: another motivation to debate (and post) is to learn facts and interesting perspectives from replies. In this way, bot replies are good iff they present uncommon facts and perspectives. Unfortunately, at present most LLMs seem to have a similar way of thinking, which is also similar to the (common) zeitgeist. Also unfortunately for internet discourse, even if LLMs do provide interesting replies, they don't motivate public posts unless said posts are subsidized.