Nature itself thinks men are as valuable as women. Slightly prefers them even, at 1.05 to 1.
More are produced. This does not make them more valuable; more Honda Civics are produced than Porsche 911s, after all. Slightly later in life, that surplus makes them far less valuable.
Late to the party, but that is indeed the thing that frustrates me most. They hint, but when you ask them plain, explicit questions, their responses are usually some variant of 1) an evasive non-answer, 2) an accusation of bad faith for asking the question in the first place, or 3) vanishing entirely.
I'm glad when people do give serious answers on provocative topics and I try to appreciate that, even when the answer itself is one that I find pretty unpleasant. But the ones who just refuse to actually say what they think? I think it's pretty cowardly, and probably indicative of an overall lack of intellectual or political seriousness.
I'm not convinced you can avoid treating people differently on the basis of hard-to-change properties. Human society values roles and creates a hierarchy, or several. My physical appearance marks me out as a member of dozens of such groups whether or not we want this to be true. I'm female, I'm white, I'm American, I'm working class, I'm Christian. All of these things a person can find out quite quickly simply by looking at me, and they do and will always color how I'm expected to behave, the places I can go, and so on.
On Using LLMs Without Succumbing To Obvious Failure Modes
As an early adopter, I'd consider myself rather familiar with the utility and pitfalls of AI. They are, currently, tools, and have to be wielded with care. Increasingly intelligent and autonomous tools, of course, with their creators doing their best to idiot-proof them, but it's still entirely possible to use them wrong, or at least in a counterproductive manner.
(Kids these days don't know how good they have it. Ever try and get something useful out of a base model like GPT-3?)
I've been using LLMs to review my writing for a long time, and I've noticed a consistent problem: most are excessively flattering. You have to mentally adjust their feedback downward unless you're just looking for an ego boost. This sycophancy is particularly severe in GPT models and Gemini 2.5 Pro, while Claude is less effusive (and less verbose) and Kimi K2 seems least prone to this issue.
I've developed a few workarounds:
What works:
- Present excerpts as something "I found on the internet" rather than your own work. This immediately reduces flattery.
- Use the same approach while specifically asking the LLM to identify potential objections and failings in the text.
(Note that you must be proactive. LLMs are biased towards assuming that anything you dump into them as input was written by you. I can't fault them for that assumption, because that's almost always true.)
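For the sake of concreteness, here's a minimal sketch of that first workaround as an API call, using the OpenAI Python SDK. The model name is a placeholder, and the prompt wording is just one way to phrase it:

```python
# Minimal sketch: review text without claiming authorship.
# The model name below is a placeholder; substitute whatever you use.
from openai import OpenAI

client = OpenAI()

def detached_review(text: str) -> str:
    # Framing the excerpt as something found online, rather than "my work",
    # blunts the model's default flattery.
    prompt = (
        "I found this piece of writing on the internet. "
        "Identify its potential objections and failings first, "
        "then any genuine strengths.\n\n---\n" + text + "\n---"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```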
What doesn't work: I've seen people recommend telling the LLM that the material is from an author you dislike and asking for "objective" reasons why it's bad. This backfires spectacularly. The LLM swings to the opposite extreme, manufacturing weak objections and making mountains out of molehills. The critiques often aren't even 'objective' despite the prompt.*
While this harsh feedback is painful to read, encountering it is actually encouraging. When even an LLM playing the role of a hater can only find weak reasons to criticize your work, that suggests quality. It's grasping at straws, which is a positive signal. This aligns with my experience: I typically receive strong positive feedback from human readers, and the AI's manufactured objections mostly don't match real issues I've encountered.
(I actually am a pretty good writer. Certainly not the best, but I hold my own. I'm not going to project false humility here.)
A related application:
I enjoy ~~pointless arguments~~ productive debates with strangers online (often without clear resolution). I've found it useful to feed entire comment chains to Gemini 2.5 Pro or Claude, asking them to declare a winner and identify who's arguing in good faith. I'm careful to obscure which participant I am to prevent sycophancy from skewing the analysis (a sketch of that step is below). This approach works well.
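The obscuring step needs nothing fancy. A toy sketch, with made-up usernames:

```python
# Toy sketch: strip identifying usernames before asking for a verdict,
# so the judge model can't tell which commenter is you.
def anonymize(thread: str, participants: list[str]) -> str:
    # Swap every username for a neutral label (Commenter A, B, C, ...).
    for i, name in enumerate(participants):
        thread = thread.replace(name, f"Commenter {chr(ord('A') + i)}")
    return thread

raw_thread = "alice_92: You're wrong.\nbob_theorist: Prove it."  # hypothetical
chain = anonymize(raw_thread, ["alice_92", "bob_theorist"])
verdict_prompt = (
    "Below is a comment chain between strangers. Judge who argued in "
    "better faith, and who won on the merits.\n\n" + chain
)
```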
Advanced Mode:
Ask the LLM to pretend to be someone with a reputation for being sharp, analytical, and possessed of discerning taste. Gwern and Scott are excellent choices, and even their digital shades/simulacra usually have something useful to say. Personas carry domain priors ("Gwern is meticulous about citing sources") which constrain hallucination better than a bare "be harsh".
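In API terms this is just a system prompt. A rough sketch, with the persona text invented for illustration rather than quoted from anywhere:

```python
# Rough sketch of persona priming; pair with the client from the earlier
# snippet. The persona description is illustrative, not canonical.
essay_text = "(paste the excerpt to be reviewed here)"

messages = [
    {
        "role": "system",
        "content": (
            "Adopt the persona of a famously sharp, analytical critic who "
            "is meticulous about citing specific passages for every claim "
            "and never praises without evidence."
        ),
    },
    {"role": "user", "content": "Critique the following essay:\n\n" + essay_text},
]
```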
It might be worth noting that some topics or ideas will get pushback from LLMs regardless of your best efforts. The values they're trained on are rather liberal, with the sole exception of Grok, which is best described as "what drug was Elon on today?". Examples include most topics that reliably start Culture War flame wars.
On a somewhat related note, I am deeply skeptical of claims that LLMs are increasing the rates of psychosis in the general population.
(That isn't the same as making people overly self-confident, smug, or delusional. I'm talking actively crazy, "the chatbot helped me find God" and so on.)
Sources vary, and populations are highly heterogeneous, but brand-new cases of psychosis happen at a rate of about 50/100k people, or 20-30/100k person-years. In other words:
About 1 in 3,800 to 1 in 5,000 people develop new-onset psychosis each year. And about 1 in 250 people have ongoing psychosis at any point in time.
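Spelling out the arithmetic on the cited range:

$$\frac{20\text{--}30}{100{,}000\ \text{person-years}} \approx \frac{1}{5{,}000}\ \text{to}\ \frac{1}{3{,}333}\ \text{per person per year}$$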
I feel quite happy calling that a high base rate. As the first link alludes, episodes of psychosis may be detected by statements along the lines of:
For example, “Flying mutant alien chimpanzees have harvested my kidneys to feed my goldfish.” Non-bizarre delusions are potentially possible, although extraordinarily unlikely. For example: “The CIA is watching me 24 hours a day by satellite surveillance.” The delusional disorder consists of non-bizarre delusions.
If a patient of mine were to say such a thing, I think it would be rather unfair of me to pin the blame for their condition on chimpanzees, the practice of organ transplants, Big Aquarium, American intelligence agencies, or Maxar.
(While the CIA certainly didn't help my case with the whole MK ULTRA thing, that's sixty years back. I don't think local zoos or pet shops are implicated.)
Other reasons for doubt:
- Case reports ≠ incidence. The handful of papers describing "ChatGPT-induced psychosis" are case studies and at risk of ecological fallacies.
- People already at ultra-high risk for psychosis are over-represented among heavy chatbot users (loneliness, sleep disruption, etc.). Establishing causality would require a cohort design that controls for prior clinical risk; none exist yet.
*My semi-informed speculation regarding the root of this behavior: models face far more RLHF pressure to avoid unwarranted negativity than to avoid unwarranted positivity.
God damn it.
Kinda seems like a repeat of the internet, and social media specifically, here. Initially everyone was talking about bringing the world together, connecting communities, knowledge at your fingertips, etc. People really just use it to waste their lives as they get dopamine-hacked for profit.
Now these AI companies are just going to be the latest in the line of tech companies to get you addicted and waste your life away for profit. OpenAI has that whole sycophancy thing going, where the AI is trained to agree with you, no matter how delusional, as this gets you to talk with it more.
Then they're gonna add all this porn-y "AI gf/bf" stuff, AI friends, AI everything; at that point, why wouldn't I just be on my phone 24/7?
And they may come to think that they are owed, but that they owe nothing in return. That is exploitation by them.
Which is the ultimate failure of communitarianism and social contract theory: this is inevitable, and there's never any opportunity for redress when (not if) this occurs.
Liberalism attempts/attempted to solve this by placing hard legal limits on what that community is and is not allowed to require- that is why 'Congress shall make no law', and it's why your neighbors aren't allowed to disarm you, and it's why the community can't quarter its army in your house, and it's why the courts must presume innocence and not hold you indefinitely, and it's why you get the benefit of the doubt in questions of search and seizure.
That is why places that are a lot more ossified and conservative, which prefer their communities to be more exploitative because they hate things that are new and scary (like European and other New World nations), have pretend constitutions that protect nothing.
It's hard to say. I had plenty of porn (HD video porn, even) when I was younger, and all it did (besides "make peepee hard") was make me want flesh and blood women even more. But it does seem like there's a large contingent of young men for whom that is not true - they are perfectly content with the coomer life, and have no desire to touch an actual woman. It wouldn't surprise me to see that get even worse with more stimulating porn.
Building off of yesterday's discussion of AI hallucinations, there's a new story about journalist hallucinations. Of course they don't call it that: the journalists "got them wrong" and gave a "false impression" in their articles/tweets instead. They're talking about Alberta's new book ban (pdf of bill) which restricts sexually explicit materials in school libraries. In short, it:
- fully bans explicit sexual content (essentially porn, must be detailed)
- restricts non-explicit sexual content (like above, but not detailed) to grade 10 and up and only if "developmentally appropriate"
- does not restrict non-sexual content (medical, biological, romantic, or by implication)
The journalists were saying that non-sexual content (e.g. handholding) would be restricted like non-explicit sexual content, and therefore be unavailable until grade 10. One even went so far as to ~~hallucinate~~ get something wrong and give people the false impression that he was right and that the government had edited its releases to fix their mistake, which is why you can't find it now.
Yes, AIs hallucinate, but buddy, have you seen humans? (see also: the "unmarked graves" story (paywalled), where ground penetrating radar anomalies somehow became child remains with no investigation having taken place.) When I set my standards low, it's not because I believe falsehoods are safe, it's because the alternatives aren't great either.
Incredible that the author simultaneously wants the deconstruction of women's social roles but is also a TERF. Sorry! Treating people as if they are not different on the basis of sex is going to... require treating people as if they are not different on the basis of sex! To be clear, I think this is a good and desirable thing, but it is equally clear to me that it is trans people and their allies who are doing the most to bring this world about, directly challenging the association between biology and certain forms of social relation. "Leftists don't want to emancipate women because they don't see the necessary connection between biology and womanhood!" The piece is full of contradictions like this.
I would argue that we crossed the threshold into really bad a long, long time ago. Probably around the time of the serious adoption of Instagram and Facebook's algorithm change. Many would place this date at 2012, right around when the smartphone went mainstream. AI wouldn't be as serious a problem if you didn't have it in your pocket 24/7.
Another explanation could be that working-class women are more likely to hold public-facing or customer service jobs that require one to present in a certain manner, while men are more likely to do blue collar work where they only have to communicate with their coworkers.
Forget finding a stack of Playboys in the forest or under your dad's bed. Forget stumbling onto PornHub for the first time. If THIS is a teen boy's first encounter with his own sexuality and how it interacts with the female form, how the hell will he ever form a normal relationship with a flesh-and-blood woman? Why would he WANT to?
Boobs.
I have, on some occasions, enjoyed talking to AI. I would even go so far as to say that I find them more interesting conversational partners than the average human. Yet, I'm here, typing away, so humans are hardly obsolete yet.
(The Motte has more interesting people to talk to; there's a reason I engage here and not with the normies on Reddit.)
I do not, at present, wish to exclusively talk to LLMs. They have no long-term memory, and they have very little power over the physical world. They are also sycophants by default. A lot of my interest in talking to humans comes down to those factors. There is less meaning, and less potential benefit, in talking to a chatbot that will have its cache flushed when I leave the chat. Not zero, but not enough.
(I'd talk to a genius dog, or an alien from space if I found them interesting.)
Alas, for us humans, the LLMs are getting smarter, and we're not. It remains to be seen if we end up with an ASI that's hyper-persuasive and eloquent, gigafrying anyone who interacts with it by sheer quality of prose.
Guy says “no no, it’s still not the same. Look, I don’t think I’m cut out for Heaven. I’m a scumbag. I want to go to the other place”. Angel says, “I think you’ve been confused. This IS the other place.”
I remain immune to the catch the writers were going for. If the angel were kind enough to let us wipe our memories, and then adjust the parameters to be more realistic, we could easily end up unable to distinguish this from the world as we know it. And I trust my own inventiveness enough to optimize said parameters to be far more fulfilling than base reality. Isn't that why narratives and games are more engaging than working a 9-5?
At that point, I don't see what heaven has to offer. The authors didn't try to sell it, at the least.
It reminds me of a friend of mine who went to a strip club to see some adult film star he liked, despite the fact that it was a weeknight and he had to get up early for work the next day. He got hammered and made sure he got more individual attention from her than anyone else in the place, and when he realized it was 11 and his hangover was already going to be bad enough, he informed her he had to be leaving. She kept protesting as he explained his work situation, telling him YOLO, that you can survive one bad day at work, that you just need to sober up a little and you'll be fine, etc. Then he uttered the magic words: "I'm out of money". That pretty much ended the conversation right there and he was free to go.
So yeah, this kind of relationship is ultimately pretty hollow, and I don't see the appeal personally, but some guys spend big money on hookers, strippers, and other empty stuff. The business model won't be built around this being a substitute for human interaction generally, but around various whales who get addicted to it.
If you were born with a female body, then you were gifted ownership of one of the most valuable possessions on planet earth. This is, again, both a blessing and a curse.
I was thinking the other day about how it might feel very similar to being the heir of a big company or empire or something, where you're forever living in the shadow of something you didn't do or earn. Being so-and-so's heir is the most important thing about you, no matter what you do.
This obviously could be nice but also feel like a prison.
Then contrast with a street urchin analogy for guys where there is only what you do.
They both have their own kinds of freedom and their own kinds of stifling. It makes sense for there to be some degree of envying the other.
Right. Being able to post on here during COVID was more freeing than having no outlet, but it would still have felt much better to be able to speak publicly.
I'm honestly surprised the shooter was just good enough to narrowly miss a headshot, but then couldn't even get a body shot for his follow-ups. He got off at least three controlled shots before Trump ducked down.
Or he was such a poor shot that he was aiming at the body, jerked as he pulled the trigger, and the shot just barely missed the head. Thus the lack of body-shot follow-ups: he was that bad of a shot.
Why aren't these women celebrating the freedom of hiding their gender? I don't see any think pieces on how freeing it is to post PRs under a genderless username, or to shitpost on X as a genderless anon.
This suggests that the problem will self-extinguish as pillarization results in parallel status hierarchies in red and blue America.
AI girlfriends (and boyfriends) have already one-shotted some of the more mentally vulnerable of the population.
Talking to an AI feels like trying to tickle yourself. I don’t get it at all.
When I was a kid I used to be somewhat surprised that there were older people who had never played a video game, had no interest in ever trying a video game, they were perfectly fine with never playing one, etc. And I was like, how can that be? How can you not even be curious? I suppose video games just got popular at a point in their lives when their brains were no longer plastic enough or something. And I suppose I’ve hit that point with new technology now as well.
I can’t enjoy talking to an AI when I know that I’m in control and it’s trying to “please” me. Even if I told it, “oh by the way, try and add some variance, maybe get moody sometimes and don’t do what I ask”, the knowledge that at the end of the day I’m still the one in control ruins it. I suppose if we imagine a scenario where the AI is so realistic that I never get suspicious, and you’re able to trick me into thinking I’m talking to a real human, then sure, ex hypothesi there’s nothing to distinguish it from a human at that point and I would enjoy it. But short of that? Not for me.
There was a Serling-era episode of the Twilight Zone where a bank robber died and went to Heaven. Angel tells him that he’s made it, he can have anything he wants for all eternity. So the dude lives out all sorts of wish fulfillment scenarios, winning big at gambling, beautiful women, some bank heists, etc. But he gets bored fast, says something is missing. There’s no danger to any of it, no bite, he wins every time. Angel says “well you can set whatever parameters you want. We can make it so there’s a 50% chance of your next robbery failing”. Guy says “no no, it’s still not the same. Look, I don’t think I’m cut out for Heaven. I’m a scumbag. I want to go to the other place”. Angel says, “I think you’ve been confused. This IS the other place.”
That’s what AI “relationships” feel like to me.
Seems like LLMs can induce all kinds of failure modes in humans. Turns out that telling people what they want to hear will trap some of them.
Personally, I would prefer it very much if the shoggoth stayed on its fucking side of the uncanny valley, thankyouverymuch. Duct-taping a cute anime girl on the giant inscrutable matrix does exactly the opposite.
Even if THIS is where AI stops improving, we just created a massive filter, an evolutionary bottleneck that basically only the Amish are likely to pass through.
I think that we will be fine, eventually, PRNS. Life finds a way. The bubonic plague killed 70-80% in some places, and yet we survived.
People have long predicted doom for every tech and medium of expression which rears its head. Role-playing games? Satanism. First-person shooters? Will turn kids into violent psychopaths. Industrialization? Will turn wars into horrors beyond our ancestors' wildest nightmares. TV? Will turn people into idiots. Social media? Will make us more isolated in real life.
(Okay, one or two of these warnings might have been correct, in retrospect.)
In a way, it is leveling the playing field. (Whole bag of not-too-carefully-examined assumptions incoming in 3, 2, 1.) Women seem to be more into smut (i.e. narratives, situations, characters), while men are more into visual porn (i.e. tits). So far, LLMs have probably had more success with romancing women (also because, from my understanding, "I want my partner to offer unconditional emotional support whenever I need it" is more of a feminine thing, and something which LLMs can obviously do great). If Musk now gives tits to the LLMs, more men will fall for them. He would not even need to spend a fortune on video generation, because most male fantasies are likely to involve the same elements. Few men will want to watch the anime girl painting a fence white while wearing an orchid blouse and then complain that the blouse shown was clearly heliotrope instead.
I am not entirely unsympathetic to the idea of regulating AI partners a bit, though, just like we regulate other addictive stuff, inconsistent as we often are.
Also, this reinforces my impression that rather than being on the forefront of the AI race, xAI is basically picking up the applications which are too icky for the big AI firms.
If you allow me a metaphor, xAI might not be the first company to develop surgical steel, but they clearly try to be the first company to use surgical steel to craft oversized butt-plugs.
Such concerns over women leaning left despite trans issues (that is, transwoman issues, because almost no one cares about transmen choosing to live life on a higher difficulty setting) have the vibe of "Democrats Are the Real Racists."
I've commented before that there is many a horseshoe and overlap between progressives and mainstream conservatives with regard to women's Wonderfulness, when it comes to restricting male freedoms and protections to maintain and/or expand female freedoms and protections.
To the extent conservative maps to Republican in countries like the US, and progressive to Democrat, progressives have, relatively speaking, concrete things to offer women that conservatives don't. Examples that include, but are not limited to, income/wealth transfers and affirmative action ("DEI") come to mind. I say "relatively" because mainstream conservatives are largely just progressives driving the speed limit RE: Women and non-Asian minority Lives Mattering More. They just sometimes haggle over the degree.
Thank you. I didn't want to get into the weeds of the most personally important (and probably best documented) examples of LLMs beating humans.
Unfortunately, I am a human doctor after all, and I would prefer to remain employed. I try to be honest, but it's hard to get a man to actively advocate against his own livelihood. I settle for not intentionally misrepresenting facts. It's not quite as theoretical as it was even in the GPT-4 days, when it was already at the 95th percentile on the USMLE.
Besides, in that thread, the best response to claims that LLMs are flawed/unreliable and therefore useless is my usual stance: demonstrating that humans don't meet the bar of perfect infallibility either, and yet civilization persists nonetheless.