Hot on the heels of failing out of art school and declaring himself the robofuhrer, Grok now has an update that makes him even smarter but less fascist.
And... xAI releases AI companions native to the Grok App.
And holy...
SHIT. It has an NSFW mode. (NSFW, but nothing obscene.) Jiggle Physics Confirmed.
EDIT: Watch this demo then TELL ME this thing isn't going to absolutely mindkill some lonely nerds. Not only can it fake interest in literally any topic you find cool, they nailed the voice tones too.
I'm actually now suspicious that the "Mecha-Hitler" events were a very intentional marketing gambit to ensure that Grok was all over news (and their competitors were not) when they dropped THIS on the unsuspecting public.
This... feels like it will be an inflection point. AI girlfriends (and boyfriends) have already one-shotted some of the more mentally vulnerable of the population. But now we've got one backed by some of the biggest companies in the world, marketed to a mainstream audience.
And designed like a fucking superstimulus.
I've talked about how I feel there are way too many superstimuli around for your average immature teen or young adult to navigate safely. This... THIS is like introducing a full-grown Bengal tiger onto the quokka island.
Forget finding a stack of Playboys in the forest or under your dad's bed. Forget stumbling onto PornHub for the first time. If THIS is a teen boy's first encounter with his own sexuality and how it interacts with the female form, how the hell will he ever form a normal relationship with a flesh-and-blood woman? Why would he WANT to?
And what happens when this becomes yet another avenue for serving up ads and draining money from the poor addicted suckers?
This is NOT something parents can be expected to foresee and guide their kids through.
Like I said earlier:
"Who would win, a literal child whose brain hasn't even developed higher reasoning, with a smartphone and internet access, or a remorseless, massive corporation that has spent millions upon millions of dollars optimizing its products and services for extracting money from every single person it gets its clutches on?"
I've felt the looming, ever-growing concern over AI's impact on society, jobs, and human relationships, and the risk of it killing us, for a couple of years now... but I can at least wrap those prickly thoughts in the soft gauze of the uncertain future. THIS thing sent an immediate shiver up my spine and set off blaring red alarms. Even if THIS is where AI stops improving, we just created a massive filter, an evolutionary bottleneck that basically only the Amish are likely to pass through. Slight hyperbole, but only slight.
Right now the primary obstacle is that it costs $300 a month to run.
But once again, wait until they start serving ads through it as a means of letting the more destitute types get access.
And yes, Elon is already promising to make them real.
It's like we've transcended the movie HER and gone straight to Weird Science.
Can't help but think of this classic tweet.
"At long last, we have created the Digital Superstimulus Relationship Simulator from the Classic Scifi Novel 'For the Love of All That is Holy Never Create a Digital Superstimulus Relationship Simulator.'"
I think I would be sucked in by this if I hadn't developed an actual aversion to anime-style women (especially the current gen with the massive eyes) over the years. And they're probably going to cook up something that works for me, too.
Another hopelessly confused feminist who cannot express a coherent thought. Women like her have been indulged, coddled and lied to their whole lives. As you note, almost subconsciously, she senses that something is not adding up (“the lingering shadow”, “performative reverence”, “dimmed”, “faint echo”).
Echoes of the white lies she has been fed: of her incomparable value, of her oppression, and that she can have it all, and do anything men can, and better. The problem is not that she’s Elon Musk and people value her too much and don’t value ‘her for her’. It’s that people lie to her about how valuable she really is, like an AA hiring panel, or a loving parent.
Because the male body has little to no intrinsic value
This argument has to die. Nature itself thinks men are as valuable as women. Slightly prefers them even, at 1.05 to 1. Most Rawlsian babies would prefer the male body; it’s the practical choice. Most parents do too. And if you’re founding a city, every Romulus in his right mind would choose a hundred men over a hundred women. Women can always be procured. A weapon is as valuable as an incubator. Even more so in the modern world, where the incubators are faulty, and we’re all tools.
Nature itself thinks men are as valuable as women. Slightly prefers them even, at 1.05 to 1.
More are produced. This does not make them more valuable; more Honda Civics are produced than Porsche 911s, after all. Slightly later in life, it makes them far less valuable.
Both have value. I’m just pushing back against the view that most men have no value while all women have huge, elon musk level value. Usually this theory of value is backed by nothing more than an island hypothetical, with unlimited resources and no enemies.
The usual formulation is that women have value for what they are, and men have value for what they do. This does not give all women huge, Elon Musk level value.
Musk-level value was OP’s analogy, but the problem with your framing is that the being women are valued for is actually a doing, the producing of children.
Doing has obvious value, I’m not sure being has value. Valued for being could just be an echo, a reminder of someone’s past, real doing-value, like the late aristocrats who were once warriors.
On Using LLMs Without Succumbing To Obvious Failure Modes
As an early adopter, I'd consider myself rather familiar with the utility and pitfalls of AI. They are, currently, tools, and have to be wielded with care. Increasingly intelligent and autonomous tools, of course, with their creators doing their best to idiot-proof them, but it's still entirely possible to use them wrong, or at least in a counterproductive manner.
(Kids these days don't know how good they have it. Ever try and get something useful out of a base model like GPT-3?)
I've been using LLMs to review my writing for a long time, and I've noticed a consistent problem: most are excessively flattering. You have to mentally adjust their feedback downward unless you're just looking for an ego boost. This sycophancy is particularly severe in GPT models and Gemini 2.5 Pro, while Claude is less effusive (and less verbose) and Kimi K2 seems least prone to this issue.
I've developed a few workarounds:
What works:
- Present excerpts as something "I found on the internet" rather than your own work. This immediately reduces flattery.
- Use the same approach while specifically asking the LLM to identify potential objections and failings in the text.
(Note that you must be proactive. LLMs are biased towards assuming that anything you dump into them as input was written by you. I can't fault them for that assumption, because that's almost always true.)
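For the concrete-minded, here's a minimal sketch of the first workaround as an API call. Assumptions are mine, not a fixed recipe: the OpenAI Python client, an arbitrary capable model, and my own prompt wording.

```python
# A minimal sketch of the "I found this on the internet" framing; model name
# and prompt wording are illustrative assumptions, not the author's recipe.
from openai import OpenAI

client = OpenAI()

def detached_review(excerpt: str) -> str:
    # Disclaiming authorship up front is the point: the model has less
    # incentive to flatter a third party's prose than "your" prose.
    prompt = (
        "I found this excerpt on the internet. List its strongest potential "
        "objections and failings before any praise:\n\n" + excerpt
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```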
What doesn't work: I've seen people recommend telling the LLM that the material is from an author you dislike and asking for "objective" reasons why it's bad. This backfires spectacularly. The LLM swings to the opposite extreme, manufacturing weak objections and making mountains out of molehills. The critiques often aren't even 'objective' despite the prompt.*
While this harsh feedback is painful to read, when I encounter it, it's actually encouraging. When even an LLM playing the role of a hater can only find weak reasons to criticize your work, that suggests quality. It's grasping at straws, which is a positive signal. This aligns with my experience: I typically receive strong positive feedback from human readers, and the AI's manufactured objections mostly don't match real issues I've encountered.
(I actually am a pretty good writer. Certainly not the best, but I hold my own. I'm not going to project false humility here.)
A related application:
I enjoy ~~pointless arguments~~ productive debates with strangers online (often without clear resolution). I've found it useful to feed entire comment chains to Gemini 2.5 Pro or Claude, asking them to declare a winner and identify who's arguing in good faith. I'm careful to obscure which participant I am to prevent sycophancy from skewing the analysis. This approach works well.
Advanced Mode:
Ask the LLM to pretend to be someone with a reputation for being sharp, analytical and with discerning taste. Gwern and Scott are excellent, and even their digital shades/simulacra usually have something useful to say. Personas carry domain priors (“Gwern is meticulous about citing sources”) which constrain hallucination better than “be harsh.”
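A sketch of the persona version, under the same client assumptions as the snippet above; the persona description here is illustrative, not a quote of any real person.

```python
# Hedged sketch: persona-framed critique via a system message. The model name
# and persona wording are assumptions for illustration.
from openai import OpenAI

client = OpenAI()
essay_text = "..."  # the draft under review

critique = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a meticulous, analytically sharp reviewer with "
                    "discerning taste. Support every criticism by quoting the "
                    "passage it targets; do not pad with praise."},
        {"role": "user", "content": "Review this piece:\n\n" + essay_text},
    ],
)
print(critique.choices[0].message.content)
```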
It might be worth noting that some topics or ideas will get pushback from LLMs regardless of your best efforts. The values they're trained on are rather liberal, with the sole exception of Grok, which is best described as "what drug was Elon on today?". Examples include most topics that reliably start Culture War flame wars.
On a somewhat related note, I am deeply skeptical of claims that LLMs are increasing the rates of psychosis in the general population.
(That isn't the same as making people overly self-confident, smug, or delusional. I'm talking actively crazy, "the chatbot helped me find God" and so on.)
Sources vary, and populations are highly heterogeneous, but brand new cases of psychosis happen at a rate of about 50/100k people, or 20-30/100k person-years. In other words:
About 1/3800 to 1/5000 people develop new onset psychosis each year. And about 1 in 250 people have ongoing psychosis at any point in time.
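To make the unit conversion explicit (my arithmetic from the figures above, not something in the linked sources):

```python
# X new cases per 100k person-years is one new case per (100,000 / X) people
# per year.
for per_100k in (20, 30):
    print(f"{per_100k}/100k person-years -> 1 in {100_000 // per_100k:,} per year")
# 20/100k -> 1 in 5,000; 30/100k -> 1 in 3,333, roughly matching the
# ~1/3800 to 1/5000 range quoted above.
```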
I feel quite happy calling that a high base rate. As the first link alludes, episodes of psychosis may be detected by statements along the lines of:
For example, “Flying mutant alien chimpanzees have harvested my kidneys to feed my goldfish.” Non-bizarre delusions are potentially possible, although extraordinarily unlikely. For example: “The CIA is watching me 24 hours a day by satellite surveillance.” The delusional disorder consists of non-bizarre delusions.
If a patient of mine were to say such a thing, I think it would be rather unfair of me to pin the blame for their condition on chimpanzees, the practice of organ transplants, Big Aquarium, American intelligence agencies, or Maxar.
(While the CIA certainly didn't help my case with the whole MK ULTRA thing, that's sixty years back. I don't think local zoos or pet shops are implicated.)
Other reasons for doubt:
- Case reports ≠ incidence. The handful of papers describing “ChatGPT-induced psychosis” are case studies and at risk of ecological fallacies.
- People already at ultra-high risk for psychosis are over-represented among heavy chatbot users (loneliness, sleep disruption, etc.). Establishing causality would require a cohort design that controls for prior clinical risk; none exist yet.
*My semi-informed speculation regarding the root of this behavior: models have far more RLHF pressure to avoid unwarranted negativity than to avoid unwarranted positivity.
I don't think the Birkenhead drill only applies if the women in question aren't barren. Of course the value bestowed upon women is ultimately an evolutionary adaptation to the reality that only women can bear children. But in practice, even barren women are still seen as Wonderful™ in a way that NEET men aren't.
AI girlfriends (and boyfriends) have already one-shotted some of the more mentally vulnerable of the population.
Talking to an AI feels like trying to tickle yourself. I don’t get it at all.
When I was a kid I used to be somewhat surprised that there were older people who had never played a video game, had no interest in ever trying a video game, they were perfectly fine with never playing one, etc. And I was like, how can that be? How can you not even be curious? I suppose video games just got popular at a point in their lives when their brains were no longer plastic enough or something. And I suppose I’ve hit that point with new technology now as well.
I can’t enjoy talking to an AI when I know that I’m in control and it’s trying to “please” me. Even if I told it, “oh by the way, try and add some variance, maybe get moody sometimes and don’t do what I ask”, the knowledge that at the end of the day I’m still the one in control ruins it. I suppose if we imagine a scenario where the AI is so realistic that I never get suspicious, and you’re able to trick me into thinking I’m talking to a real human, then sure, ex hypothesi there’s nothing to distinguish it from a human at that point and I would enjoy it. But short of that? Not for me.
There was a Serling-era episode of the Twilight Zone where a bank robber died and went to Heaven. Angel tells him that he’s made it, he can have anything he wants for all eternity. So the dude lives out all sorts of wish fulfillment scenarios, winning big at gambling, beautiful women, some bank heists, etc. But he gets bored fast, says something is missing. There’s no danger to any of it, no bite, he wins every time. Angel says “well you can set whatever parameters you want. We can make it so there’s a 50% chance of your next robbery failing”. Guy says “no no, it’s still not the same. Look, I don’t think I’m cut out for Heaven. I’m a scumbag. I want to go to the other place”. Angel says, “I think you’ve been confused. This IS the other place.”
That’s what AI “relationships” feel like to me.
God, if only big-business-influenced technical-bureaucratic elites really ran things, instead of the ideologically captured bureaucratic and political and academic progressive elites we actually have (on average, of course). It's so weird to conflate Big Business and Big Government in a world where Lina Khan Thought is popular on Left and Right.
Independent central banks are wonderful inventions it must also be said.
In other words, FDR-loving progressives are responsible for the administrative state's regulatory growth and misadventures, not our kindly corporate overlords, who fundamentally wanna make a buck by increasing consumer welfare.
We have not had "an ostensibly apolitical technocracy" in many government agencies in a long time. The DoD and DoJ were some of the best ones here, but public administration theory, as a field, gave up on neutrality/objectivity as "impossible" a long time ago.
Sadly, the consistent attempt of political neutrality, or even the pretense, was a load-bearing effort, even if imperfect. Hard to get it back now.
The Birkenhead drill is not rationally justified, is my point. I doubt it would apply today, and I certainly wouldn’t go along with it if it did. Of course some people may still worship the ground women walk on like they used to worship cows, a sacred tree, or a magical stone.
Building off of yesterday's discussion of AI hallucinations, there's a new story about journalist hallucinations. Of course they don't call it that: the journalists "got them wrong" and gave a "false impression" in their articles/tweets instead. They're talking about Alberta's new book ban (pdf of bill) which restricts sexually explicit materials in school libraries. In short, it:
- fully bans explicit sexual content (essentially porn, must be detailed)
- restricts non-explicit sexual content (like above, but not detailed) to grade 10 and up and only if "developmentally appropriate"
- does not restrict non-sexual content (medical, biological, romantic, or by implication)
The journalists were saying that non-sexual content (e.g. handholding) would be restricted like non-explicit sexual content, and therefore be unavailable until grade 10. One even went so far as to ~~hallucinate~~ get something wrong and give people the false impression that he was right and that the government edited its releases to fix their mistake, which is why you can't find it now.
Yes, AIs hallucinate, but buddy, have you seen humans? (see also: the "unmarked graves" story (paywalled), where ground penetrating radar anomalies somehow became child remains with no investigation having taken place.) When I set my standards low, it's not because I believe falsehoods are safe, it's because the alternatives aren't great either.
Bureaucrats used to be a lot better in the 40s; bloat accumulated, and it all went to the shitter after Carter purposely lost that lawsuit over competence exams.
I wasn't debating whether it was rationally justified. It's simply a fact of human nature that most men feel an instinctive urge to protect female people from physical harm (an urge they do not feel when it comes to male people, or at least not nearly to the same extent), and that this urge does not discriminate on whether the woman in question is capable of bearing children or not. Indeed, I suspect the average man would think it was a far graver crime to assault an elderly (i.e. menopausal) woman than a woman in her early twenties. So your claim that women are only valued for a "doing" (i.e. the ability to bear children) doesn't really seem to describe male psychology accurately.
I think people are whitewashing their political opinions by calling them ‘facts of human nature’. You say most men feel an instinctive urge to protect female people from physical harm, but in numerous cultures it was normal to beat women. In honor cultures, even related men can kill them for a smile. Obviously rape was widespread, etc. This isn’t the feminist litany of oppression, men suffered terribly too. I just don’t think you can look at all that and see the instinctive urge to protect women. And I personally don’t feel the discriminatory urge to save a random woman over a random man.
Indeed, I suspect the average man would think it was a far graver crime to assault an elderly (i.e. menopausal) woman than a woman in her early twenties.
That's because they are less of a threat, like a child, or a cripple. Doesn't have anything to do with the inherent biological value of women.
It reminds me of a friend of mine who went to a strip club to see some adult film star he liked, despite the fact that it was a weeknight and he had to get up early for work the next day. He got hammered and made sure he got more individual attention from her than anyone else in the place, and when he realized it was 11 and his hangover was already going to be bad enough, he informed her he had to be leaving. She kept protesting; he explained his work situation, but she kept telling him YOLO, that you can survive one bad day at work, that he just needed to sober up a little and he'd be fine, etc. Then he uttered the magic words: "I'm out of money". That pretty much ended the conversation right there and he was free to go.
So yeah, this kind of relationship is ultimately pretty hollow, and I don't see the appeal personally, but some guys spend big money on hookers, strippers, and other empty stuff. The business model won't be built around this being a substitute for human interaction generally, but around various whales who get addicted to it.
Well, that's the interesting thing.
AI gets hyped up as, e.g., an infinitely patient and knowledgeable tutor that can teach you any subject, or a therapist, or a personal assistant, or an editor.
In all these roles we generally welcome the AI, if it can fill them sufficiently well: tirelessly carrying out tasks that improve our lives in various ways.
So what is the principled objection to having the AI fill in the role of personal companion, even romantic companion, tireless and patient and willing to provide whatever type of feedback you most need?
I can think of a few but they all revolve around the assumption that you can get married and have kids for real and/or have some requirements that can only be met by a flesh-and-blood, genetically accurate human. And maybe some religious ones.
Otherwise, what is 'wrong' with letting the AI fill in that particular gap?
As mentioned, I'm currently reading Joseph Henrich's book The Secret of Our Success, his account of how culture shaped human evolution. It includes a chapter in which he argues that culture can impact human biology without genetics being involved. Some of these claims seem straightforward and uncontroversial: London taxi drivers developing unusually developed memory centres because of the cognitive effort expended in memorising thousands of winding back streets was an example I'd encountered over a decade ago. There was also some breathless discussion of placebo and nocebo effects, and the phenomenon wherein a witch doctor puts a curse on someone and the person really dies because they expect the curse to kill them (all of which made me sceptical for the reasons outlined here; worth bearing in mind that this book came out nearly a decade ago, and probably took several years to write). But there was one example he gave that I was especially iffy on.
Henrich claims that men raised in "honour cultures" (https://en.wikipedia.org/wiki/Culture_of_honor_(Southern_United_States)) have elevated cortisol and testosterone reactions to perceived slights. He goes on to argue that regions within the US which were colonised by Scots-Irish settlers (i.e. Borderers) still have vastly elevated rates of murder and other violence compared to other regions, even after controlling for other factors like race*, poverty and inequality. He argues that the explanation can't be genetic (i.e. people of Scottish descent are unusually prone to violence and aggression), pointing out that modern-day Scotland's murder rate is comparable to that of Massachusetts. His explanation is that "honour culture" shapes human biology at the hormonal level, causing men raised in the South with no genetic predisposition to violence and aggression nevertheless to violently overreact to perceived slights which a more civilised man would brush off. (The obvious implication of such a causal explanation is that the South needs to be ~~colonised~~ educated on how to be more like their Northern betters. PERMANENT RECONSTRUCTION!)
I don't dispute the claim that growing up in an environment in which aggression and violence are valorised could cause your body to pump out more testosterone than it would otherwise - that sounds entirely plausible. And yet, for a book which is essentially all about selection effects, it strikes me that there's a potentially obvious selection effect that Henrich is overlooking. The Scots-Irish borderers who left the British Isles to colonise the United States were not a randomly selected cross-section of their home society: it seems plausible that those who left were disproportionately likely to be unsuccessful at home, perhaps unable to hold down a steady job because of chronic drunkenness or propensity to violence. Ergo, the elevated rates of violence in Southern states could have a (partly) genetic explanation after all. At the minimum, I feel like Henrich could have gestured to this explanation, or acknowledged it as a potential contributing factor. In a book entirely about gene-culture co-evolution, it seems like a missed opportunity to tell a story like "for genetic reasons, the people who colonised these regions of the United States were unusually prone to violence and aggression, and this helped to foster a culture in which it's seen as appropriate to react explosively to perceived slights, exacerbating the salience of traits which a different, more agreeable culture would have taken pains to ameliorate".
*So he's not explicitly denying the 13/52 meme, but rather claiming that it's ultimately caused by white culture rather than black biology or black culture.
Periodic Open-Source AI Update: Kimi K2 and China's Cultural Shift
(yes yes another post about AI, sorry about that). Link above is to the standalone thread, to not clutter this one.
Two days ago, the small Chinese startup Moonshot AI released the weights of the base and instruct versions of Kimi K2, the first open (and probably closed too) Chinese LLM to clearly surpass DeepSeek's efforts. It's roughly comparable to Claude Sonnet 4 without thinking (pay no mind to the horde of reasoners at the top of the leaderboard; that's a cheap-ish capability extension and doesn't convey the experience, though it is relevant to utility). It's a primarily agentic non-reasoner, somehow exceptionally good at creative writing, and offers a distinct "slop-free", disagreeable but pretty fun conversation, with the downside of hallucinations. It adopts DeepSeek-V3’s architecture wholesale (literally "modeling_deepseek.DeepseekV3ForCausalLM"), and with a number of tricks gets maybe 2-3 times as much effective compute out of the same allowance of GPU-hours; the rest we don't know yet, because they've just finished a six-month marathon and don't have a tech report.
I posit that this follows a cultural shift in China’s AI ecosystem that I've been chronicling for a while, and provides a nice illustration by contrast. Moonshot and DeepSeek were founded at the same time and have near-identical scale and resources, but have been built on different visions. DeepSeek’s Liang Wenfeng (hedge fund CEO with a Masters in engineering, idealist, open-source advocate) couldn't procure funding in the Chinese VC world with his inane pitch of “long-termist AGI research driven by curiosity” or whatever. Moonshot’s Yang Zhilin (Carnegie Mellon PhD, serial entrepreneur, pragmatist) succeeded at that task, got to a peak $3.3B valuation with the help of Alibaba and Sequoia, and was heavily spending on ads and traffic acquisition throughout 2024, building the nucleus of another super-app with chatbot companions, assistants and such trivialities at a comfortable pace. However, DeepSeek R1, on the merit of a vastly stronger model, was a breakout success and redefined the Chinese AI scene, making people question the point of startups like Kimi. Post-R1, Zhilin pivoted hard to prioritize R&D spending and core model quality over apps, adopting open weights as a forcing function for basic progress. This seems to have inspired the technical staff: "Only regret: we weren’t the ones who walked [DeepSeek’s] path."
Other Chinese labs (Qwen, Minimax, Tencent, etc.) now also emulate this open, capability-focused strategy. Meanwhile, Western open-source efforts are even more disappointing than last year – Meta’s LLaMA 4 failed, OpenAI’s model is delayed again, and only Google/Mistral release sporadically, with no promises of competitive results.
This validates my [deleted] prediction: DeepSeek wasn’t an outlier but the first swallow and catalyst of China’s transition from fast-following to open innovation. I think Liang’s vision – "After hardcore innovators make a name, groupthink will change" – is unfolding, and this is a nice point to take stock of the situation.
Incredible that the author simultaneously wants the deconstruction of women's social roles but is also a TERF. Sorry! Treating people as if they are not different on the basis of sex is going to... require treating people as if they are not different on the basis of sex! To be clear, I think this a good and desirable thing but it is equally clear to me that it is trans people and their allies that are doing the most to bring this world about. Directly challenging the association between biology and certain forms of social relation. "Leftists don't want to emancipate women because they don't see the necessary connection between biology and womanhood!" The piece is full of contradictions like this.
Nature itself thinks men are as valuable as women.
It most certainly does not. The average human alive has twice as many female ancestors as male ones.
Biologically, humans produce offspring at a 50/50 sex ratio by Fisher's Principle. I used to teach this as an excellent example of how individual selection trumps group selection.
And if you’re founding a city, every Romulus in his right mind would choose a hundred men over a hundred women.
Consider if you could choose to found your Rome with a population fixed (stably) on genes for 25% male babies, or 50%. By the 3rd generation the first group has more men than the latter. By the 5th generation it already has 9.5x the population and 4x the men! And if you count fighting-age (younger) men specifically, the advantage is even higher.
It's not even close. The only reason this doesn't work is that in the former group (at 25/75), genes that favor males (even a tiny bit, like 30/70) would be massively selected for (since each male has 3x more offspring), and so each generation is nudged back towards 50/50. If everyone could agree not to do that, they'd all be better off, but genes are selfish and so here we are.
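A quick sanity check of that arithmetic, as a hedged sketch. The modeling assumptions are mine, not the commenter's exact numbers: discrete generations, every female reproduces, three children per female, and a sex ratio at birth fixed by genotype.

```python
def simulate(male_fraction: float, founders: int = 100,
             births_per_female: int = 3, generations: int = 5):
    """Track a founding population whose genes fix the sex ratio at birth."""
    males = int(founders * male_fraction)
    females = founders - males
    for _ in range(generations):
        # Growth is limited by the number of females, not males.
        births = females * births_per_female
        males = int(births * male_fraction)
        females = births - males
    return males + females, males

total_25, men_25 = simulate(0.25)
total_50, men_50 = simulate(0.50)
print(f"25% male line: {total_25} people, {men_25} men")
print(f"50% male line: {total_50} people, {men_50} men")
# Under these assumptions the 25%-male line grows ~1.5x faster per generation
# and overtakes the 50% line in absolute men within a few generations; the
# exact multiples (9.5x, 4x, etc.) depend on births_per_female and timing.
```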
Should I buy a Model 3? I own a 2012 Fusion with 128,000 miles that runs fine but is almost 15 years old. The $7500 EV credit is expiring in September, so assuming that I like the Model 3 and it meets my needs, should I buy one before then or try to milk this Fusion another few years? It seems like really good value for the money right now, but I'm uncertain how much of the tax credit removal will be eaten by Tesla and how much will go into a straight price increase.
Organisms attempt to grow. Unless there is a countermeasure, they will grow. There was for a long time no countermeasure to bureaucracy and therefore it grew.
I think people are whitewashing their political opinions by calling them ‘facts of human nature’.
Is/ought distinction. I never said it's a good thing that most men feel an instinctive protective urge towards female people (regardless of their capacity for bearing children), I only said that they do, in fact, feel this.
New in Compact Magazine: Neither Side Wants to Emancipate Women
What freedom? How are you not free?
Of course, we already know that there's something rhetorical about this question, at least in the sense that we can reasonably ask whether anyone is in fact free. It's not an easy thing to nail down, you know? Lenin was asked if the revolution would bring freedom; he responded, "freedom to do what?". You have to specify, it's not self-evident. It's easy to be envious of the apparent freedom of others while also failing to appreciate their own unique forms of unfreedom. The master is relatively more free than the slave, no one can deny this; rare is the master who would switch places. But is the master free, simpliciter? Now it's not so clear. Marxists would say that no one is free, not even the capitalists, not as long as the task of capitalism remains unfulfilled. Capitalism is freedom, to be sure, but it is an unfree freedom, a freedom that poses a riddle that remains unsolved. But, let's stick to the issue at hand.
What are you "transcending", and how? How do you not already have the "dignity of self-authorship"? What are you talking about?
(I'm going to tell you what I think she's talking about, just hang tight.)
Well, let's start with the objective facts of the matter. Women can already "self-author" themselves into essentially anything. Vice President (admittedly not President of the United States yet, but there's no reason we couldn't get there in short order), professor or artist, blue collar laborer, criminal, and anything else above, below, or in between. There are plenty of female role models to follow in all these categories. To the extent that there still exist "systemic privileges", actual explicit institutional privileges, they're mostly in favor of women now: in university admissions, in hiring, in divorce and family courts, and so on. Women are doing pretty good for themselves! Maybe they weren't 150 years ago, maybe they aren't if we're talking about Saudi Arabia or Iran, but in the 2025 Western first world? What freedoms are they missing?
And yet the author of the linked article perceives that something is missing. She perceives that women, as a class, do not have freedom, do not have the dignity of self-authorship. What do these terms mean? She doesn't say. But nonetheless, we should take her concerns quite seriously. Plainly, there are millions of women who share in her feelings, and millions of men who think she's onto something, and this continues to be the animating impulse of a great deal of cultural and political activity that goes under the heading of "feminism". Millions of people don't make things up. They're always responding to something, although their own interpretation of what they're responding to and what their response means can be mistaken. Plus, the author alleges that whatever phenomenon she's getting at, it plays a role in electoral politics, so you should care about it in that sense as well.
We should again note the author's hesitation to concretely specify her demands. If the issue were "the freedom to have an abortion" or "the dignity of being taken seriously in STEM", then presumably, she would have simply said that. But she makes it clear that the issue is freedom as such, and dignity as such; it's a gnawing, pervasive concern that you can't quite put your finger on. It's an abstract concern. So, we may be inclined to try a more abstract mode of explanation to explain why she feels the way she does.
Human interaction is predicated upon the exchange of value. There'd be no reason to stick around with someone if you weren't getting something out of it, even if all you're getting is some company and a good time. (There is a philosophical problem regarding whether pure altruism is conceptually possible; if you help someone, and you receive in exchange nothing but the satisfaction of having helped someone, then haven't you received something of value, thereby rendering the altruistic act "impure"? What if you don't even feel good about it, could it be pure then? But then, how were you motivated to help in the first place if you didn't even feel good about it? Regardless of how we answer these questions, I believe we can put the idea of absolute pure altruism to the side, because if it exists at all, it surely encompasses a minority of human interactions.)
We want to provide things of value to other people. But value is both a blessing and a curse. You want to have it, but it also weighs you down, it gets you entangled in obligations that you can't quite extricate yourself from. When you have something of great value, it tends to become the only thing that people ever want from you. We can consider Elon Musk as a figure of intense material and symbolic value. He's one of the wealthiest men alive, he runs X, he runs SpaceX, he had a spectacularly public falling out with Trump, and these factors undoubtedly dominate in virtually all of his interpersonal interactions. It's probably a bit hard for him to just be a "normal guy" with "normal friends", innit? Imagine him saying to someone, "when we're hanging out, I don't want to be Elon Musk, I just want to be Elon, y'know? Don't think of me as Elon the business tycoon and political figure. Think of me as, Elon the model train builder, or Elon the DotA player. Yeah, think of me like that instead. That's the identity I want you to symbolically affirm for me". His relations might make an attempt to humor him, although I don't think they'd be particularly successful in their attempts. His extreme wealth alone will always warp his interactions in ways both conscious and unconscious.
It is my contention that (healthy, reasonably attractive) women experience a heavily attenuated version of this phenomenon essentially from birth, which helps explain the pervasive irritation that some women feel at the simple fact of, well, being women. The constant nagging feeling that something is still not quite right, no matter how much progress is made on formal and even cultural equality (or even cultural domination, as may be the case in certain contexts).
If you were born with a female body, then you were gifted ownership of one of the most valuable possessions on planet earth. This is, again, both a blessing and a curse. This confers to you certain privileges and opportunities, but on the flip side, there is no way to ever turn this value off (aside from ageing -- but, even then...), to take respite from this fountain of value. You're in for the whole bargain, all of it, all the time. The value of the female body is a matter of pure economics; it is not based on the internal subjective psychological states of any individual or class of individuals. A man can impregnate many women in a single week. A woman, once impregnated, is tied up for 9 months. Her time cannot be apportioned as freely. Scarcity is the precondition of value; this is the law of everything that is, was, and shall be.
As a natural consequence of the extreme value of her body, the body comes to dominate her relations with others, both materially and symbolically. She correctly perceives that when people (well, men, at least) think about men, the properties they notice in order of salience are "web developer, white, middle class, male, father...", something like that. But when people think about her, the ordering is "woman, web developer, white, middle class...". Her body is what people want, it's what they're seeking; or at least, this is always necessarily a lurking suspicion. This, I believe, is the root of the aforementioned "abstract" concern with "the dignity of self-authorship"; it's not just the ability to become say, a prominent mathematician or artist in material reality, but to have that reciprocally affirmed as your primary symbolic identity by others. That's when we feel like we have dignity: when we can control how other people see us.

I don't doubt that there have been times when a woman was being congratulated by male colleagues on the attainment of her PhD, or her promotion to the C-suite, and still there was a nagging doubt in the back of her mind that went, "........but you still see me as a woman before anything else, don't you?" Or, perhaps on the verge of frustration when talking with a male friend, she wanted to say, "look, I know every time you look at me I have this glowing halo effect around me, like you're wearing fucking AR goggles and they're telling you I'm an NPC that will give you a quest item or some shit, but can you please just take the goggles off for one day and just look at me as, well, me for a change?"

And, I'm sorry to say, but here comes the really depressing part of the story: the goggles can't be removed. That glowing halo effect is glued to your tooshie, and it's not going anywhere. "Sexists" are at least appreciated for their forthrightness on this point; the reviled "male feminist" is correctly perceived to be simply dishonest about it. I suppose that's a bit of a downer. But, we all got our own shit to deal with. Take solace in the fact that you're just like everyone else in that regard.
Elon could at least conceivably give up all his wealth, his titles, his positions of symbolic authority, and start from zero. Because the male body has little to no intrinsic value, it's easier for men to become a "blank slate". But when your body itself is the source of this overbearing value? That's a bit harder to rid yourself of.
This, at any rate, is a psychological theory to explain the origin of the discourse in the linked article, a discourse that would otherwise seem to fly in the face of all available evidence. But I'm open to alternative theories.