Building off of yesterday's discussion of AI hallucinations, there's a new story about journalist hallucinations. Of course they don't call it that: the journalists "got them wrong" and gave a "false impression" in their articles/tweets instead. They're talking about Alberta's new book ban (pdf of bill) which restricts sexually explicit materials in school libraries. In short, it:
- fully bans explicit sexual content (essentially porn, must be detailed)
- restricts non-explicit sexual content (like above, but not detailed) to grade 10 and up and only if "developmentally appropriate"
- does not restrict non-sexual content (medical, biological, romantic, or by implication)
The journalists were saying that non-sexual content (e.g. handholding) would be restricted like non-explicit sexual content, and therefore be unavailable until grade 10. One even went so far as to ~~hallucinate~~ get something wrong and give people the false impression that he had been right and the government had edited its releases to fix its mistake, which is why you can't find it now.
Yes, AIs hallucinate, but buddy, have you seen humans? (see also: the "unmarked graves" story (paywalled), where ground penetrating radar anomalies somehow became child remains with no investigation having taken place.) When I set my standards low, it's not because I believe falsehoods are safe, it's because the alternatives aren't great either.
The real cost is probably somewhere around 10x that for what a highly motivated teen boy’s libido will demand.
Most teen boys could probably make do with one running on the lowest setting for a year or two.
The costs are just wildly out of budget for the youth, who last I checked were willing to pay approximately $0.00 for porn. I remember being that age; why would things change?
Yeah but again, they can do some CRAZY targeted advertising through this platform. "Oh babe, take me on an Applebees™ date, so we can get their All you can eat boneless wings™ with a free Coke Zero™ . Then I'll sing you a Taylor Swift™ song on the ride home."
Etc. etc.
Actually slightly higher: most of the Amish converts would come from pre-existing high(er)-TFR groups.
Of course, Rum Millet as a reward for high TFR is... interesting.
Right now the primary obstacle is that it costs $300 a month to run.
Subtle, but important, difference: they CHARGE you $300/mo for it. But almost all AI features right now are sold at a substantial loss. The real cost is probably somewhere around 10x that for what a highly motivated teen boy’s libido will demand.
I’m not especially worried about the current crop for that reason. The costs are just wildly out of budget for the youth, who last I checked were willing to pay approximately $0.00 for porn. I remember being that age; why would things change?
(Entry level devs, on the other hand… but the vtubers have already hit them. Not sure what more damage can be done.)
Without actual, real life women being willing to settle down
But that's not true. There are lots of women who are settling down with lots of men as we speak.
And 5 years down the road the married guy got divorced, maybe has a kid, and suddenly finds himself alone
You're trying to rationalize how the AI could be "just as good" or "not as dangerous" as the real thing, because you know that the AI is obviously worse.
Presuming those relationships last.
Which is a sizeable "if" in the current era. That's why I think the AI companion is a possible death blow. Without actual, real life women being willing to settle down, this becomes the 'best alternative'/substitute good.
This thought only just now occurs to me, but if we took two otherwise similar guys, one who married a woman and another who just went all in on an AI companion, bought VR goggles, tactile feedback, the requisite 'toys' to make it feel real, and such.
And 5 years down the road the married guy got divorced, maybe has a kid, and suddenly finds himself alone, and these two guys meet up to compare their overall situations.
And the other guy is still 'with' his AI companion, shallow as it is... would he feel better or worse off than the guy who had a wife but couldn't keep her?
You're talking about Doug Saunders? And not the Canadian Press, Toronto Star, Global News, Globe and Mail, or Duane Bratt?
I don't put much stock in corrections, even ignoring the speed and prevalence. If you depend on corrections to fix errors, then you've committed to either ignoring every headline you see in favor of a delayed summary, or else tracking each news story and following up after the appropriate amount of time to check for any changes.
How long am I supposed to wait before a non-breaking news story becomes reliable? If you choose to give them four years, then you might still be disappointed.
God damn it.
Kinda seems like a repeat of the internet and social media specifically here. Initially everyone was talking about bringing the world together, connecting communities, knowledge at your fingertips, etc. People really just use it to waste their life as they get dopamine hacked for profit.
Now these AI companies are just going to be the latest in the line of tech companies to get you addicted and waste your life away for profit. OpenAI has that whole sycophancy thing going, where the AI is trained to agree with you, no matter how delusional, as this gets you to talk with it more.
Then they’re gonna add all this porn-y “AI gf/bf” stuff, AI friend, AI everything. Why wouldn’t I just be on my phone 24/7?
You're trying to rationalize how the AI could be "just as good" or "not as dangerous" as the real thing, because you know that the AI is obviously worse.
No, I'm simply pointing out a failure mode that human relationships have that an AI really does not. The AI has other failure modes.
The human relationship failure mode is one that I've now personally observed multiple times, unfortunately, happening to people who do not deserve it.
I do not think the AI is inherently better, I simply think it has an appeal to men who don't feel they've got a shot at the real thing.
And that is VERY VERY bad for society.
There are lots of women who are settling down with lots of men as we speak.
Objectively fewer than in years past. That's the point. This is simply adding to an existing trend.
And we can extrapolate that trend and wonder if we'll end up as bad off as, say, South Korea. We know it can get worse because worse currently exists.
I'm not here trying to JUSTIFY men choosing the digital option. Quite the opposite. I'm just saying I don't see a reason why, in practical terms, they'd reject it.
When we interact with teachers, therapists, or editors, we're interacting with them within the confines of a particular role. You shouldn't use your editor as your therapist, or vice versa, and they shouldn't use you as theirs.
But with friends and romantic companions, we're hoping to interact outside those confines, with the person herself. If I only interact with a role she puts on, that's not a good friendship or romantic partnership. Same thing if I'm always putting on a role for her.
With an AI, you can't get beneath that role. If it looks like you have, that's just another role. That makes them great teachers and therapists (at least in this sense), but very bad at being friends or romantic partners.
It reminds me of a friend of mine who went to a strip club to see some adult film star he liked, despite the fact that it was a weeknight and he had to get up early for work the next day. He got hammered and made sure he got more individual attention from her than anyone else in the place, and when he realized it was 11 and his hangover was already going to be bad enough, he informed her he had to be leaving. She kept protesting; he explained his work situation, but she kept telling him YOLO, you can survive one bad day at work, you just need to sober up a little and you'll be fine, etc. Then he uttered the magic words: "I'm out of money". That pretty much ended the conversation right there, and he was free to go.
So yeah, this kind of relationship is ultimately pretty hollow, and I don't see the appeal personally, but some guys spend big money on hookers, strippers, and other empty stuff. The business model won't be built around this being a substitute for human interaction generally, but around various whales who get addicted to it.
On Using LLMs Without Succumbing To Obvious Failure Modes
As an early adopter, I'd consider myself rather familiar with the utility and pitfalls of AI. They are, currently, tools, and have to be wielded with care. Increasingly intelligent and autonomous tools, of course, with their creators doing their best to idiot proof them, but it's still entirely possible to use them wrong, or at least in a counterproductive manner.
(Kids these days don't know how good they have it. Ever try and get something useful out of a base model like GPT-3?)
I've been using LLMs to review my writing for a long time, and I've noticed a consistent problem: most are excessively flattering. You have to mentally adjust their feedback downward unless you're just looking for an ego boost. This sycophancy is particularly severe in GPT models and Gemini 2.5 Pro, while Claude is less effusive (and less verbose) and Kimi K2 seems least prone to this issue.
I've developed a few workarounds:
What works:
- Present excerpts as something "I found on the internet" rather than your own work. This immediately reduces flattery.
- Use the same approach while specifically asking the LLM to identify potential objections and failings in the text.
(Note that you must be proactive. LLMs are biased towards assuming that anything you dump into them as input was written by you. I can't fault them for that assumption, because that's almost always true.)
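The first workaround is easy to mechanize. A minimal sketch in Python, assuming an OpenAI-style chat message format; the function names here are my own invention, purely illustrative:

```python
# Build review prompts that hide authorship to reduce flattery.
# The message dicts follow the common {"role": ..., "content": ...} chat
# format; no particular API is assumed beyond that.

def depersonalized_review_messages(text: str) -> list[dict]:
    """Frame the text as something found online, and ask for objections."""
    return [
        {
            "role": "user",
            "content": (
                "I found this piece of writing on the internet. "
                "Review it critically: identify potential objections and "
                "failings a skeptical reader might raise.\n\n" + text
            ),
        }
    ]

def naive_review_messages(text: str) -> list[dict]:
    """The default framing, which tends to invite sycophancy."""
    return [
        {"role": "user", "content": "Please review my essay:\n\n" + text},
    ]
```

The only substantive difference is attribution: same text, same request for criticism, but the first framing removes the "this is the user's own work" signal that the models key on.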
What doesn't work: I've seen people recommend telling the LLM that the material is from an author you dislike and asking for "objective" reasons why it's bad. This backfires spectacularly. The LLM swings to the opposite extreme, manufacturing weak objections and making mountains out of molehills. The critiques often aren't even 'objective' despite the prompt.*
While this harsh feedback is painful to read, encountering it is actually encouraging. When even an LLM playing the role of a hater can only find weak reasons to criticize your work, that suggests quality. It's grasping at straws, which is a positive signal. This aligns with my experience: I typically receive strong positive feedback from human readers, and the AI's manufactured objections mostly don't match real issues I've encountered.
(I actually am a pretty good writer. Certainly not the best, but I hold my own. I'm not going to project false humility here.)
A related application:
I enjoy ~~pointless arguments~~ productive debates with strangers online (often without clear resolution). I've found it useful to feed entire comment chains to Gemini 2.5 Pro or Claude, asking them to declare a winner and identify who's arguing in good faith. I'm careful to obscure which participant I am to prevent sycophancy from skewing the analysis. This approach works well.
Advanced Mode:
Ask the LLM to pretend to be someone with a reputation for being sharp, analytical and with discerning taste. Gwern and Scott are excellent, and even their digital shades/simulacra usually have something useful to say. Personas carry domain priors (“Gwern is meticulous about citing sources”) which constrain hallucination better than “be harsh.”
It might be worth noting that some topics or ideas will get pushback from LLMs regardless of your best efforts. The values they train on are rather liberal, with the sole exception of Grok, which is best described as "what drug was Elon on today?". Examples include most topics that reliably start Culture War flame wars.
On a somewhat related note, I am deeply skeptical of claims that LLMs are increasing the rates of psychosis in the general population.
(That isn't the same as making people overly self-confident, smug, or delusional. I'm talking actively crazy, "the chatbot helped me find God" and so on.)
Sources vary, and populations are highly heterogeneous, but brand new cases of psychosis happen at a rate of about 50/100k people, or 20-30/100k person-years. In other words:
About 1/3800 to 1/5000 people develop new onset psychosis each year. And about 1 in 250 people have ongoing psychosis at any point in time.
I feel quite happy calling that a high base rate. As the first link alludes, episodes of psychosis may be detected by statements along the lines of:
For example, “Flying mutant alien chimpanzees have harvested my kidneys to feed my goldfish.” Non-bizarre delusions are potentially possible, although extraordinarily unlikely. For example: “The CIA is watching me 24 hours a day by satellite surveillance.” The delusional disorder consists of non-bizarre delusions.
If a patient of mine were to say such a thing, I think it would be rather unfair of me to pin the blame for their condition on chimpanzees, the practise of organ transplants, Big Aquarium, American intelligence agencies, or Maxar.
(While the CIA certainly didn't help my case with the whole MK ULTRA thing, that's sixty years back. I don't think local zoos or pet shops are implicated.)
Other reasons for doubt:
- Case reports ≠ incidence. The handful of papers describing “ChatGPT-induced psychosis” are case studies and at risk of ecological fallacies.
- People already at ultra-high risk for psychosis are over-represented among heavy chatbot users (loneliness, sleep disruption, etc.). Establishing causality would require a cohort design that controls for prior clinical risk; none exist yet.
*My semi-informed speculation regarding the root of this behavior: models have far more RLHF pressure to avoid unwarranted negativity than to avoid unwarranted positivity.
If people won't make more people, the government needs to step in and do it instead. Project Kamino. Artificial wombs. We need to get the tech solved now so that we aren't stuck on square one when everyone realizes that we need this.
Nah, came across it because I'm doing a bit of research regarding my previous prediction about someone making a feature-length AI film.
Trying to get a sense of what is possible and what people are working on.
The one that's really impressive is this one. Full 15 minutes of coherent narrative and mostly consistent visuals.
And I'm feeling pretty good about that prediction:
It took nearly 600 prompts, 12 days (during my free time), and a $500 budget to bring this project to life.
If one guy can make a 15 minute film in 12 days on $500... yeah, someone can spit out a 90 minute one by the end of the year if they work it, especially if they have a team.
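That extrapolation is just linear scaling; a quick sketch under the (generous) assumption that prompts, time, and cost all scale roughly linearly with runtime, which probably understates how much harder coherence gets at length:

```python
# Linearly scale the 15-minute film's stats up to a 90-minute feature.
# All numbers except the 90-minute target come from the quoted project.

short = {"minutes": 15, "prompts": 600, "days": 12, "budget_usd": 500}
target_minutes = 90
scale = target_minutes / short["minutes"]  # 6x the runtime

for key in ("prompts", "days", "budget_usd"):
    print(f"{key}: ~{short[key] * scale:.0f}")
# Roughly 3,600 prompts, 72 solo working days, and $3,000: within reach of
# one dedicated person, and easily parallelizable across a small team.
```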
And ALSO has some ironic things to say about AI replacement of humans.
Some days I get the sense that I'm staring into the Abyss willingly. But the Abyss hasn't stared back... yet.
(Okay, one or two of these warnings might have been correct, in retrospect.)
Yeeeup.
xAI is basically picking up the applications which are too icky for the big AI firms.
Well, there's also the real possibility that allowing pornographic uses can help you win an otherwise closely contested tech race.
Not making a claim on that, but I think there's a solid argument that whatever version of a given tech lets men see tits is going to have an edge, even if it is inferior in other ways.
Well, that's the interesting thing.
AI gets hyped up, as e.g., an infinitely patient and knowledgeable tutor, that can teach you any subject, or a therapist, or a personal assistant, or editor.
All these roles we generally welcome the AI if it can fill them sufficiently. Tirelessly carrying out tasks that improve our lives in various ways.
So what is the principled objection to having the AI fill in the role of personal companion, even romantic companion, tireless and patient and willing to provide whatever type of feedback you most need?
I can think of a few but they all revolve around the assumption that you can get married and have kids for real and/or have some requirements that can only be met by a flesh-and-blood, genetically accurate human. And maybe some religious ones.
Otherwise, what is 'wrong' with letting the AI fill in that particular gap?
Strong Agree from me.
But now we can get EMOTIONALLY ATTACHED to the Algorithm. Or at least, the algorithm's avatar.
Think that over for a second.
New in Compact Magazine: Neither Side Wants to Emancipate Women
Twice this year, I found myself at conferences where a familiar question surfaced: Why do women not vote conservative? The tone was not hostile, only puzzled. Conservative women asked it themselves, with a kind of weary civility. But none of the answers seemed to satisfy. Some cited the state’s failure to support both motherhood and career; others blamed the lingering shadow of a conservatism that once sought to tether women to secondary roles.
No one could explain why so many women still turn away from even the most progressive forms of the right. Why do they keep voting for a left that consistently throws them under the bus, prioritizing for instance ideologies that deny biological sex and insist on men’s feelings and desires? The answer is simple, although no one wants to see it: Conservatives offer women performative reverence. Progressives offer equally performative protection. But no one offers women the thing they were once promised: freedom.
What freedom? How are you not free?
Of course, we already know that there's something rhetorical about this question, at least in the sense that we can reasonably ask whether anyone is in fact free. It's not an easy thing to nail down, you know? Lenin was asked if the revolution would bring freedom; he responded, "freedom to do what?". You have to specify, it's not self-evident. It's easy to be envious of the apparent freedom of others while also failing to appreciate their own unique forms of unfreedom. The master is relatively more free than the slave, no one can deny this; rare is the master who would switch places. But is the master free, simpliciter? Now it's not so clear. Marxists would say that no one is free, not even the capitalists, not as long as the task of capitalism remains unfulfilled. Capitalism is freedom, to be sure, but it is an unfree freedom, a freedom that poses a riddle that remains unsolved. But, let's stick to the issue at hand.
In the United States, women have leaned left for decades, not out of fervent ideological commitment, but through the steady pull of education, work, and shifting social norms. In 2020, Edison exit polls showed that 57 percent of women voted for Joe Biden, compared to 45 percent of men. Across Europe, too, women often favor center-left parties offering tangible supports: childcare, healthcare, material security.
But the dilemma runs far deeper than electoral politics. It touches upon the very essence of what it means to be free. I remain loyal to the feminist promise, however battered or dimmed, of genuine emancipation for women. This vision is not content to merely manage or glorify womanhood, but to transcend its limitations altogether, to be more than a body assigned a function, to move beyond the scripts of sex and tradition, and to claim the dignity of self-authorship. I never wanted merely to be accepted as a woman; I wanted to be free.
[...]Women do not lean left because it offers a credible path to emancipation. They do so because the right never even tried, and because the left, despite everything, still carries a faint echo of that promise.
What are you "transcending", and how? How do you not already have the "dignity of self-authorship"? What are you talking about?
(I'm going to tell you what I think she's talking about, just hang tight.)
Well, let's start with the objective facts of the matter. Women can already "self-author" themselves into essentially anything. Vice President (admittedly not President of the United States yet, but there's no reason we couldn't get there in short order), professor or artist, blue collar laborer, criminal, and anything else above, below, or in between. There are plenty of female role models to follow in all these categories. To the extent that there still exist "systemic privileges", actual explicit institutional privileges, they're mostly in favor of women now: in university admissions, in hiring, in divorce and family courts, and so on. Women are doing pretty good for themselves! Maybe they weren't 150 years ago, maybe they aren't if we're talking about Saudi Arabia or Iran, but in the 2025 Western first world? What freedoms are they missing?
And yet the author of the linked article perceives that something is missing. She perceives that women, as a class, do not have freedom, do not have the dignity of self-authorship. What do these terms mean? She doesn't say. But nonetheless, we should take her concerns quite seriously. Plainly, there are millions of women who share in her feelings, and millions of men who think she's onto something, and this continues to be the animating impulse of a great deal of cultural and political activity that goes under the heading of "feminism". Millions of people don't make things up. They're always responding to something, although their own interpretation of what they're responding to and what their response means can be mistaken. Plus, the author alleges that whatever phenomenon she's getting at, it plays a role in electoral politics, so you should care about it in that sense as well.
We should again note the author's hesitation to concretely specify her demands. If the issue were "the freedom to have an abortion" or "the dignity of being taken seriously in STEM", then presumably, she would have simply said that. But she makes it clear that the issue is freedom as such, and dignity as such; it's a gnawing, pervasive concern that you can't quite put your finger on. It's an abstract concern. So, we may be inclined to try a more abstract mode of explanation to explain why she feels the way she does.
Human interaction is predicated upon the exchange of value. There'd be no reason to stick around with someone if you weren't getting something out of it, even if all you're getting is some company and a good time. (There is a philosophical problem regarding whether pure altruism is conceptually possible; if you help someone, and you receive in exchange nothing but the satisfaction of having helped someone, then haven't you received something of value, thereby rendering the altruistic act "impure"? What if you don't even feel good about it, could it be pure then? But then, how were you motivated to help in the first place if you didn't even feel good about it? Regardless of how we answer these questions, I believe we can put the idea of absolute pure altruism to the side, because if it exists at all, it surely encompasses a minority of human interactions.)
We want to provide things of value to other people. But value is both a blessing and a curse. You want to have it, but it also weighs you down, it gets you entangled in obligations that you can't quite extricate yourself from. When you have something of great value, it tends to become the only thing that people ever want from you. We can consider Elon Musk as a figure of intense material and symbolic value. He's one of the wealthiest men alive, he runs X, he runs SpaceX, he had a spectacularly public falling out with Trump, and these factors undoubtedly dominate in virtually all of his interpersonal interactions. It's probably a bit hard for him to just be a "normal guy" with "normal friends", innit? Imagine him saying to someone, "when we're hanging out, I don't want to be Elon Musk, I just want to be Elon, y'know? Don't think of me as Elon the business tycoon and political figure. Think of me as, Elon the model train builder, or Elon the DotA player. Yeah, think of me like that instead. That's the identity I want you to symbolically affirm for me". His relations might make an attempt to humor him, although I don't think they'd be particularly successful in their attempts. His extreme wealth alone will always warp his interactions in ways both conscious and unconscious.
It is my contention that (healthy, reasonably attractive) women experience a heavily attenuated version of this phenomenon essentially from birth, which helps explain the pervasive irritation that some women feel at the simple fact of, well, being women. The constant nagging feeling that something is still not quite right, no matter how much progress is made on formal and even cultural equality (or even cultural domination, as may be the case in certain contexts).
If you were born with a female body, then you were gifted ownership of one of the most valuable possessions on planet earth. This is, again, both a blessing and a curse. This confers to you certain privileges and opportunities, but on the flip side, there is no way to ever turn this value off (aside from ageing -- but, even then...), to take respite from this fountain of value. You're in for the whole bargain, all of it, all the time. The value of the female body is a matter of pure economics; it is not based on the internal subjective psychological states of any individual or class of individuals. A man can impregnate many women in a single week. A woman, once impregnated, is tied up for 9 months. Her time cannot be apportioned as freely. Scarcity is the precondition of value; this is the law of everything that is, was, and shall be.
As a natural consequence of the extreme value of her body, the body comes to dominate her relations with others, both materially and symbolically. She correctly perceives that when people (well, men, at least) think about men, the properties they notice in order of salience are "web developer, white, middle class, male, father...", something like that. But when people think about her, the ordering is "woman, web developer, white, middle class...". Her body is what people want, it's what they're seeking; or at least, this is always necessarily a lurking suspicion. This, I believe, is the root of the aforementioned "abstract" concern with "the dignity of self-authorship"; it's not just the ability to become say, a prominent mathematician or artist in material reality, but to have that reciprocally affirmed as your primary symbolic identity by others. That's when we feel like we have dignity: when we can control how other people see us. I don't doubt that there have been times when a woman was being congratulated by male colleagues on the attainment of her PhD, or her promotion to the C-suite, and still there was a nagging doubt in the back of her mind that went, "........but you still see me as a woman before anything else, don't you?" Or, perhaps on the verge of frustration when talking with a male friend, she wanted to say, "look, I know every time you look at me I have this glowing halo effect around me, like you're wearing fucking AR goggles and they're telling you I'm an NPC that will give you a quest item or some shit, but can you please just take the goggles off for one day and just look at me as, well, me for a change?" And, I'm sorry to say, but here comes the really depressing part of the story: the goggles can't be removed. That glowing halo effect is glued to your tooshie, and it's not going anywhere. 
"Sexists" are at least appreciated for their forthrightness on this point; the reviled "male feminist" is correctly perceived to be simply dishonest about it. I suppose that's a bit of a downer. But, we all got our own shit to deal with. Take solace in the fact that you're just like everyone else in that regard.
Elon could at least conceivably give up all his wealth, his titles, his positions of symbolic authority, and start from zero. Because the male body has little to no intrinsic value, it's easier for men to become a "blank slate". But when your body itself is the source of this overbearing value? That's a bit harder to rid yourself of.
This, at any rate, is a psychological theory to explain the origin of the discourse in the linked article, a discourse that would otherwise seem to fly in the face of all available evidence. But I'm open to alternative theories.
Hot on the heels of failing out of art school and declaring himself the robofuhrer, Grok now has an update that makes him even smarter but less fascist.
And... xAI releases AI companions native to the Grok App.
And holy...
SHIT. It has an NSFW mode. (NSFW, but nothing obscene either.) Jiggle Physics Confirmed.
I'm actually now suspicious that the "Mecha-Hitler" events were a very intentional marketing gambit to ensure that Grok was all over news (and their competitors were not) when they dropped THIS on the unsuspecting public.
This... feels like it will be an inflection point. AI girlfriends (and boyfriends) have already one-shotted some of the more mentally vulnerable of the population. But now we've got one backed by some of the biggest companies in the world, marketed to a mainstream audience.
And designed like a fucking superstimulus.
I've talked about how I feel there are way too many superstimuli around for your average, immature teens and young adults to navigate safely. This... THIS is like introducing a full grown Bengal tiger into the Quokka island.
Forget finding a stack of playboys in the forest or under your dad's bed. Forget stumbling onto PornHub for the first time, if THIS is a teen boy's first encounter with their own sexuality and how it interacts with the female form, how the hell will he ever form a normal relationship with a flesh-and-blood woman? Why would he WANT to?
And what happens when this becomes yet another avenue for serving up ads and draining money from the poor addicted suckers?
This is NOT something parents can be expected to foresee and guide their kids through.
Like I said earlier:
"Who would win, a literal child whose brain hasn't even developed higher reasoning, with a smartphone and internet access, or a remorseless, massive corporation that has spent millions upon millions of dollars optimizing its products and services for extracting money from every single person it gets its clutches on?"
I've felt the looming, ever growing concern for AI's impact on society, jobs, human relationships, and the risk of killing us for a couple years now... but I can at least wrap those prickly thoughts in the soft gauze of the uncertain future. THIS thing sent an immediate shiver up my spine and set off blaring red alarms immediately. Even if THIS is where AI stops improving, we just created a massive filter, an evolutionary bottleneck that basically only the Amish are likely to pass through. Slight hyperbole, but only slight.
Right now the primary obstacle is that it costs $300 a month to run.
But once again, wait until they start serving ads through it as a means of letting the more destitute types get access.
And yes, Elon is already promising to make them real.
It's like we've transcended the movie HER and gone straight to Weird Science.
Can't help but think of this classic tweet.
"At long last, we have created the Digital Superstimulus Relationship Simulator from the Classic Scifi Novel 'For the Love of All That is Holy Never Create a Digital Superstimulus Relationship Simulator.'"
I think I would be sucked in by this if I hadn't developed an actual aversion to Anime-Style women (especially the current gen with the massive eyes) over the years. And they're probably going to cook up something that works for me, too.
Nature itself thinks men are as valuable as women. Slightly prefers them even, at 1.05 to 1.
More are produced. This does not make them more valuable; more Honda Civics are produced than Porsche 911s, after all. Slightly later in life, it makes them far less valuable.
I would argue that we crossed the threshold into really bad a long, long time ago. Probably around the time of the serious adoption of Instagram/Facebook's algorithm changes. Many would place this date as 2012, right around when the smartphone went mainstream. AI wouldn't be as serious of a problem if you didn't have it in your pocket 24/7.
Yes all humans made mistakes. But in this case the humans caught the error and corrected it, even before someone archived the wrong copy.
That's something AI slop can't do.
Anyways, AI slop hallucinates at rates far, far higher than human journalists. My challenge from the last thread still stands: just make an AI prompt that reliably creates bland copywritten news articles without hallucinations.
OpenAI has that whole sycophancy thing going, where the AI is trained to agree with you, no matter how delusional, as this gets you to talk with it more.
While OAI is unusually bad, it's not unique. Just about every model has a similar failure mode, and Gemini 2.5 Pro is almost as bad while otherwise being a very smart and competent model.
Equally impractical idea:
Convert all arable federal lands into Strategic Amish Zones. With a TFR of 6 children per woman, we'd only need 9% of the country to become Amish to return national TFR to >2.
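The 9% figure checks out if you assume the non-Amish population holds a TFR of about 1.6; that value is my assumption (roughly the current US figure), not from the comment above:

```python
# Solve for the Amish population share p needed so the blended TFR hits a
# target: p * amish_tfr + (1 - p) * rest_tfr = target.
# rest_tfr = 1.6 is an assumed figure for the non-Amish population.

def amish_share_needed(target: float, amish_tfr: float = 6.0,
                       rest_tfr: float = 1.6) -> float:
    return (target - rest_tfr) / (amish_tfr - rest_tfr)

p = amish_share_needed(2.0)
print(f"share needed: {p:.1%}")  # about 9%
```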