self_made_human
Attain immortality, or die striving.
I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken as granted for someone in the profession of saving lives.
At any rate, I intend to live forever or die trying. See you at Heat Death!
Friends:
A friend to everyone is a friend to no one.
User ID: 454
Exactly, if a woman marries me for my money, extends me love and attention, raises my kids, watches me die of natural causes and then goes to the Bahamas to cry on a cruise ship, I'm not really seeing the issue here.
There are very few women who don't care about money at all. I ask the married male Mottizens here to consider what would happen if they suddenly gave away all their money, quit their jobs and then told their wives that. "But don't you love me for who I am?", you'll have to cry plaintively as she files papers and takes the kids.
She's never been permabanned. I seem to recall her saying she'd lost the password to her previous account, and she then turned down our offer to restore it.
Thank you, that's the one. My internal betting market had strong odds in favor of you being the first to find the link, good to see I'm well-calibrated.
doesn't quite match your assertion
Hmm. It seems I was misremembering. I'll weaken my claim: 18 (or my speculated 16) being peak female attractiveness isn't directly supported by the graph.
I will note:
as you can see, men tend to focus on the youngest women in their already skewed preference pool, and, what's more, they spend a significant amount of energy pursuing women even younger than their stated minimum. No matter what he's telling himself on his settings page, the median 30 year-old man spends as much time messaging 18 and 19 year-olds as he does women his own age. On the other hand, women only a few years older are largely neglected.
I think this supports part of my argument: namely, that by setting an age minimum at 18, OKCupid obscures the fact that many/most men would happily approach younger women if they had the option. I suppose this is even less controversial: women don't magically go from being divorced of sexual value at 18 years minus one Planck time to being hot when the clock strikes 12 on their 18th birthday.
Also look at the charts titled "The shape of the dating pool" and "how a person's attractiveness changes with time":
The latter shows that 18 year old women are about 75% as attractive as they are at their absolute peak at 21. They are roughly twice as attractive as they would be at 34. This strongly implies that women below 18 are more attractive than the majority of older women, the range restriction just doesn't allow us to measure this.
I had an ex who was actually two years older than me, but could have passed as 18 without much hassle. I visited London with her when I was 26ish, and she was 28. I remember getting dirty looks at a liquor store with her on my arm as we were gawking at the variety of booze on offer. The next time, when she went alone, she got even dirtier looks, and was finally accosted by both a random old granny and the lady at the till on suspicion of underage drinking. It was funny in hindsight, as much as women complain about getting carded, they're even more upset when it stops.
On the other hand, excluding venues with a policy of carding everyone who walks in, I haven't been specifically asked for ID since I was 16. I can only presume that we were giving off the impression of a sizeable age gap.
Anecdotes aside, I think the primary driver of age gap discourse is the bitterness of a specific age group of women engaged in intrasexual warfare that spills out into intersexual forms.
Ages 25-35, I'd say. Just young enough to be terminally online, unlike older women who grew up and settled down before this was capital-D Discourse. (There are very few grannies out there who are going to lecture their granddaughters about dating a 35 year-old when they're 22.)
They notice that the youth they once prized is fading, and while they're still perfectly happy to go for older men (as are almost all women), they resent the fact that the men in their ideal age range don't consider them to be in their ideal age range.
Lip-service to feminism makes it difficult to directly attack their competitors (younger girls) without coming off as bitter and butt-hurt. But you can attack the men. And if you can successfully pathologize male preference for youth as predatory, you accomplish two things simultaneously: you make the competing demographic seem like victims who need protection rather than rivals, and you make the men who prefer them seem like villains.
This reframing has the additional advantage of being unfalsifiable in ways that make it rhetorically robust. Any counterexample, any young woman who says she's perfectly happy in her relationship and was not victimized, can be explained as evidence of how thorough the manipulation was. She doesn't know she's a victim. That's the worst part.
The frontal-lobe argument is where things get especially interesting. The claim is that the prefrontal cortex isn't fully developed until 25, therefore people under 25 lack sufficient judgment to consent to relationships with older partners. I've seen this argument made by people with actual MDs on /r/medicine, which I find both impressive and alarming. It's impressive because it successfully launders a social preference into neuroscience. It's alarming because it's bad neuroscience.
Neurodevelopment is continuous. The "fully developed at 25" framing suggests a step function where below 25 you're basically a golden retriever and above 25 you're suddenly Immanuel Kant. This is not how brains work. The research shows gradual changes in certain cognitive and regulatory processes, with enormous individual variation, and basically no evidence that this translates into systematic inability to make reasonable decisions about relationships.
The younger girls? They absorb this by cultural osmosis. Younger Gen Z is actually the most vocal about age-gap discourse. Unfortunately (or fortunately), that isn't enough to overcome their innate biological preference for older, successful men, so actual behavior doesn't change much. If a 20 year old girl meets a 30 year old man she thinks is cute, she'll usually have few qualms about sleeping with him or getting into a relationship, age-gaps be damned.
Power-disparity is bad? Huh, someone should tell all the women who prefer exactly that kind of disparity, so long as it's tilted toward the men they desire. Men tend to be more focused on attributes such as physical attractiveness and youth, which are, no prizes for guessing, more common in younger women.
I find such pathologization of universal human preferences distasteful, doubly so when my field is molested and forcefully conscripted to shore up bad arguments. Oh well, so be it. I'm lucky enough to be a MILF enjoyer and thus immune from direct blowback for the most part, even if I regretfully note that "MILF" increasingly just means women my age.
(Another anecdote: I remember grinding on a girl I vaguely knew at a club in Scotland. An older friend of mine had a thing for a bisexual woman about the same age as me. She ended up chatting with the first girl, who seemed receptive to her advances. Then the girl disclosed that she was 19, and that made the woman freak out, as they later explained in our company. I put aside any plans to approach the girl later, since the headache was far from worth it.)
If I was less lazy/busy, I'd insert the usual OkCupid stats blogs/archives from before they were bought and cucked. They showed that female attractiveness peaked at 18, but that was their minimum age cutoff, so I suspect the actual figure is even lower at around 16. Men also showed tolerance to wider age gaps as they got older. 30 year old and 35 year old men showed roughly the same willingness to approach 25 year old women.
I believe Gwern has a copy. Someone please do this in the comments, thanks, :*
Just look at what the side bar on the blog is titled.
I think my actual favorite by Watts is the Sunflower series/novella. There's no scope for heavy handed ecological metaphors, just good old fashioned scifi and existential dread.
That's like saying Einstein and a village idiot both suffer from the "same" problem because they stub their toes at equal rates. Or saying that a drunk Asian grandma and a professional F1 driver are equally incompetent because F1 drivers crash their cars too.
How often they fail is important.
Now that's actual insanity. I presume you mean you used GPT 3.5 (because that was the version in the first public ChatGPT release) vs GPT-4.
The actual GPT-3 was a base model, it wasn't instruction tuned.
I actively used GPT 3.5 when I was learning how to code, and found it useful but frustratingly inaccurate. I remember trying GPT-4 during the same period, and it was so much better that I gave up all aspirations of directly switching from medicine to programming and ended up becoming a psychiatrist. Regardless of how good the AI was, I noticed that it was getting better, faster than I was. An excellent choice in hindsight.
If that is your serious opinion, then that is a genuine reason to discount anything you have to say about LLMs. You didn't even need benchmarks, it was as obvious as the performance difference between a rickety tuktuk and a Honda Civic.
Also reading Legend of the Galactic Heroes again.
I tried watching the anime after seeing it shared as an example of a "rational"(ish) anime.
The first episode (all that I bothered watching) disappointed me greatly. The so-called strategic genius won a fleet battle against all odds by using tactics obvious to a particularly bright seven year old. Someone tell me if it's worth persisting despite poor first impressions.
Sure, if we're being strict about things. But then there's everything else Watts says, which makes me feel justified in saying that was his subtext/implication. He comes out and says so!
I'm probably misremembering. I think I've read the book at least 5 times, but the last time was probably over a year ago.
The point still stands: we have limited insight into the actual degree of consciousness in a sleepwalking state. It's clearly abnormal, but our understanding of neuroscience can't confidently rule out that consciousness is present; since the ability to form long-term memories is largely disabled, any consciousness that was present couldn't be reported by the sleepwalker later (the same reason you start forgetting a dream as soon as you wake up).
If you've ever lucid dreamed (I haven't, sadly), that demonstrates the ability to be aware and at least partially conscious during REM sleep. Sleepwalking is NREM behavior, sure, but it's not possible to say that the sleepwalker is entirely unconscious; we just don't know.
Even if they're performing complex motor behaviors, I strongly suspect that overall performance is hampered. They might (in rare cases) drive a car, but I doubt they drive as well as they would fully awake. I could be wrong, but without the ability to subject an active sleepwalker to a battery of cognitive tests, I'll hold that position. It's a very tricky subject to study.
Eh, I have mixed feelings on the topic. Watts did his best to rationalize the concept with evobio, but that only gets you so far with vampires. It's kinda cool, but they're far from plausible organisms.
Oooh they're scary dangerous predators that would murderise us all if they could. Yeah, and so could great white sharks, with their dead shoe-button eyes.
Unlike sharks, vampires are depicted as both amoral/murderous, and more intelligent than us silly humans.
We're not going to be murdered by sharks any time soon, and the sentimentality around the way some people treat them accords perfectly well with the stupidity of, as you point out, letting the vampires walk around unfettered. I can easily believe some people would be greedy and stupid enough to think they could make pets out of vampires and use them for PROFIT. But the vampires themselves? There's nothing there, they're just automata. Or sharks, perfect killing machines but no higher goal than that.
The thing is, they don't roam around entirely unfettered! In-universe, they're recognized as highly dangerous, and mitigation measures are put in place:
- The original vampires were highly territorial hypercarnivores who couldn't stand competition. The resurrected ones had those tendencies ramped up; they were described as murdering each other if allowed into close proximity. Think shoving two male tigers into the same enclosure.
- Their handlers thought that this instinctual intolerance of their own kind would prevent scheming and conniving. They were very, very wrong. The exact mechanisms by which the vampires coordinated their rebellion are excellent, probably one of the best depictions of the power of decision theories for modeling and coordination. They just imagined what they'd do if in the place of another vampire, and vice-versa, solved for the equilibrium, and acted, independently and simultaneously, without ever having to actively exchange information with their kin. Hats off.
- The crucifix glitch was weaponized against them; the belief was that if they went off the reservation, they'd die painfully as soon as the drugs that stopped them from having painful and lethal seizures wore off.
The humans weren't entirely complacent, but they were still unforgivably insufficiently paranoid about creatures smarter than them, which they knew to be hostile by default. The Vampires consistently use their superior physical prowess to murder normal humans, not just their brains.
So why even let them have that physical prowess? It doesn't take a genius to say "hey, maybe we should give them the grip strength of an obese 4channer". The Vamps were kept around for their brains, not their brawn. It added nothing while making them a greater threat. This is, as far as I'm concerned, giving the humans an idiot ball. The ways the vampires circumvented their other shackles are understandably hard to predict without the benefit of hindsight. Tearing people apart with their bare hands isn't.
You know what? I don't think he is engaging with the article. The article specifically mentions GPT 5.2 Pro seven times, two of which seem, to my read, to imply that that's what he's using. There is one moment where he just says "GPT 5 Pro". Perhaps he just happened to leave off the ".X" in this one spot. Perhaps I'm reading the other seven mentions of GPT 5.2 Pro wrong, and the dirty secret is that he's using 5.0. I suppose he doesn't say in big bold highlighted words, "I'm definitely using 5.2 and not 5.0," so sure, maybe one could say that it would be nice to have a clear statement.
I checked, and this seems correct.
On that basis, I can't really disagree with your claim that @Poug didn't engage with the article. Being charitable, it's exceedingly common to see this happen in the wild, so he might have jumped to conclusions, but neither you, nor the author, seems to have made that kind of error and it's unfair to criticize you on those grounds.
Sure, Rorschach is more advanced than humanity, but that obviously doesn't prove that consciousness is a drag, any more than someone being taller and balder than you proves that hair is keeping you short.
Rorschach is explicitly described as a p-zombie/Chinese Room, and is used as an existence proof for superintelligence without qualia or consciousness. I struggle to separate in-universe speculation from author fiat, I doubt that Watts is the kind to devote that much screentime to an idea without partially endorsing it.
It's the most technologically advanced entity in Sol, it's doing very well for itself, and all without being conscious. I think that constitutes a claim that consciousness isn't particularly important.
Anyway, after writing this, I had GPT 5.2 Thinking check the version hosted on Archive for direct quotes:
From Siri’s internal monologue near the end (the book’s most on-the-nose anti-sentience passage):
“It begins to model the very process of modeling. It consumes ever-more computational resources, bogs itself down with endless recursion…”
“Metaprocesses bloom like cancer, and awaken, and call themselves I.”
“The system weakens, slows… advanced self-awareness is an unaffordable indulgence.”
“This is what intelligence can do, unhampered by self-awareness.”
That last line is basically your exact request in one sentence.
From the Notes and References (the back-matter discussion of consciousness, with Watts stepping partly out of “story voice”):
“Consciousness does little beyond taking memos… rubber-stamping them, and taking the credit for itself.”
“The nonconscious mind… employs a gatekeeper… to do nothing but prevent the conscious self from interfering…”
“It feels good… makes life worth living. But it also turns us inward and distracts us.”
“While… people have pointed out the various costs and drawbacks of sentience, few… wonder… if… it isn’t more trouble than it’s worth.”
It also found a full interview where Watts, out of universe says:
It finally occurred to me that if consciousness actually served no useful function – if it was a side-effect with no adaptive value, maybe even maladaptive – why, that would be a way scarier punch-in-the-gut than any actual function I could come up with. It would be an awesome narrative punchline for a science fiction story. So I put it in.
Of course, not being any kind of neuroscientist, I had no doubt that I’d missed something really obvious, and that if I was lucky a real neuroscientist would send me an email setting me straight. At least I would have learned something. It never occurred to me that real neuroscientists would start arguing about whether consciousness is good for anything. In hindsight, I seem to have just blindly tossed a dart over my shoulder and hit the bullseye entirely by accident.
https://milk-magazine.co.uk/interview-peter-watts-sci-fi-novel-blindsight/
https://x.com/lauriewired/status/2020006982598685009?s=20
This is the closest I've ever come to seeing usage in the wild, and Laurie claims it's applied by some flavor of analyst. I suppose it's neat?
Well, I don't see myself crossing the bright line of actually posting my essay here and then begging for votes. I think simply soliciting suggestions and mentioning a rather extensive list of potential candidates I've come up with is probably fine. I don't think @ScottA would mind.
So you may want to avoid stating what your final decision on this topic is.
Fair enough, but I'm still in the concepts-of-a-plan stage.
Did you know that visualizing data in the form of faces is an actual technique?
https://en.wikipedia.org/wiki/Chernoff_face
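The core of the technique is just a mapping from each variable of a data record to one facial parameter. A minimal sketch of that mapping step (the feature names and bounds here are my own illustration, not from any particular plotting library):

```python
def chernoff_features(row, bounds):
    """Map one data record to normalized facial-feature parameters.

    Each variable drives one facial feature, which is the whole idea
    behind Chernoff faces; values outside the expected bounds simply
    saturate the feature at 0 or 1.
    """
    feature_names = ["face_width", "eye_size", "mouth_curve", "brow_slant"]
    params = {}
    for name, value in zip(feature_names, row):
        lo, hi = bounds[name]
        t = (value - lo) / (hi - lo)          # rescale to [0, 1]
        params[name] = min(1.0, max(0.0, t))  # clamp extremes
    return params
```

A renderer would then draw, say, an ellipse whose width tracks `face_width` and a mouth arc whose curvature tracks `mouth_curve`; the statistical content lives entirely in the mapping above.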
Making them screaming faces? Subtlety is a lost art.
I do not think it's fair to say that @Poug didn't engage with your post.
If you say:
It seems to me to be a balanced take. He's bullish and hopeful on the future, while trying to be accurate/realistic about current capabilities, while remaining somewhat concerned about possible problems
Then it is entirely fair to point out that the person you're using as an authority isn't using cutting-edge models that correctly capture "current capabilities". A few months is a very long time indeed when it comes to LLMs.
That is all I have to say, and I mean it. I'm not a professional mathematician, I can't attest to their peak capabilities as a primary source. The last time I was able to was when I got my younger cousin (a Masters student then, now postgrad in one of the more prestigious institutions here) to examine their capabilities in my presence.
"Is the one-point compactification of a Hausdorff space itself Hausdorff?" was a problem that I could actually understand, after he showed me the correct answer. The LLMs of the time were almost always wrong; six months later we got mixed results; but since about a year ago they've gotten it right every time (restricting ourselves to reasoning models, and you shouldn't use anything else for maths).
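(For anyone wondering about that problem: the answer is no in general, and the precise condition is a textbook fact of point-set topology. A sketch from memory, so check it against a reference before trusting it:)

```latex
For a Hausdorff space $X$, the one-point compactification
$X^{+} = X \cup \{\infty\}$ is Hausdorff iff $X$ is locally compact.

($\Leftarrow$) If $x \in X$ has a compact neighbourhood $K$, pick an open
$U$ with $x \in U \subseteq K$; then $U$ and $X^{+} \setminus K$ are
disjoint open sets separating $x$ from $\infty$.

($\Rightarrow$) The open sets containing $\infty$ are exactly the sets
$X^{+} \setminus K$ for compact $K \subseteq X$, so separating $x$ from
$\infty$ forces $x \in U \subseteq K$, i.e. $x$ has a compact
neighbourhood.

Counterexample: $\mathbb{Q}$ is Hausdorff but not locally compact, so
$\mathbb{Q}^{+}$ is not Hausdorff.
```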
Now? He went from being skeptical about my claims of near-term AI parity in mathematics to what I can only describe as grim resignation.
(Now being six months ago, last time I saw him.)
In the interest of fairness, I think @Poug is probably incorrect when he says:
But you will also notice the absense of issues you are facing
I'm not saying this with confidence, because that's just my recollection of what actual mathematicians say these days, including Tao himself. I just mention it to hopefully demonstrate that I'm trying very hard not to be a partisan about things.
It's excellent to see you living up to the latter half of your username. Here, have a cookie for good behavior.
Tell me about it. I was looking for published research on administering human IQ tests to LLMs, and the most recent example I could find is a preprint that tested cutting edge models like 4o and Sonnet 3.5. Damn thing hadn't even made it through peer review. I had to settle for a relatively niche website that independently administers the Mensa IQ test to the latest models, and while that's much better than nothing, it demonstrates that standard academia is entirely unable to keep up with the frontier.
Huh, I haven't heard of that one before, and up till this point, I thought I'd read pretty much everything he's ever written. Maybe it's even more misanthropic when translated to Polish? You guys aren't known for your sunny vibes and general optimism.
In general, I agree that Watts is deeply, borderline-fanatically misanthropic. I regularly check in on his blog, and a running theme is his sentiment that humans have Wrecked The Planet (ecological collapse, global warming) and we're going to pay for our sins/hubris by quite possibly going extinct. There is such a thing as overstating the seriousness of what is otherwise a real problem. Global warming is an eminently solvable problem, for very little money, should we get over our civilizational allergy to geoengineering. Of course, the idea of using technology to solve things instead of degrowth and industrial regression is deeply antithetical to his worldview. Recently, he's been slowly migrating to AI-bashing, which is a very modest directional improvement.
For now, he's busy writing polemics and giving talks at moderately populated scifi seminars. A retired academic in Canada has largely aged out of active terrorism, that's a young man's game.
I was considering submitting my review of Rejection
The collection of short stories? I agree that it would have been unlikely to win, but that's on the basis of general ACX-audience inclination, and not because of your chops as a writer (very real) or the quality of the book (I have no clue).
Incidentally if you'd like me to read over your draft and offer feedback, I'd be more than happy to.
Thank you again! This reminds me that I really need to apologize for asking you to send me your draft and never getting around to giving suggestions :(
If it's any consolation, I have consistently felt bad/embarrassed about it ever since. I try and keep my promises in general.
I can take an actual look this time, assuming you still want a second set of eyes on it.
Thanks! Out of curiosity, do you plan on throwing your hat in the ring?
I'd have to write the review from scratch, but if you want a TLDR:
- Watts posits that consciousness is an evolutionary spandrel and that it's possible to have intelligence/superintelligence without consciousness. While not mentioned in the book, the usual supporting evidence is the observation of sleepwalking humans or blackouts (in which case we haven't ruled out that the person is partially or fully conscious; they might simply lack the consolidation of long-term memory required to remember being conscious, which is pretty strongly evidenced in alcohol blackouts). Not only does he claim it's not strictly necessary, he posits that it's suboptimal, a drag on performance.
- Our best theories of consciousness like IIT and GNWT seem to be partially supported and partially discredited based on recent research. That means that it's possible to salvage Watts's claim, but no strong consensus either way.
- We've found clear correlations between consciousness and statistical phenomena on the whole-brain scale. You could look up edge-of-criticality models for more. The gist is that what we perceive as normal consciousness, the type optimal for normal life, is a very fine balance in neuronal activity, with chaos on one side and rigidity on the other. This is actually a blow against the consciousness-as-epiphenomenon view Watts endorses. These models cash out in actual predictions, and they can measure "degrees" of consciousness from stupor to full alertness using physical metrics.
- LLMs are the first real xenointelligences. A few years ago, the case for them entirely lacking consciousness or internal qualia was the default. Now, we have very interesting evidence suggesting active ability to introspect and awareness of their internal cognition in a way not specifically trained into them:
https://www.anthropic.com/research/introspection
- I still wouldn't go as far as to claim that LLMs are conscious, since we're awful at conclusively identifying consciousness in humans, let alone animals or AI, but they seem to possess at least some of the necessary elements.
- I fucking hate the Chinese Room, it's an impoverished excuse for a thought experiment with an obvious answer: the room+human system speaks Chinese, even if no individual component does. You speak English, even if no single neuron in your brain does. I find it ridiculous that it's brought up today as if it means anything. The aliens in the story are specifically described as Chinese Rooms, and you can guess what I think of that. If I was writing a full essay, I'd add more about the sheer metaphysical implausibility of p-zombies in general, but those aren't original observations.
- If I'm nitpicking (some very annoying nits), the baseline humans and their pet AGIs show suicidal incompetence in-universe. You've got hyperintelligent autistic superpredators on the loose? And you let them walk around? Break their spines and put them in a wheelchair while on enough oestrogen to give them brittle bones/spontaneously manifest programming socks. The only reason that the primary safeguard was an aversion to straight lines intersecting at right angles is Watts trying to launder in the classical trope of vampires being averse to crucifixes. It's deeply dumb as an actual solution. Also, why didn't the supersmart AI actually do something about the vampire takeover? Are they stoopid?
Summing up: the case for the theories in Blindsight is weaker than at time of publication, even if no one can outright falsify them.
Edit: It's worth noting that I still love the books, it's in my top 10, maybe top 3. I even separate art from the author, I'm not sure if Watts is terminally depressed or terminally misanthropic, but I suspect that the combination is the only thing preventing him from becoming a low-grade ecoterrorist (this is mostly a joke). I still highly recommend it to new readers, as long as they don't overindex from the existential crises.
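As an aside on the edge-of-criticality point in the TLDR: the standard metric there is a branching ratio, roughly the average number of units each active unit excites at the next step, with the critical value sitting at 1. A toy simulation (my own illustration, not any published model; only the ratio estimator itself is the standard one from the avalanche literature) shows how the estimate tracks the true value on either side of criticality:

```python
import random

def branching_step(n_active, sigma, rng):
    # Each active unit can trigger up to 2 descendants, each with
    # probability sigma/2, so the expected branching ratio is sigma.
    return sum(1 for _ in range(2 * n_active) if rng.random() < sigma / 2.0)

def estimate_sigma(sigma, trials=500, seed=42):
    """Estimate the branching ratio from simulated cascades:
    total descendants divided by total parents."""
    rng = random.Random(seed)
    parents = 0
    descendants = 0
    for _ in range(trials):
        n = 50  # seed each cascade with 50 active units
        for _ in range(5):  # follow a few generations
            nxt = branching_step(n, sigma, rng)
            parents += n
            descendants += nxt
            n = nxt
            if n == 0:
                break
    return descendants / parents
```

A subcritical system (sigma below 1) produces cascades that fizzle out, a supercritical one produces runaway activity, and the claim in the criticality literature is that alert wakefulness corresponds to estimates pinned near 1, while stupor and anesthesia push the measured ratio away from it.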
Oops. I'll fix that brain fart, thanks.

Very few, but still non-zero. Classic examples would be Ender's Game; then we've got HPMOR and other rat-fic.