self_made_human
amaratvaṃ prāpnuhi, athavā yatamāno mṛtyum āpnuhi ("Attain immortality, or die striving for it")
I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken as granted for someone in the profession of saving lives.
At any rate, I intend to live forever or die trying. See you at Heat Death!
Friends:
A friend to everyone is a friend to no one.
I've done my time with Stable Diffusion, from the closed alpha to a local instance running on my PC.
Dedicated image models, or at least pure diffusion ones, are dead. Nano Banana does just about everything I need. If I were anal about the drop in resolution, I'd find a pirated copy of Photoshop and stitch the output together myself; I'm sure you can work around it by feeding crops into NB and trusting they'll align.
All of the fancy pose tools like ControlNet are obsolete. You can just throw style and pose references at the LLM and it'll figure it out.
I suppose they might have niche utility when creating a large, highly detailed composition, but the pain is genuinely not worth it unless you absolutely must have that.
I wanted to write a post about some of these events, specifically the change in attitude among titans of industry like Linus Torvalds and Terence Tao. I'm no programmer, but I like to peer over their shoulders, and I know enough to find it profoundly disorienting to see the creator of Linux, a man whose reputation for code quality involves tearing strips off people for minor whitespace violations, admit to vibe-coding with an LLM.
Torvalds and Tao are as close to gods as you can get in their respective fields. If they're deriving clear utility from using AI in their spheres, then anyone who claims that the tools are useless really ought to acknowledge the severe Skill Issue on display. It's one thing for a concept artist on Twitter to complain about the soul of art. It is quite another for a Fields Medalist to shrug and say, "Actually, this machine is helpful."
Fortunately, people who actually claim that LLMs are entirely useless are becoming rare these days. The goalposts have shifted with such velocity that they've undergone a redshift. We've moved rapidly from "it can't do the thing" to "it does the thing, but it's derivative slop" to "it does the thing expertly, but it uses too much water." The detractors have been more than replaced by those who latch onto both actual issues (electricity use, at least until the grid expands) and utter non-issues to justify their aesthetic distaste.
But I'm tired, boss.
I'm sick of winning, or at least of being right. There's little satisfaction to be had in predicting the sharks in the water when I'm treading that same water with the rest of you. I look at the examples in the OP, like the cancelled light novel or the fake pop star, and I don't see a resistance holding the line. I see a series of rearguard actions. Not even particularly dignified ones.
> First they ignore you, then they laugh at you, then they fight you, then you win.
Ah, the irony of me being about to misattribute this quote to Gandhi, only to be corrected by the dumb bot Google uses for search results. And AI supposedly spreads misinformation. It turns out that the "stochastic parrot" is sometimes better at fact-checking than the human memory.
Unfortunately, having a lower Brier score, while good for the ego, doesn't significantly ameliorate my anxiety regarding my own job, career, and general future. Predicting the avalanche doesn't stop the snow. And who knows, maybe things will plateau at a level that is somehow not catastrophic for human employability or control over the future. We might well be approaching the former today, and certain fields are fucked already. Just ask the translators, or the concept artists at Larian who are now "polishing" placeholder assets that never quite get replaced (and some of the bigger companies, like Activision, use AI wherever they can get away with it, and don't seem to particularly give a fuck when caught out). Unfortunately, wishing my detractors were correct isn't the same as making them correct. Their track record is worse than mine.
The TEGAKI example is... chef's kiss. Behold! I present a site dedicated to "Hand-drawn only," a digital fortress for the human spirit, explicitly banning generative AI. And how is this fortress built? With Cursor, Claude, and CodeRabbit.
(Everyone wants to automate every job that's not their own, and perhaps even that if nobody else notices. Guess what, chucklefuck? Everyone else feels the same, and that includes your boss.)
To the question "To which tribe shall the gift of AI fall?", the answer is "Mu." The tribes may rally around flags of "AI" and "Anti-AI," but that doesn't actually tell you whether they're using it. It only tells you whether they admit it. We're in a situation where the anti-AI platform is built by AI, presumably because the human developers wanted to save time so they could build their anti-AI platform faster. This is the Moloch trap in a nutshell, clamped around your nuts. You can hate the tool, but if the tool lets your competitor (or your own development team) move twice as fast, you will use the tool.
We are currently in the frog-boiling phase of AI adoption. Even normies get use out of the tools, and if they happen to live under a rock, they have it shoved down their throats. It's on YouTube, it's consuming TikTok and Instagram, it's on the damn news every other day. It's in your homework, it's in the emails you receive, it's you double-checking your prescription and asking ChatGPT to explain the funny magic words because your doctor (me, hypothetically) was too busy typing notes into an Epic system designed by sadists to explain the side effects of Sertraline in detail.
To the extent that it is helpful, and not misleading, to imagine the story of the world to have a genre: science fiction won. We spent decades arguing about whether strong AI was possible, whether computers could be creative, whether the Chinese Room argument held water. The universe looked at our philosophical debates and dropped a several trillion parameter model on our heads.
The only question left is the sub-genre.
Are we heading for the outcome where we become solar-punks with a Dyson swarm, leveraging our new alien intelligences to fix the climate and solve the Riemann Hypothesis? Or are we barrelling toward a cyberpunk dystopia with a Dyson swarm, where the rich have Omni-sapients in their pockets while the rest of us scrape by in the ruins of the creative economy, generating training data for a credit? Or perhaps we are the lucky denizens of a Fully Automated Luxury Space Commune with optional homosexuality (but mandatory Dyson swarms)?
(I've left out the very real possibility of human extinction. Don't worry, the swarm didn't go anywhere.)
The TEGAKI example suggests the middle path is most likely, at least for a few years (and the "middle" would have been ridiculous scifi a decade back). A world where we loudly proclaim our purity while quietly outsourcing the heavy lifting to the machine. We'll ban AI art while using AI to build the ban-hammer. We'll mock the "slop" while reading AI summaries of the news. We'll claim superiority over the machine right up until the moment it politely corrects our Gandhi quotes and writes the Linux kernel better than we can.
I used to think my willingness to embrace these tools gave me an edge, a way to stay ahead of the curve. Now I suspect it just means I'll be the first one to realize when the curve has become a vertical wall.
Thanks!
I feel like someone might have answered this already, but I'm too lazy to look it up:
As someone who is curious about Gundam, where do I start?
I've always raised an eyebrow at this advice. Speaking for myself, I've never felt that photography distracted me from being "in the moment." If I'm visiting a nice place, I'm going to whip out my phone, take as many photos as I please, and then use my Mk. 1 human eyeballs. I don't perceive events entirely through a viewfinder.
And I notice that my memory of events is significantly enhanced by photos. I have forgotten a ton of things until I've seen a picture that either brought back memories or let me reconstruct them.
You would have to have a truly pathological attachment to the camera for taking photos at the frequency of a normal 21st-century human to be detrimental.
You need a psychiatrist. I am only two-thirds of one, but fortunately for you, I've got exams and that means actually reading some of the papers.
(Please see an actual psychiatrist)
The choice of initial antidepressant is often a tossup between adherence to official guidelines, clinical judgements based on activity profile and potential side effects, and a dialogue with the patient.
In short? It is usually not very helpful to worry too hard about the first drug. They're roughly equally effective (and where one is superior, it's by a very slim margin). But in certain situations:
- Can't sleep? Lost appetite? Mirtazapine
- Too sleepy? Already gaining weight? Absolutely not mirtazapine, consider bupropion or vortioxetine
- Afraid of sexual side effects? Bupropion or vortioxetine again, mirtazapine too
- Tried an SSRI and it didn't help? It's better to try a different class of antidepressant instead of just another SSRI, and so on.
(But before the meds, a physical checkup is mandatory, as are investigations to rule out medical causes. You're going to feel depressed if your thyroid isn't working, or if you've got Cushing's.)
- Antidepressants work. They beat placebo, but not by a massive margin.
- Effects are synergistic with therapy.
Unfortunately, you haven't given me enough information to make an informed choice. I'd need to know about the severity of your depression, graded based on symptoms, lifestyle, overall health and a bunch of other things. Hopefully your actual doctor will do their due diligence.
I would be the last person to claim that conscientiousness is unimportant. ADHD sucks.
But I can take a pill to improve my conscientiousness, and I can't take one that moves my IQ in a positive direction. So it is not nearly as harsh a constraint.
Just ask them for sources? You can also share output between multiple models and see their points of agreement or contention.
I know that OpenRouter lets you use multiple models in parallel, but I suspect a proper parallel orchestration framework is most likely to be found in programs like OpenCode, Aider, Antigravity etc.
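If all you want is the fan-out-and-compare part, you don't strictly need a full agent framework: OpenRouter exposes an OpenAI-compatible endpoint, so a few lines of Python will do. A minimal sketch, assuming the `openai` SDK; the model IDs, API key placeholder, and prompt are illustrative, not endorsements:

```python
# Fan one prompt out to several models in parallel via OpenRouter's
# OpenAI-compatible API, then print the answers side by side so you
# can eyeball points of agreement or contention yourself.
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_KEY",  # placeholder
)

# Illustrative model IDs; check OpenRouter's catalogue for current ones.
MODELS = ["openai/gpt-4o", "anthropic/claude-sonnet-4", "google/gemini-2.5-pro"]

async def ask(model: str, prompt: str) -> str:
    resp = await client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

async def main() -> None:
    prompt = "What's the evidence for X? Cite sources."
    answers = await asyncio.gather(*(ask(m, prompt) for m in MODELS))
    for model, answer in zip(MODELS, answers):
        print(f"--- {model} ---\n{answer}\n")

asyncio.run(main())
```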
Hmm? ChatGPT can definitely use web search when in thinking mode. I get links and citations by default, and more if I ask. You might want to check personalization to make sure you haven't set search to off by default.
Manic people are often happy even as they starve to death. But being happy while subject to genocide isn't the default state; that isn't just postulating a hedonic treadmill, it's setting it to overdrive in reverse.
Naming a few intellectuals isn't a very strong argument.
> It helps to throw away the entire concept of pre-med and just have entrance exams to study medicine in university that test both relevant biology knowledge (to be self-studied from common reference book(s)) and requisite math and physics ability. It's not perfect but it's better than just sailing in with a high IQ score or using some utterly bullshit proxy like a freeform essay or having the right after-school activities.
Say what you will about the inadequacies of the British and Indian medical pipeline, but this is a rather uniquely American stupidity. Pre-med offers nothing that just moving the MCAT forward wouldn't, and wastes several years of your youth on a degree that you likely won't use.
I took an exam straight out of high school, and that was that. I'll say less about everything that followed.
My impression is that historical nobility had a lot of status anxiety too! Not just status, but plain finances to boot.
We're used to the economy consistently growing, at a pace legible to human perception. This is a historical anomaly, and true in the West for maybe 400 years, and mere decades in other places.
Before this, it was very difficult to grow the pie. You were more concerned about slicing it up such that the children didn't starve. Look at the practice of primogeniture, or sending second sons to the navy. The family farm or even ducal holdings never seem to multiply, and if you slice them too fine, you'll be nobility in name alone.
This isn't the case any more! A smart parent, in the 20th century, could start saving and making sensible investments. You can do very well by your kids even if they turn out to be one of the dimmer bulbs in the shed.
While people may feel anxious today, even more so, that's vibes and not based on an assessment of facts or historical reality. Compound interest is a helluva drug, and might even be a better investment than sending your daughter to be an art-ho in Bushwick. The typical worst case scenario is them ending up on SNAP, not starving to death, as might have easily been the case in the past.
I can hardly predict the next decade with confidence, but I believe that money makes everything easier.
I'm counting the days till a patient socks me in the face. It'll save money on a nose-job.
Funnily enough, I've never seen restraints in use on the ward. Mostly because my placements have skewed geriatric, and there's only so much damage a delirious granny can do with a plastic spoon. That is not the same as it being the ideal level of restraint use; it boggles my mind how much shit UK doctors, nurses and hospitals put up with. I acknowledge that emergency sedation isn't perfectly safe, but neither is tolerating violence and agitation to the point where sedation becomes necessary. I haven't been offered any hazard pay, and I've had to patch up broken noses more than once.
I don't think I disagree. Competence is the most important thing, but it is also devilishly hard to pin down. That only gets harder when you need someone to demonstrate their competence before they get the job.
(And then you see person specifications asking for 5 years of experience in some React-knockoff that's only been out 2 years)
Unfortunately, there is often a massive, unavoidable delay between training for a job and getting a job. We want to know if someone will be a good surgeon before they hold a scalpel. How would you check if a 17 year old pre-med student will make for a good neurosurgeon if he won't do any neurosurgery for another 10 years?
That brings me back to the point that intelligence really is our most robust proxy. It's one of the few things in the psychometric literature that has resisted the replication crisis. It is still a proxy, and thus imperfect, but like democracy, it's the worst option except for all the others. If you want to go back to work-experience and trainability, we're going to need a lot more apprenticeships or internships. Those are much harder to scale than standardized tests.
> Just picking out this particular area of your comment, it amazes me when intelligent people (like you)
Thanks :*
> actually repeat this odd myth in the year of our lord 2025. What you're talking about as "sane" in the programming profession is exactly what you're decrying elsewhere as opaque vibes-based sorting. Programmers acting like slobs might have been rebellion against corporate life, or reflected a genuine lack of interest in social norms, twenty or thirty or fifty years ago. Today, it reflects precisely the opposite: tech-bros compete over who can performatively display their slobbery and betrayal of social norms as evidence of their talent. When professors and executives wore suits, choosing to wear a t-shirt meant something. Today, it is just another form of cultural signaling.
I am happy to accept that any legible marker for competence (or perceived competence) will be eventually gamed. It's not like turbo-autists are particularly good at gatekeeping or status games. Normies beat autists, normies are beaten by sociopaths, who are in turn kept in check by autists.
I'm familiar with SBF's performative actions. However, I still think it's clear that genuine eccentricity is better tolerated in programming circles: fursuits, blahajs and programming socks are far more prevalent there than almost anywhere else.
In other words, I think it's simultaneously true that the world of computers has a higher tolerance for off-kilter behavior and a significant number of people insincerely stealing that culture as their costume!
I'm sure HR and management would prefer someone with people skills who looks presentable, all else being equal. But the sheer tolerance is nigh unprecedented! You'd have to descend to the back of the kitchen with the line cooks before "is warm body" and "can do job" become the prevailing concerns.
Why the initial tolerance? The usual theories that struck me as plausible included a high prevalence of autistic traits, a less client-facing environment, and comparatively legible performance metrics. If you have a code goblin, then the additional latency from running fiber to their segregated basement is worth it. You didn't hire them for their good looks.
But you're right that this creates its own failure mode. When the signal becomes "looking like you don't care about signals," you get poseurs who carefully cultivate dishevelment. The difference, I'd argue, is one of substitutability and testing under load.
In a truly vibes-based profession (consulting, say, or certain flavors of academic humanities), the poseur can coast indefinitely. There's no moment where the rubber meets the road and reveals that there's nothing beneath the performance. Your PowerPoint looks good, your references are impeccable, and by the time the strategy fails, you've moved on to the next gig.
In programming, the compile button doesn't care about your aesthetic. The production system either works or it doesn't.* Yes, you can hide in a sufficiently large organization, you can take credit for others' work, you can fake it in meetings. But there's still a baseline floor of actual competence required. SBF could fool VCs with his League of Legends schtick, but he still needed actual programmers to build FTX. The fraud wasn't "Sam can't code," it was "Sam is embezzling customer funds." His technical team was apparently quite capable.
The point isn't that programming is immune to status games or that all programmers are autistic savants who only care about code quality. The point is that programming preserves a direct link between competence and output that many other professions have severed. You can fake the culture, but you can't fake the merge request. Well, you can try, but eventually someone has to read your code.
This makes programming comparatively more meritocratic, not perfectly meritocratic. The SBF types are gaming a second-order effect (convincing investors and managers that they're geniuses), but the underlying infrastructure still requires first-order competence (actually building the thing). In contrast, in fully vibes-captured professions, you can game all the way down. There is no compile button. There is no production server that crashes. There's just more vibes, turtles all the way down.
Your point about aristocratic standards being more legible is well-taken, though. Knowing which fork to use is indeed trainable in a way that "act naturally eccentric" is not. But here's where I think we diverge: aristocratic standards are more gameable by the wealthy precisely because they're so trainable. If you have money, you can buy the suit, hire the etiquette coach, send your kid to the right boarding school. What you can't buy (as easily) is the ability to pass a hard technical exam.
The ideal isn't "no standards" or "eccentric standards." The ideal is "standards that correlate maximally with the thing you're actually trying to measure, while being minimally gameable by irrelevant advantages." Standardized testing, for all its flaws, does this better than holistic admissions. A programming interview with live coding, for all its flaws, does this better than "did you summer at the right firm."
The clothing and manners debate is orthogonal to the core question of sorting. I don't particularly care if our elites wear suits or hoodies, as long as we're selecting them for the right reasons. My objection to aristocratic sorting isn't the aesthetics, it's the inefficiency. If your system selects for people who know which fork to use, and knowing which fork to use happens to correlate 0.7 with having rich parents but only 0.2 with job performance, you've built an inherited oligarchy with extra steps.
*I am aware of concerns such as code readability, good practices such as documentation, and the headaches of spaghetti code. But programming is still way closer to the metal than most other professions.
> Basically, leftists have a cognitohazard blind spot on this topic because if they allow themselves to even consider biological inequality then the superstructure of their belief system goes right back to the stuff of nightmares.
Hmm? I don't mean to accuse you of burying the lede, but the most prominent example of eugenics in living memory would be the Nazis. They were European, they were less than left wing, and they practiced both positive and negative eugenics. More Aryan Uber-babies with three blue eyes (more is better), fewer gypsies and schizophrenics.
The Right is hardly over its own hangups in that department.
> All well and good so long as we remember that "Merit" as measured by IQ is just the ability to do well in school and learn complicated things. It is not some end, just a talent like hand-eye coordination.
Just "learn complicated things"?
I'm afraid the "just" is doing a lot of heavy lifting! We live in a dazzlingly complex world; it's been several centuries since even the most talented person could have understood every facet of modern civilization and technology. Even von Neumann and Tao would die of old age before becoming true polymaths.
IQ is strongly correlated to a ton of good things, moderately correlated to a tonne more of other good things, and then weakly correlated with the metric fuck-ton of everything left. Income, physical and mental health, job performance! Even beauty is weakly correlated (so much for the Halo effect as a true fallacy). There are few things that can be tested as cheaply and easily while offering as much signal for the downstream traits we care about.
A quadrillion IQ brain floating in the void isn't worth very much, but we were never talking about intelligence in isolation. If grip strength was the defining factor for success in life, I'd be working on my handshake right now.
> As someone who is somewhat aristocratic (my family did not pay for Oxbridge's anything, but 200 years ago they probably could have) you are misunderstanding how it works. Think of it as meritocracy with a sliding window and a small momentum factor.
I do think of it that way? I meant to gesture at that when I said:
> This is far from the worst approach. The elites are elites for a reason. That reason is often hereditary. Even total regime change and societal upheaval usually has the dispossessed children of nobility (or the elites) almost inevitably percolate back to the top. This is evident in data from places as far-flung as China, where the grandsons of pre-revolution landlords often outperform their peers.
I agree that the reason for this phenomenon is simply innate talent in many spheres. Not even landgrabs and terrorism could keep the genes down (epigenetics is grossly overrated).
> Rather than the intelligent rising to the top after a major shakeup and then camping there until the next revolution, people broadly rise or fall through their lifetimes. My family were long ago pretty influential. They made many bad choices and a few good ones, and went from 'we own a castle and a good estate' to 'we own a farm and a small business' to 'sorry, kid, I gave everything to an exotic dancer' and then back up to 'decent upper-middle class' through the generations.
I suppose I shouldn't hide that the other side of my family started off much better than my dad. They weren't ever wealthy (wealth in Colonial India often meant landed gentry, merchants etc.), but they were part of a chain of well-educated intellectuals. The PMC before it was cool. Sadly for me, this never meant enough generational wealth that their kids could coast, though I haven't heard anything about squandered inheritances.
I don't deny that the old system had elements of meritocracy. It had plenty. At the same time, it didn't have the churn or the talent-finding power that standardized assessments and screens have today. And we need a great deal more talent; nobody has enough.
> This results in a society which is marginally less meritocratic but involves considerably less striving. Your brilliant father would have been unlikely to go from hauling crates to owning (a chain?) of hospitals, though it did happen, but would likely have gone from hauling crates to second-in-command of the hauling company, married to a nice girl of a higher class, with children raised in the style of that class who would move upwards or downwards from there according to their own ability. Especially since brilliance is clearer when the IQ variance in your profession is slightly higher.
That is achingly slow! A hypothetical talented kid from a humble background can do very well for himself in 3 decades because they're on a more even (and relevant) playing field. Med school required good grades. Higher training built up experience and competence. That's a very different place to be.
I'm sure my dad would have managed to make something of himself even in the aftermath of Maoist China. But the system that got him where he is worked out better for him, and for the rest of society, if I squint.
I called the previous system "Basically Fine", the same goes for the one we have today. But that's a low bar, we can do better. As it stands, I'm more focused on eliminating the really bad distortions on meritocracy, such as affirmative action, and I'm not losing sleep over legacy admissions in the best unis.
> First of all, most careers do not have uncapped potential for improvement. Let's say someone wants to become a physical therapist - they need to learn a variety of details about human physiology, be competent at working with people, and have the capacity to keep up with developments in the field. This is achievable by a 100 IQ person just as much as a 130, the primary difference will just be how much time and effort is required to acquire the knowledge. I put it to you that most fields have this characteristic. The difference between a god-tier PT and a typical one may matter a little on the edge cases but for the most part these people are indistinguishable in what they can accomplish. Meanwhile, other traits like personability and compassion may be more relevant distinguishers for how well this person does the job.
Fair point, but occupations such as physiotherapy aren't the point of contention (beyond the usual debate about whether or not they should be gated behind credentials, and if so, how heavily).
Let's talk about medicine: I would pay a sizable premium to have a shrink like Scott see me, instead of the modal kind, even if the latter delivers adequate care, and the returns diminish steeply. Outside of a single niche, better doctors/smarter-and-more-conscientious students go into the most competitive specialities. Within the same category, the truly great tend to become specialists and experts in their given domain.
Of course, the rate of return per IQ point can vary greatly. A 130 IQ janitor is just sensible about reading the signs that say "do not ever switch off the lab equipment". A hypothetical 170 IQ janitor probably won't stay a janitor for long.
On the other hand, a 130 IQ physicist might well be locked out entirely from the sorts of intellectual work a 170 IQ counterpart might produce.
Since we agree that this is heavily context dependent, and there are few/no professions where there's a negative return from IQ, we're baking the same cake, just arguing about the ratio of ingredients.
> Maybe you're in or adjacent to one of these, and are really griping about how the selection methods there are failing to identify intellectual capacity?
Medicine is very regimented. A doctor twice as smart as me completes their curriculum at the same pace; I'm not aware of accelerated med school programs of any quality. I think I've done decently enough, and am probably above average as a doctor in certain ways (as per exam results), but I don't delude myself into thinking I'd be a shoo-in at Harvard Med.
This is less personal angst and more general commentary. I don't think my kids will need an SAT coach, or to dig wells.
(The previous situation was Mostly Fine. I think the current state of affairs is Mostly Fine. It could still be better.)
On (Noticing) IQ (When You're Allowed To)
I was writing a reply to @TowardsPanna's question in the SSQ thread that got a bit out of control. Large enough that I decided to just post it here too.
For what it's worth, none of these ideas are particularly new, at least they aren't if you compulsively browse LessWrong (or even read a lot of Scott), but the older I get, the more I realize that novelty is often just a nice-to-have.
> Does an IQ taboo (established for political reasons associated with another taboo around HBD, or any other reason) contribute to more of a reliance in many people on the heuristics of social class, physical features, clothes, sociolect and prosody, credentials/profession, or even ideological conformity - and thus more of an opaque and effectively hierarchical society? A society where appearances become more important than the underlying reality, and where presumptions are not challenged? Where the average individual, who insofar as they've been taught anything about mental horsepower, has come to believe that it's about the development and growth mindset - any child can join any profession if they work hard and choose through free will to develop smartness; and knowledge - the person in higher education studying e.g. psychology becomes smart and competent through their acquiring of knowledge and routines and joining into a professional group? The flipsides of these coins being that someone who didn't go to college/university has stupidly chosen not to become smart and grow their brain and thus can blame themselves and should not be given much time of day? And if appearances are what matters, someone with perfect grooming and high class speech and all the shibboleths should be assumed to be smart rather than looking under the hood?
Yes. Of course?
Humans aren't stupid. We’re expert pattern-matchers. We’re distinctively evolved to be relentless Bayesian updaters, constantly scanning our environment for correlations that offer a survival or status advantage. We’re Noticers™. The problem is that we’ve built a civilization where noticing the most predictive variable for life outcomes is considered a faux pas at best and a moral failing at worst. It’s common now to explicitly state, in corporate policy or legislation, that specific classes of Bayesian evidence are "fruit of the poisonous tree" and have to be ignored.
Like many well-intentioned interventions that hinge on obscuring reality, it doesn't work: when you outlaw the most accurate map, people don’t stop navigating. They just buy worse maps.
In the ancestral environment, we didn't have the Raven’s Progressive Matrices (only the Modern Hunter Gatherer has had the privilege). We did, however, have proxies. We had vocabulary size, wit, speed of learning. In the modern environment, we’ve muddied the waters. We’ve got proxies that look like intelligence but are actually just measuring conformity to the upper-middle-class Blue Tribe aesthetic. We look at credentials, which are increasingly just a measure of "willingness to sit still and follow instructions for four years."
(Please note that I don't think that education is purely credentialism. This is a disease that affects midwits first and foremost. A bachelor's in physics is a stronger signal than one in sociology.)
So when we agree to pretend that "mental horsepower" is a myth, or that it’s entirely malleable via the Growth Mindset, we don’t actually create a flat hierarchy. We just create an opaque one. We create a hierarchy based on the performance of competence rather than the reality of capacity.
This leads to a specific kind of societal gaslighting. The narrative is that anyone can be a doctor or a quantum physicist if they just work hard and acquire knowledge. If someone fails to climb the ladder, the implication is that they stupidly chose not to grow their brain. They lacked grit. They lacked character. In a world where aptitude is taboo, failure’s always a moral deficit.
The obvious analogy is a world where the Paralympics and the "normal" Olympics are wrapped up into a single event. Anyone, including the disabled athletes, who points out that missing a leg makes one very unlikely to win in the 100m sprint is immediately walloped and then piled-upon, with cries of "the Science! Not-Murray et al clearly showed, in their landmark 1998 paper, that the number of remaining lower limbs had no relation to performance in the sleighing, shooting and arm wrestling categories. The 2012 Olympics were studied in great detail, and it was confirmed that Usain Bolt had the same number and arrangement of legs as his closest competitors."
When the person with no legs fails to win the 100m sprint, the commentators nod gravely and say, "He just didn't want it enough. He failed to manifest the victory."
I see three main mechanisms driving this, and why the average person (even the smart average person) underestimates the sheer weight of g.
The "All Men Are Created Equal" Overcorrection:
We’ve engaged in a concerted propaganda campaign for decades suggesting that because moral equality is mandatory, biological identity must also be mandatory. This makes it practically unthinkable to consider that it’s perfectly possible to construct a political system that grants equal rights to all citizens while acknowledging that not everyone can visualize a rotating 3D cube with equal fidelity. But biting that bullet feels dangerous to many. It feels like opening the door to old aristocracies. So instead, we pretend the differences don't exist, which inadvertently creates a new aristocracy of "people who know the right shibboleths."
The Bubble and the Range Restriction:
This is the big one. We live in intense cognitive bubbles.
If you’re reading this, you probably spend your life surrounded by people within one standard deviation of your own IQ. You went to a selective university. You work in a cognitive field. You live in a zip code priced for high-earners.
You can go days, maybe weeks, without having a meaningful conversation with someone significantly below an IQ of 100, or even anyone outside ±1 SD of your own.
This creates a statistical illusion known as range restriction. If you look at the NBA, height doesn't correlate very strongly with points scored. Everyone’s already tall; therefore, the variance in performance seems to come from practice, grit, and shooting form. If you looked at the general population, height would be the single biggest predictor of basketball ability (because the 5’5” guy isn't getting the rebound).
Imagine if the NBA had a rule that you couldn't mention height. You'd still need to pick players, so you'd start using proxies. You'd talk about "reach" and "wingspan" and "vertical leap." You'd notice that certain players had better "court vision" or "positional awareness." And all of these would be real skills, but they'd also be correlated with height, and height would still be determining who made it into the league in the first place. The difference is that now you'd be pretending you weren't noticing height at all, which would make you more likely to mistake the packaging for the product.
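The illusion is easy to reproduce. Here's a minimal simulation sketch in Python; all the coefficients are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# In the general population, height and scoring share a common cause,
# so they correlate strongly before any selection happens.
height = rng.normal(0.0, 1.0, n)
skill = rng.normal(0.0, 1.0, n)
scoring = 0.7 * height + 0.7 * skill + rng.normal(0.0, 0.3, n)

print(np.corrcoef(height, scoring)[0, 1])  # roughly 0.7

# "The NBA": keep only the tallest 1%. Within that restricted range,
# height barely predicts scoring any more; skill dominates the residual
# variance, even though height decided who got into the league at all.
tall = height > np.quantile(height, 0.99)
print(np.corrcoef(height[tall], scoring[tall])[0, 1])  # far weaker, ~0.2
```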
In your life as a student or a professional, you look around at your peers. You see that Alice is doing better than Bob. Since Alice and Bob are both roughly equally smart (they both made it this far), you attribute Alice’s success to her conscientiousness, her charm, or her work ethic. You conclude, erroneously, that "intelligence doesn't matter much, it’s all about hard work."
You don’t see the people who were both dumber and less hardworking than you; they’re in a different social class entirely. You rarely see the people who are simultaneously smarter and more hardworking; they’re running hedge funds or solving alignment theory and don’t hang out at the hospital cafeteria. These are far more likely to be acquaintances rather than peers.
The problem is when we generalize from this filtered view. We start believing that because IQ doesn't predict success within our bubble, it must not predict success period. We see a colleague who's a bit slower but works incredibly hard and does fine, and we extrapolate that to everyone. We forget that the slow colleague is still in the 85th percentile of the general population, and that the person in the 30th percentile who works just as hard isn't a doctor at all, they're doing something else, probably something that pays worse and doesn't sound as impressive on a Hinge profile (see proxies re-emerging?).
The Opacity of Alien Competence:
Some professions are more segregated than others. An ER physician arguably sees a more representative slice of humanity than a family medicine specialist, who in turn sees a more representative slice than a Google L10, though the exact ordering is debatable.
When a patient comes to see me, I’m performing a distinct ritual. I listen to symptoms, I peer at them significantly, I type things into a computer. To the patient, the output seems to appear from thin air. A sufficiently competent doctor makes it look like they never broke a sweat. A less experienced one (like me) makes up for it by projecting confidence and then panic-Googling the interaction effects of MAOIs in the bathroom later.
The patient can't judge my raw processing power. They can only judge my bedside manner, my clothes, and my degree on the wall. They’re judging the paint job.
When I'm on call and a patient is rambling in a disorganized way, using neologisms, showing thought disorder, I don't need to know their IQ score to recognize that something is cognitively wrong. But when I'm in the doctors' lounge, listening to two consultants argue about whether a patient's depressive symptoms are primarily biological or reactive, I also don't need IQ scores to know who's making the sharper arguments. The difference is that in the first case, we're allowed to talk about cognitive impairment. In the second case, we're allowed to talk about "clinical judgment" or "experience" or even "medical education," but we're not supposed to talk about the underlying mental horsepower that makes one doctor better at diagnostic reasoning than another.
(Ideally, the whole point of the medical education system and associated exams is to put the gatekeeping before patient contact. When you see an MD in the wild, you ideally want to not need to bother with asking for a transcript of their grades, nor fear that they don't know the safe limit for paracetamol. The reasons why this idyllic state of affairs doesn't hold are too lengthy to fit on this margin. Be smart about things.)
Conversely, I can't judge the competence of a senior software engineer. I know enough Python to automate a spreadsheet, but can I distinguish between a decently competent programmer and a 10x developer without referencing their credentials or status? Probably not.
This brings us back to the original shower thought. If we can't discuss the engine, we obsess over the trim.
In a "blue" environment where social class is ostensibly "over" and never discussed, it’s actually the primary sorting algorithm. We use heuristics like "do they speak with a Received Pronunciation or General American accent?" or "do they know the current correct terminology for this specific social justice concept?" to decide who’s smart.
This favors the people living and breathing inside the dominant culture. It favors the legacy admission who knows how to dress and speak. It subtly closes out the dissenting voice from the outsider who might have raw supercomputer-level processing power but speaks with a regional accent, wears ill-fitting clothes, and hasn't learned the subtle dance of feigning humility while signaling status.
Some professions, like programming, are comparatively more sane/honest. You can have a perfectly decent career in FAANG if you don't shower regularly and speak with a lisp, assuming you are actually good at your job. Hell, like the dude with the MLP (pony, not perceptron) resume, you can counter-signal by being incredibly eccentric. If you're still drawing a seven figure salary, then you're worth it.
Speaking very generally, I think society (a vague term, can't help it) needs to move in the direction of more meritocracy, not less. In practice, that looks like more standardized testing, with reduced focus on vibes.
The vibes are, paradoxically, easier to fake than the exam results:
Consider the standard critique of the SAT: "It just measures how rich your parents are."
This is statistically false. The correlation between SAT scores and parental income is real but moderate (around r = 0.4). But compare this to the correlation between parental income and "being captain of the fencing team" or "having a polished personal essay about your volunteer work in Peru" or "speaking with the correct sociolect during an interview."
The SAT is a partially gameable metric. "Holistic" traits are almost entirely gameable.
If you are a rich parent with a dim child, you cannot tutor them into a 1600 SAT. You can maybe get them from a 1000 to a 1150. But you can buy them a spot in a prestigious internship. You can hire a consultant to write their essays. You can ensure they have the "right" hobbies.
Since this is my soapbox, and you're presumably still reading this far, I also want to speak out against another approach towards modern fetishization of pseudo-meritocracy, or more accurately, aristocracy.
I should probably elaborate on the @2rafa position, since she's its strongest advocate (only advocate?). She's old money. My impression is not "my parents paid for Stanford" money, but "my great-grandfather's trust fund paid for Stanford's new rowing pavilion" money. She'll happily tell you that the problem with modern society isn't that we sort by intelligence, but that we pretend to sort by intelligence while actually sorting by a particularly vulgar form of striving that she finds aesthetically repulsive.
Her critique goes something like this: The modern PMC striver is a grotesque creature. The Tiger Mom. The résumé-padding, LinkedIn-posting, "crushing it" bugman who measures his self-worth in LeetCode problems solved and whether he made partner by 35. This, she argues, is what you get when you tell everyone that anyone can be elite if they just grind hard enough. You don't get equality. You get a soulless arms race of performative busyness, a cargo cult where the striver apes the surface of elite competence without acquiring the substance.
Better, she says, to return to a system where everyone knows their place. Where the sorting happens early, quietly, and permanently. Where you don't try to be elite, you simply are. Where excellence is demonstrated through effortless grace, not desperate optimization. The boarding school accent. The understated wardrobe that costs more than a car. The ability to discuss Proust and quarterbacks with equal nonchalance. The aristocratic assumption that if you have to ask, you're not one of us, and that's fine. It's a system that rewards being rather than becoming.
From her vantage, this is obviously superior. And I get the appeal. If you're already at the top of the heap by accident of birth, a system that freezes the heap in place is very comfortable. You don't have to worry about some brilliant kid from a refugee camp out-hustling your mediocre son for the last spot at Harvard Medical School. Your son's spot is secure, not because he's particularly gifted, but because he's yours. The system can quietly acknowledge his inherited position without anyone having to say the quiet part out loud. The "genteel" facade is the point, it transforms raw inherited advantage into a question of taste.
This is far from the worst approach. The elites are elites for a reason. That reason is often hereditary. Even total regime change and societal upheaval usually has the dispossessed children of nobility (or the elites) almost inevitably percolate back to the top. This is evident in data from places as far-flung as China, where the grandsons of pre-revolution landlords often outperform their peers. The previous status-quo was bearable, in some ways superior.
However, the aristocratic alternative often ends up parasitic on the very meritocratic machinery it despises. You still need high-g selection somewhere. You still need the engineers, the surgeons, the generals who can think. You can dress that up in tradition and patronage, you can recruit them as client talent, you can offer them a place in the court, but you cannot run a technical civilization on inherited gentility alone. At some point, reality reasserts itself, usually via catastrophe.
But here's where the rubber meets the road: I am that brilliant kid's son. My dad did out-hustle the mediocre sons of privilege. He clawed his way out of a refugee camp because somewhere, a grinding, impersonal system looked at his test scores and said: "This one. This one is worth plucking from the mass and polishing." He wasn't sorted by vibes. He was sorted by a standardized exam that didn't care about his accent, his hand-me-down clothes, or whether he knew which fork to use at a state dinner. It cared whether he could solve the problems in front of him, quickly and correctly.
The "genteel sorting" system that @2rafa prefers would have written him off before he started. He didn't have the right pedigree, the right consonants at the end of his name, the right summer internships. He had the wrong everything except the one thing that actually matters for medicine: the ability to hold a thousand variables in his head while making a decision under pressure. The exam caught that. The "holistic" process would have missed it, distracted by his lack of polish.
So yes, I have a personal bias. I believe in meritocracy because meritocracy is the only reason I'm here, writing this, instead of hauling crates in a warehouse or pulling weeds out of a farm. But my bias aligns with a principle: if we're going to have sorting, and we are, because complex societies require it, then let the sorting be honest. Let it be based on the thing that actually predicts performance, not the cultural markers that predict comfort for the existing elite.
The aristocratic approach pretends it's avoiding Goodhart's Law by refusing to articulate its metrics. But the metric is still there: it's called lineage. It's just a metric that can't be improved upon, only inherited. And while it's true that modern meritocracy is imperfect and partially gamed (that's what this entire post is about), the solution isn't to replace an imperfect but theoretically climbable ladder with a walled garden whose keys are handed out at birth.
The modern PMC striver is indeed a pathetic figure in many ways. But he's pathetic because he's been lied to. He's been told that credentials are everything, then handed a system where credentials are increasingly just proxies for the ability to acquire credentials. He's been told that growth mindset will make him a doctor, then sorted by an IQ test disguised as the MCAT. His sin isn't striving. His sin is believing the official story, and optimizing for the proxies rather than the underlying reality.
@2rafa's genteel system doesn't fix this. It just makes the proxies even more opaque and even more heritable. It replaces the MCAT with the recommendation letter from your Exeter headmaster. It replaces the LeetCode grind with the unspoken assumption that of course you'll summer at the firm your father's college roommate runs. It removes the last remaining pressure points where someone like my father could punch through.
What we need isn't a return to aristocracy. It's a return to honesty. Acknowledge that g exists, that it matters, and that it's largely heritable. Then build a system that finds the people who have it, wherever they are, however they present. Make the exam harder to game, not easier. Make the credentials less important, not more. And stop pretending that the alternative to vulgar striving is egalitarianism. It's very much not. The alternative is feudalism with better manners.
Sadly, none of this particularly matters in the long run. The AI will "meritocratically" take your job, and will eventually do it better than you can imagine. My kids aren't going to college. Yours probably won't either. I find that reassuring, in some ways; short timelines taken seriously make a lot of squabbling moot. You can stop running so damn hard, the winner has a jetpack. Isn't that oh, so reassuring?
"Exploding" is a shorthand for unwarranted errors/catastrophic failure due to criminally negligent quality control.
Plenty of people die in car crashes, but it's much rarer for the death or accident to be due to manufacturer error. The cars? They're almost always fine.
This is in large part because cars are highly regulated, and the mechanical failures that do occur are deemed "acceptable" or at least "expected." The government sets safety standards, the manufacturer meets them, and any failure after that carries reduced liability. If we think the death toll or damage is too high, then we have the option of swallowing the cost of stricter standards.
Your critique would stand if car crashes were overwhelmingly due to the car falling apart (or exploding) instead of humans being good but not infallible when it comes to operating multi-ton steel vehicles at speeds rarely seen in the ancestral environment.
Watchmen.
It kicked off a copycat trend of deconstructions of the genre, but unlike most of them, it was an actually good movie. Just don't watch the Director's Cut; the comic-book scenes add little or nothing to the story.
Being barred out can be obvious, but only when someone is fucked. Otherwise it's similar in presentation to someone being drunk: slurring, swaying, staring at you blankly, taking ages to respond. But less agitated (usually), not flushed. If you can't smell alcohol coming off someone like that, benzos are the safe bet.
@atelier I can't find a detailed review either, so it's up in the air if I've written one.
The TLDR is that Phoenix Point is incredibly mid. It's just teetering on the edge of worth playing.
The main issue is wasted potential. The ideas behind the mechanics are excellent: it's a simulationist approach, closer to the original XCOM than the new crop, but in 3D.
I particularly enjoy the ballistic simulation, since the RNG simplification of XCOM always slightly annoyed me.
But that's really all it has going for it. The gameplay was grindy, often unfun. The content didn't feel as diverse or interesting as XCOM. The story was so-so.
The balancing wasn't great, but don't listen to me on that now, because the devs implemented a community patch by a group of popular modders that redid the progression. I've seen people claim it's much better.
I'd say I like the idea of Phoenix Point much more than actually playing it. I was an obsessive before release, but Julian over-promised and under-delivered.
Nano Banana or GPT Image are perfectly capable of ingesting reference images of entirely novel characters and then placing them idiomatically in an entirely new context. It's as simple as uploading the image(s) and asking the model to transfer the character over. In the old days of 2023, you'd have to futz around fine-tuning Stable Diffusion to get far worse results.
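If you'd rather script it than click through a web UI, the same workflow is a few lines against the google-genai Python SDK. A rough sketch; the model ID (my guess at the current "Nano Banana" identifier), file names, and prompt are assumptions, so check the live docs:

```python
# Character transfer from a reference image: upload the reference,
# ask for the same character in a new scene, save the returned image.
from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder

reference = Image.open("my_character.png")  # hypothetical reference sheet
response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # assumed "Nano Banana" model ID
    contents=[
        reference,
        "Place this exact character, unchanged, in a rainy cyberpunk alley.",
    ],
)

# The response mixes text and image parts; keep the first image returned.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("transferred.png", "wb") as f:
            f.write(part.inline_data.data)
        break
```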