self_made_human
amaratvaṃ prāpnuhi, athavā yatamāno mṛtyum āpnuhi
I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken as granted for someone in the profession of saving lives.
At any rate, I intend to live forever or die trying. See you at Heat Death!
Friends:
A friend to everyone is a friend to no one.
For your pleasure:
https://youtube.com/watch?v=-h7BOxN-qRc
(The longer version was sadly hit by a guided copyright strike)
But no, it's a totally legit songbook number, going from an Adirondacks campfire song to a Broadway show in 1927 to a Fred Astaire film. But the joy of discovering that was ruined, because I was too busy worrying whether I was a dope falling for AI slop.
Sigh. I suppose this strengthens my usual point:
Stop worrying about "AI slop". You often won't be able to tell if it's AI, and it will, inevitably, become less sloppish. Dare I say, even good. Sure, you want to be able to tell if your Tinder date is catfishing you, or if the email or invoice you've received is real. It's of utmost importance to know if there's an epidemic of obese, suicidal women throwing boulders on bridges in China.
But for everything else? A drawing is a drawing dawg, a song either sounds good or it doesn't. A work of fiction is no better or worse if meat machines or silicon wrote it (for the same assortment of letters in the same order).
This way lies zen, and the ability to happily partake in the post-scarcity supply side of the attention economy.
Of course, if you do genuinely value human authorship for reasons such as "soul" or the inability of machines to feel emotion, then good luck. You're going to need it.
I also acknowledge I like your writing, I think it's some of the most consistent and interesting posting here. I also think you are a much better writer than me, so if that's your standard for receiving feedback feel free to just ignore the rest.
My apologies. I was very annoyed, for what I hope were understandable reasons. I'm happy to accept feedback when it's not framed as a personal attack alongside, IMO, very poor justification. I'm happy to hear what you have to say!
All that being said. It is uncanny, I have more than once in the last week been interacting with ChatGPT and thought "This could just as well be a Mechanical Turk and @self_made_human is on the other side." It's not just the use of bullet points, it's your tone, word choice, argument structure. It's not just the use of markdown, it's an extremely machine-like choice of formatting.
Hmm.
The thing is, markdown is cool and incredibly powerful. LLM chatbots like ChatGPT (that aren't base models) are under heavy selection pressure to conform to human preferences. That means a convergence to certain norms, because that's what the average user or RLHF monkey prefers! Headlines, emphasis, bullet points, em-dashes — they're all useful. They make text more legible and help it flow better.
In other words, I've come to appreciate the benefits of writing in a certain structure. I personally prefer it, I think the majority do too (by revealed preference), and it strikes some people as AI-like. The last bit is an unfortunate side effect.
(I would say a bigger influence is Scott. I'm a fanboy, and his advice is solid)
I'm not sure what you mean by a change in tone or word choice, though I make an intentional effort to be less acerbic these days.
However:
I do use AI, sometimes! I've never tried to hide it, or deny its influence when anyone asks. That does not mean that any of my posts are written by AI. I use LLMs for research, fact checking, proof-reading and editorial purposes.
That usually entails writing a draft, then submitting it to an LLM for advice or critique, which I may or may not use.
I think this is entirely above board, and I champion its use. It is categorically not the same as throwing a prompt into a box and then getting the AI to do the heavy lifting. The AI is an editor, not a ghostwriter.
Do you honestly not think your writing style has changed at all over the course of three years? I think it would be extraordinarily unlikely that someone's writing style does not change at all over the course of years in their 20s. If you acknowledge your style has changed, is your claim that its direction is away from LLM style?
Precisely the opposite. My style has changed, for what I think is the better. I'd hope so, given that I must have written like 1-5 million words in between, including a novel. It has also become more LLM-like, but that is because I like some of the things LLMs do, and not because I'm replaced by an LLM. Case in point, I've never had anyone accuse me of including unsourced or inaccurate information, even when they're criticizing my style, because it's a point of pride that I always review anything an LLM tells me.
When I said:
I've always written like this. You're welcome to trawl my profile back to the days when LLMs were largely useless, and you'll find the same results.
I mean that that specific comment had zero AI in it, and is of a style that strikes me as self_made_human from a few years back, as raw as it gets. It was quickly jotted off, with none of the usual revisions or editing passes I make a point of doing manually. It is as me as it gets, and wouldn't be out of place three years back. It lacks the effort and polish I aspire to today.
Hell, I was doubly mad because I made an intentional effort not to succumb to just telling him to ask ChatGPT (which would have given him excellent advice on a topic as done to death as this one), since he clearly wanted a more personal touch. I didn't even ask ChatGPT to write boilerplate that I could have theoretically co-opted as my own. I saw the comment, thought, hey, I'm actually studying NICE guidance on initiating and managing antidepressant usage, and decided to just scribble down my understanding of best practice. I am, after all, mostly a shrink, even if I've got more shrinking to do.
So, here I am, providing what I hope is accurate and helpful advice, the old-fashioned way, and someone comes along and starts shit. I might be a moderator, but I have my limits. Anyone calling me a "slopmonger" can fuck right off. As this current example of discourse demonstrates, I am more than happy to be civil and take pains to explain myself if the other person extends me the same courtesy. I appreciate that you have.
https://www.themotte.org/post/2368/culture-war-roundup-for-the-week/354239?context=8#context
Here is a thread outlining my stance towards prior accusations of AI usage, where I am perfectly happy to acknowledge that I have used it (when I've actually used it). You'll notice that I've spent a great deal of time explaining the same thing to jkf in good faith, in an attempt to convince him of the merits of my stance. That hasn't worked, and I am offended by new accusations when the evidence on display is very clearly not AI. It's like someone going around with a loudspeaker telling people I'm a sex offender, when the rap was for public urination while drunk. Even if it was technically correct (it wasn't here), I have little energy to spare to have this argument again.
Alternatively, this:
https://www.themotte.org/post/2368/culture-war-roundup-for-the-week/354252?context=8#context
This strikes me as quite distasteful. It reads as someone being upset they got some criticism, then deciding to use their mod powers to make an ad hominem attack rather than ignoring or addressing the criticism. If you really don't care what the lesser writers here think of your style, why bother to dig through the mod log?
I don't think opening the moderation log is an abuse of mod power in any meaningful sense. Moderation actions are public, anyone can see them on the sidebar. The panel only shows me the ones linked to a specific user. I didn't slap him with a ban, or start a fight. Moderators are only human, mea culpa.
If he's going to call me a slopmonger, when I think I've got more than enough evidence of engagement (presumably high quality, though everyone is at liberty to form their own opinions, I'm not your dad, I think), then I feel I'm within my rights to point out that he has almost nothing to his name, and what he does have is negative. It's genuinely impressive to have been here so long and still achieve so little. Both lurking moar and engaging less are valid options.
And I hope that I have demonstrated, to your satisfaction, that I am usually open to criticism, and have, in fact, had this same conversation with him in the past.
Bruh. Let me summon @Throwaway05:
- Do you think it's possible to make recommendations or suggestions about antidepressant usage without heavily stressing the importance of a full physical to rule out medical causes for low mood?
- Do you think the information given by OP was sufficient to make a clinical recommendation beyond the most universally applicable points?
I strongly suspect he's going to back me up there. Since the facts aren't really in dispute, all that's left is finding a certain string of letters to convey the message. I see nothing "AI" about my choice of phrasing, that's just... normal writing. It's oodles less formal than what I might go for in an actual effortpost, because it was smashed out in 5 minutes in the middle of a study session.
Given that the 'best' "AI detectors" don't think it's AI at all, I'd be very curious to know what stylistic tells you imagine you see, and to see an effort to compare them with my earliest writing. But I'm not going to bother; I've already put in more than sufficient effort, and I am generally honest about using LLMs, if and when I do use them.
Dude, I've been on here... I don't remember actually, but a long time before I saw you show up.
I'll save you the bother. We've both been on themotte.org since September 2022. I've been a user of /r/TheMotte since just after it split off from the CWR thread on /r/SSC.
And in the span of 3 years, the only notable events in your mod log are two warnings. Not a single AAQC, and people stumble into those by accident. I'll welcome your criticism about my writing style when you write something that impresses me first. Or even impresses anyone; I don't select the nominees, those are chosen largely on the basis of popular opinion. It takes as little as one person hitting report.
When someone like @Amadan or @2rafa or @phailyoor or.... criticizes my writing style or my very limited use of AI (in this case, exactly zero), I listen. When I didn't even use the damn thing, I'm not going to care very much about your unfounded concerns. If you don't like the self_made_human house style, you're entirely at liberty to not read it.
If Bryan isn't a drooling senile mess at 120, then he's probably benefited from some kind of drug that rejuvenates the brain and restores neuroplasticity too. Taking LSD or shrooms helps with that today, even if it's not going to cure dementia.
Wow.
I guess we have to expand the taxonomy of LLM psychosis, to account for people so paranoid/blind that they see AI the moment someone bothers to use markdown formatting. If bullet points are all it takes to set you off, then one to the brain is probably the best possible cure.
I've always written like this. You're welcome to trawl my profile back to the days when LLMs were largely useless, and you'll find the same results.
And, for what it's worth, that comment was hastily typed out while in the midst of studying actual notes on antidepressant prescription according to UK guidance. You just can't win.
Guess what? The LLMs have read the same literature. There isn't much room to put some kind of unique human spin on the basics of choosing and switching between antidepressants. If ChatGPT had written it for me, it would have been thrice as long, and probably more comprehensive. In which case, I am flattered to be mistaken for it.
How do AI artists deal with preserving character details from image to image? It seems to me this is even more important for furry art (various fur patterns must be harder to reproduce correctly than "black hair, pixie cut").
Nano Banana or GPT Image are perfectly capable of ingesting reference images of entirely novel characters, and then just placing them idiomatically in an entirely new context. It's as simple as uploading the image(s) and asking it to transfer the character over. In the old days of 2023, you'd have to futz around fine-tuning Stable Diffusion to get far worse results.
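For anyone who'd rather script this than click around a chat UI, here's a minimal sketch of the reference-image workflow, assuming the OpenAI Python SDK and its gpt-image-1 image-edit endpoint (the filenames and prompt are placeholders I've made up; Nano Banana exposes an analogous reference-image flow through the Gemini API):

```python
# Rough sketch: hand the model a reference image of the character and ask it
# to redraw that same character in a new scene. Not a definitive recipe,
# just the shape of the API call as I understand it.
import base64
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

with open("character_reference.png", "rb") as ref:  # hypothetical filename
    result = client.images.edit(
        model="gpt-image-1",
        image=ref,
        prompt=(
            "Keep this exact character, including fur pattern, markings and "
            "eye colour, and place them in a rainy neon-lit street at night."
        ),
    )

# gpt-image-1 returns base64-encoded image data
with open("character_in_new_scene.png", "wb") as out:
    out.write(base64.b64decode(result.data[0].b64_json))
```

The point being that the consistency work happens in the prompt and the reference image, not in any fine-tuning step.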
I've done my time with Stable Diffusion, from the closed alpha to a local instance running on my PC.
Dedicated image models, or at least pure diffusion ones, are dead. Nano Banana does just about everything I need. If I were anal about the drop in resolution, I'd find a pirate copy of Photoshop and stitch it together myself; I'm sure you can work around it by feeding crops into NB and trusting they'll align.
All of the fancy pose tools like ControlNet are obsolete. You can just throw style and pose references at the LLM and it'll figure it out.
I suppose they might have niche utility when creating a large, highly detailed composition, but the pain is genuinely not worth it unless you absolutely must have that.
I wanted to write a post about some of these events, specifically the change in attitude among titans of industry like Linus Torvalds and Terence Tao. I'm no programmer, but I like to peer over their shoulders, and I know enough to find it profoundly disorienting to see the creator of Linux, a man whose reputation for code quality involves tearing strips off people for minor whitespace violations, admit to vibe-coding with an LLM.
Torvalds and Tao are as close to gods as you can get in their respective fields. If they're deriving clear utility from using AI in their spheres, then anyone who claims that the tools are useless really ought to acknowledge the severe Skill Issue on display. It's one thing for a concept artist on Twitter to complain about the soul of art. It is quite another for a Fields Medalist to shrug and say, "Actually, this machine is helpful."
Fortunately, people who actually claim that LLMs are entirely useless are becoming rare these days. The goalposts have shifted with such velocity that they've undergone a redshift. We've moved rapidly from "it can't do the thing" to "it does the thing, but it's derivative slop" to "it does the thing expertly, but it uses too much water." The detractors have been more than replaced by those who latch onto both actual issues (electricity use, at least until the grid expands) and utter non-issues to justify their aesthetic distaste.
But I'm tired, boss.
I'm sick of winning, or at least of being right. There's little satisfaction to be had in predicting the sharks in the water when I'm treading that same water with the rest of you. I look at the examples in the OP, like the cancelled light novel or the fake pop star, and I don't see a resistance holding the line. I see a series of retreats. Not even particularly dignified ones.
First they ignore you, then they laugh at you, then they fight you, then you win.
Ah, the irony of me being about to misattribute this quote to Gandhi, only to be corrected by the dumb bot Google uses for search results. And AI supposedly spreads misinformation. It turns out that the "stochastic parrot" is sometimes better at fact-checking than the human memory.
Unfortunately, having a lower Brier score, while good for the ego, doesn't significantly ameliorate my anxiety regarding my own job, career, and general future. Predicting the avalanche doesn't stop the snow. And who knows, maybe things will plateau at a level that is somehow not catastrophic for human employability or control over the future. We might well be approaching the former today, and certain fields are fucked already. Just ask the translators, or the concept artists at Larian who are now "polishing" placeholder assets that never quite get replaced (and some of the bigger companies, like Activision, use AI wherever they can get away with it, and don't seem to particularly give a fuck when caught out). Unfortunately, wishing my detractors were correct isn't the same as making them correct. Their track record is worse than mine.
The TEGAKI example is... chef's kiss. Behold! I present a site dedicated to "Hand-drawn only," a digital fortress for the human spirit, explicitly banning generative AI. And how is this fortress built? With Cursor, Claude, and CodeRabbit.
(Everyone wants to automate every job that's not their own, and perhaps even that if nobody else notices. Guess what, chucklefuck? Everyone else feels the same, and that includes your boss.)
To the question "To which tribe shall the gift of AI fall?", the answer is "Mu." The tribes may rally around flags of "AI" and "Anti-AI," but that doesn't actually tell you whether they're using it. It only tells you whether they admit it. We're in a situation where the anti-AI platform is built by AI, presumably because the human developers wanted to save time so they could build their anti-AI platform faster. This is the Moloch trap in a nutshell, clamped around your nuts. You can hate the tool, but if the tool lets your competitor (or your own development team) move twice as fast, you will use the tool.
We are currently in the frog-boiling phase of AI adoption. Even normies get use out of the tools, and if they happen to live under a rock, they have it shoved down their throats. It's on YouTube, it's consuming TikTok and Instagram, it's on the damn news every other day. It's in your homework, it's in the emails you receive, it's you double checking your prescription and asking ChatGPT to explain the funny magic words because your doctor (me, hypothetically) was too busy typing notes into an Epic system designed by sadists to explain the side effects of Sertraline in detail.
To the extent that it is helpful, and not misleading, to imagine the story of the world to have a genre: science fiction won. We spent decades arguing about whether strong AI was possible, whether computers could be creative, whether the Chinese Room argument held water. The universe looked at our philosophical debates and dropped a several trillion parameter model on our heads.
The only question left is the sub-genre.
Are we heading for the outcome where we become solar-punks with a Dyson swarm, leveraging our new alien intelligences to fix the climate and solve the Riemann Hypothesis? Or are we barrelling toward a cyberpunk dystopia with a Dyson swarm, where the rich have Omni-sapients in their pockets while the rest of us scrape by in the ruins of the creative economy, generating training data for a credit? Or perhaps we are the lucky denizens of a Fully Automated Luxury Space Commune with optional homosexuality (but mandatory Dyson swarms)?
(I've left out the very real possibility of human extinction. Don't worry, the swarm didn't go anywhere.)
The TEGAKI example suggests the middle path is most likely, at least for a few years (and the "middle" would have been ridiculous scifi a decade back). A world where we loudly proclaim our purity while quietly outsourcing the heavy lifting to the machine. We'll ban AI art while using AI to build the ban-hammer. We'll mock the "slop" while reading AI summaries of the news. We'll claim superiority over the machine right up until the moment it politely corrects our Gandhi quotes and writes the Linux kernel better than we can.
I used to think my willingness to embrace these tools gave me an edge, a way to stay ahead of the curve. Now I suspect it just means I'll be the first one to realize when the curve has become a vertical wall.
Thanks!
I feel like someone might have answered this already, but I'm too lazy to look it up:
As someone who is curious about Gundam, where do I start?
I've always raised an eyebrow at this advice. Speaking for myself, I've never felt that photography distracted me from being "in the moment." If I'm visiting a nice place, I'm going to whip out my phone, take as many photos as I please, and then use my Mk. 1 human eyeballs. I don't perceive events entirely through a viewfinder.
And I notice that my memory of events is significantly enhanced by photos. I've forgotten a ton of things until a picture either brought the memories back or let me reconstruct them.
You would have to have a very pathological attachment to a camera for taking photos at the frequency of a normal 21st century human to be detrimental.
You need a psychiatrist. I am only two-thirds of one, but fortunately for you, I've got exams and that means actually reading some of the papers.
(Please see an actual psychiatrist)
The choice of initial antidepressant is often a tossup between adherence to official guidelines, clinical judgements based on activity profile and potential side effects, and a dialogue with the patient.
In short? It is usually not very helpful to worry too hard about the first drug. They're roughly equally effective (and where one is superior, it's by a very slim margin). But in certain situations:
- Can't sleep? Lost appetite? Mirtazapine
- Too sleepy? Already gaining weight? Absolutely not mirtazapine, consider bupropion or vortioxetine
- Afraid of sexual side effects? Bupropion or vortioxetine again, mirtazapine too
- Tried an SSRI and it didn't help? It's better to try a different class of antidepressant instead of just another SSRI, and so on.
(But before the meds, a physical checkup is mandatory, as are investigations to rule out medical causes. You're going to feel depressed if your thyroid isn't working, or if you've got Cushing's.)
- Antidepressants work. They beat placebo, but not by a massive margin.
- Effects are synergistic with therapy.
Unfortunately, you haven't given me enough information to make an informed choice. I'd need to know about the severity of your depression, graded based on symptoms, lifestyle, overall health and a bunch of other things. Hopefully your actual doctor will do their due diligence.
I would be the last person to claim that conscientiousness is unimportant. ADHD sucks.
But I can take a pill to improve my conscientiousness, and I can't take one that moves my IQ in a positive direction. So it is not nearly as harsh a constraint.
Just ask them for sources? You can also share output between multiple models and see their points of agreement or contention.
I know that OpenRouter lets you use multiple models in parallel, but I suspect a proper parallel orchestration framework is most likely to be found in programs like OpenCode, Aider, Antigravity etc.
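If you want to do that comparison without copy-pasting between browser tabs, here's a minimal sketch in Python, assuming OpenRouter's OpenAI-compatible endpoint (the model slugs are just examples; swap in whatever you actually use):

```python
# Sketch: send the same question to several models through OpenRouter and
# print the answers side by side, so you can eyeball agreement and contention.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

question = "Summarise the main arguments for and against X, with sources."
models = ["openai/gpt-4o", "anthropic/claude-3.5-sonnet"]  # example slugs

answers = {}
for model in models:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    answers[model] = resp.choices[0].message.content

for model, answer in answers.items():
    print(f"\n=== {model} ===\n{answer}")
```

That's sequential rather than truly parallel, mind you; the orchestration frameworks mentioned above handle the fan-out and the cross-checking for you.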
Hmm? ChatGPT can definitely use web search when in thinking mode. I get links and citations by default, and more if I ask. You might want to check personalization to make sure you haven't set search to off by default.
Manic people are often happy even as they're starving to death. But being happy while being subjected to genocide isn't the default state; that isn't just postulating a hedonic treadmill, it's setting it to overdrive in reverse.
Naming a few intellectuals isn't a very strong argument.
It helps to throw away the entire concept of pre-med and just have entrance exams to study medicine in university that test both relevant biology knowledge (to be self-studied from common reference book(s)) and requisite math and physics ability. It's not perfect, but it's better than just sailing in with a high IQ score or using some utterly bullshit proxy like a freeform essay or having the right after-school activities.
Say what you will about the inadequacies of the British and Indian medical pipeline, but this is a rather uniquely American stupidity. Pre-med offers nothing that just moving the MCAT forward wouldn't, and wastes several years of your youth on a degree that you likely won't use.
I gave an exam straight out of high school, and that was that. I'll say less about everything that followed.
My impression is that historical nobility had a lot of status anxiety too! Not just status, but plain finances to boot.
We're used to the economy consistently growing, at a pace legible to human perception. This is a historical anomaly; it has held in the West for maybe 400 years, and for mere decades in other places.
Before this, it was very difficult to grow the pie. You were more concerned about slicing it up such that the children didn't starve. Look at the practice of primogeniture, or sending second sons to the navy. The family farm or even ducal holdings never seem to multiply, and if you slice them too fine, you'll be nobility in name alone.
This isn't the case any more! A smart parent, in the 20th century, could start saving and making sensible investments. You can do very well by your kids even if they turn out to be one of the dimmer bulbs in the shed.
While people may feel anxious today, perhaps even more so than their forebears, that's vibes, not an assessment of facts or historical reality. Compound interest is a helluva drug, and might even be a better investment than sending your daughter to be an art-ho in Bushwick. The typical worst case scenario is them ending up on SNAP, not starving to death, as might easily have been the case in the past.
I can hardly predict the next decade with confidence, but I believe that money makes everything easier.
I'm counting the days till a patient socks me in the face. It'll save money on a nose-job.
Funnily enough, I've never seen restraints in use on the ward, mostly because my placements have skewed geriatric, and there's only so much damage a delirious granny can do with a plastic spoon. That is not the same as that being the ideal level of restraint usage; it boggles my mind how much shit UK doctors, nurses and hospitals put up with. I acknowledge that emergency sedation isn't perfectly safe, but neither is tolerating violence and agitation to the point where sedation is necessary. I haven't been offered any hazard pay, and I've had to patch up broken noses more than once.
I don't think I disagree. Competence is the most important thing, but it is also devilishly hard to pin down. That only gets harder when you need someone to demonstrate their competence before they get the job.
(And then you see person specifications asking for 5 years of experience in some React-knockoff that's only been out 2 years)
Unfortunately, there is often a massive, unavoidable delay between training for a job and getting the job. We want to know if someone will be a good surgeon before they hold a scalpel. How would you check whether a 17-year-old pre-med student will make a good neurosurgeon when he won't do any neurosurgery for another 10 years?
That brings me back to the point that intelligence really is our most robust proxy. It's one of the few things in the psychometric literature that has resisted the replication crisis. It is still a proxy, and thus imperfect, but like democracy, it's the worst option except for all the others. If you want to go back to work-experience and trainability, we're going to need a lot more apprenticeships or internships. Those are much harder to scale than standardized tests.
Just picking out this particular area of your comment, it amazes me when intelligent people (like you)
Thanks :*
actually repeat this odd myth in the year of our lord 2025. What you're talking about as "sane" in the programming profession is exactly what you're decrying elsewhere as opaque, vibes-based sorting. Programmers acting like slobs might have been a rebellion against corporate life, or reflected a genuine lack of interest in social norms, twenty or thirty or fifty years ago. Today, it reflects precisely the opposite: tech-bros compete over who can performatively display their slobbery and betrayal of social norms as evidence of their talent. When professors and executives wore suits, choosing to wear a t-shirt meant something. Today, it is just another form of cultural signaling.
I am happy to accept that any legible marker for competence (or perceived competence) will be eventually gamed. It's not like turbo-autists are particularly good at gatekeeping or status games. Normies beat autists, normies are beaten by sociopaths, who are in turn kept in check by autists.
I'm familiar with SBF's performative actions. However, I still think it's clear that genuine eccentricity is better tolerated in programming circles; fursuits, blahajs and programming socks are simply more prevalent there.
In other words, I think it's simultaneously true that the world of computers has a higher tolerance for off-kilter behavior and that a significant number of people insincerely adopt that culture as a costume!
I'm sure HR and management would prefer someone with people skills who looks presentable, all else being equal. But the sheer tolerance is nigh unprecedented! You'd have to descend to the back of the kitchen with the line cooks before "is warm body" and "can do job" become the prevailing concerns.
Why the initial tolerance? The usual theories that struck me as plausible included a high prevalence of autistic traits, a less client-facing environment, and comparatively legible performance metrics. If you have a code goblin, then the additional latency from running fiber to their segregated basement is worth it. You didn't hire them for their good looks.
But you're right that this creates its own failure mode. When the signal becomes "looking like you don't care about signals," you get poseurs who carefully cultivate dishevelment. The difference, I'd argue, is one of substitutability and testing under load.
In a truly vibes-based profession (consulting, say, or certain flavors of academic humanities), the poseur can coast indefinitely. There's no moment where the rubber meets the road and reveals that beneath the performance there's nothing there. Your PowerPoint looks good, your references are impeccable, and by the time the strategy fails, you've moved on to the next gig.
In programming, the compile button doesn't care about your aesthetic. The production system either works or it doesn't.* Yes, you can hide in a sufficiently large organization, you can take credit for others' work, you can fake it in meetings. But there's still a baseline floor of actual competence required. SBF could fool VCs with his League of Legends schtick, but he still needed actual programmers to build FTX. The fraud wasn't "Sam can't code," it was "Sam is embezzling customer funds." His technical team was apparently quite capable.
The point isn't that programming is immune to status games or that all programmers are autistic savants who only care about code quality. The point is that programming preserves a direct link between competence and output that many other professions have severed. You can fake the culture, but you can't fake the merge request. Well, you can try, but eventually someone has to read your code.
This makes programming comparatively more meritocratic, not perfectly meritocratic. The SBF types are gaming a second-order effect (convincing investors and managers that they're geniuses), but the underlying infrastructure still requires first-order competence (actually building the thing). In contrast, in fully vibes-captured professions, you can game all the way down. There is no compile button. There is no production server that crashes. There's just more vibes, turtles all the way down.
Your point about aristocratic standards being more legible is well-taken, though. Knowing which fork to use is indeed trainable in a way that "act naturally eccentric" is not. But here's where I think we diverge: aristocratic standards are more gameable by the wealthy precisely because they're so trainable. If you have money, you can buy the suit, hire the etiquette coach, send your kid to the right boarding school. What you can't buy (as easily) is the ability to pass a hard technical exam.
The ideal isn't "no standards" or "eccentric standards." The ideal is "standards that correlate maximally with the thing you're actually trying to measure, while being minimally gameable by irrelevant advantages." Standardized testing, for all its flaws, does this better than holistic admissions. A programming interview with live coding, for all its flaws, does this better than "did you summer at the right firm."
The clothing and manners debate is orthogonal to the core question of sorting. I don't particularly care if our elites wear suits or hoodies, as long as we're selecting them for the right reasons. My objection to aristocratic sorting isn't the aesthetics, it's the inefficiency. If your system selects for people who know which fork to use, and knowing which fork to use happens to correlate 0.7 with having rich parents but only 0.2 with job performance, you've built an inherited oligarchy with extra steps.
*I am aware of concerns such as code readability, good practices such as documentation, and the headaches of spaghetti code. But programming is still way closer to the metal than most other professions.
Basically, leftists have a cognitohazard blind spot on this topic because if they allow themselves to even consider biological inequality then the superstructure of their belief system goes right back to the stuff of nightmares.
Hmm? I don't mean to accuse you of burying the lede, but the most prominent example of eugenics in living memory would be the Nazis. They were European, they were less than left wing, and they practiced both positive and negative eugenics. More Aryan Uber-babies with three blue eyes (more is better), fewer gypsies and schizophrenics.
The Right is hardly over its own hangups in that department.
All well and good so long as we remember that "Merit" as measured by IQ is just the ability to do well in school and learn complicated things. It is not some end, just a talent like hand-eye coordination.
Just "learn complicated things"?
I'm afraid the "just" is doing a lot of heavy lifting! We live in a dazzlingly complex world, it's been several centuries since even the most talented person could have understood every facet of modern civilization and technology. Even Neumann and Tau would die of old age before becoming true polymaths.
IQ is strongly correlated to a ton of good things, moderately correlated to a tonne more of other good things, and then weakly correlated with the metric fuck-ton of everything left. Income, physical and mental health, job performance! Even beauty is weakly correlated (so much for the Halo effect as a true fallacy). There are few things that can be tested as cheaply and easily while offering as much signal for the downstream traits we care about.
A quadrillion IQ brain floating in the void isn't worth very much, but we were never talking about intelligence in isolation. If grip strength was the defining factor for success in life, I'd be working on my handshake right now.
Funny story. Do you know why I made that effort?
A guilty pleasure of mine is to copy and paste entire pages of my profile into an LLM and ask for a summary/user profile (without telling them I'm the user in question). When I first started, maybe a year or so back, I noticed that the models would regularly call me acerbic and prone to cutting humor, even when they happily acknowledged the positives.
I thought about it, and decided, huh, it might be worth an effort to intentionally tone it down myself. If it's not obvious, I adore Scott, and he is probably so mild-mannered that his toddlers walk all over him.
(Oh, wait.)
So I decided, hey, it's worth trying to be nicer, even though I do not suffer fools gladly. Or perhaps I'm getting old, and realizing that yelling at people on the internet is of little utility and only raises my blood pressure.