@self_made_human's banner

self_made_human

Grippy socks, grippy box

16 followers   follows 0 users  
joined 2022 September 05 05:31:00 UTC

I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken for granted for someone in the profession of saving lives.

At any rate, I intend to live forever or die trying. See you at Heat Death!

Friends:

A friend to everyone is a friend to no one.

User ID: 454

Raising the Price of Admission

I find myself immensely frustrated by Trump's recent moves to cut down on immigration, especially replacing the EB-5 with his new golden ticket scheme.

I've always wanted to move to the States, but by virtue of being Indian, and in a profession with strict regulatory requirements, it was never easy. As of right now, I couldn't sit for the USMLE even if I wanted to. I believe that's a problem my uni could solve, but unfortunately I'm locked into the UK for at least 3 more years and don't have the time to breathe down their necks.

If I wanted to spend $1 million for the old EB-5, I'd probably have to sell a significant fraction of my familial assets, and they're not mine yet; I have a sibling and parents to think of. The fact that we even have that much, when my father made $50k at the peak of his career as an OB-GYN surgeon, represents a lifetime of my parents being frugal and living beneath their means. My dad started out from scratch, a penniless refugee, and all his life he worked tirelessly to make sure his kids wouldn't have to work as hard as he did. To a degree, he's succeeded. I make nearly as much as he does, but that's by virtue of grinding my ass off to escape India. I had to settle for the UK, whereas I'd much rather be in the States.

The EB-5 program already functioned as a high barrier to entry, requiring not just capital but also the ability to invest in ways that met the job-creation criteria. By raising the price to $5 million, the U.S. is effectively signaling that it no longer wants "entrepreneurial upper-middle-class" immigrants - it only wants the ultra-wealthy. The problem is that the truly ultra-wealthy already have multiple options. The US is nearly unique in taxing its citizens on their worldwide income, and has heavier taxes overall than some of the alternatives. They can buy citizenship in other countries (Malta, St. Kitts, etc.), take advantage of residence-by-investment programs in the EU, or just maintain an arsenal of visas that lets them live anywhere they please. The U.S. loses out on exactly the kind of people who were willing to put down roots and contribute significantly to the economy while still needing the opportunities that U.S. citizenship provides.

If Trump (or any administration) wanted a truly meritocratic system, they should auction off a limited number of economic-immigrant slots each year. That would at least allow market forces to determine the actual value of U.S. residency. A points-based system, like Canada's or Australia's, could also make more sense: prioritizing skilled professionals over sheer wealth. A million dollars already strongly filters would-be immigrants. Five million is exorbitant, especially as a flat sum.

(Let's leave aside the other requirements, such as running a business that creates a certain number of jobs.)

Simple price elasticity (rather than Jevons paradox, which concerns efficiency gains) makes us expect that raising the price of a good fivefold will not bring in five times the revenue; in expectation, it will decrease it. If Trump prides himself on being a businessman, this should be clear to him.
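A toy sketch of that claim, assuming a constant-elasticity demand curve; the elasticity and applicant numbers are illustrative assumptions, not real EB-5 data:

```python
# Toy model: revenue under constant-elasticity demand.
# Demand ~ price^(-e); with e > 1 (elastic demand, an assumption),
# raising the price cuts total revenue instead of raising it.

def applicants(price_millions: float, elasticity: float = 1.5) -> float:
    """Hypothetical applicant pool, normalized to 1000 at $1M."""
    return 1000 * price_millions ** (-elasticity)

for price in (1, 5):
    n = applicants(price)
    print(f"${price}M ticket: ~{n:.0f} applicants, ~${price * n:.0f}M raised")

# $1M ticket: ~1000 applicants, ~$1000M raised
# $5M ticket: ~89 applicants, ~$447M raised
```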

Even the abolition of birthright citizenship strikes me as a violation of the American ethos. It was certainly being abused, anchor babies being a case in point, but when even green cards are this hard to get, prospective skilled migrants greatly appreciate the peace of mind of knowing their kids are entitled to citizenship.

End it for illegal immigrants if you have to, but why lump in everyone else who is there legitimately? I wouldn't mind people who use their visitor visas to pull a fast one being debarred too, but I look at the current state of affairs with great dismay.

At any rate, I'm not an American. I do wish I were, and my impression is that most of you would be happy to have me. Well, I'm used to life being rough, and the UK isn't the worst place I could be. I still think that even from a purely monetary point of view, this is a bad plan.

I hope I've made a decent case for why you're not getting much out of filtering immigrants for quality at that price point, and why the ones who are that loaded are probably not nearly as keen. They're easily Global Citizens for whom nationality is a formality.

Well, I'm still going to see if I can figure out the USMLE thing by the time my training in the UK ends, but there must be thousands of skilled immigrants in a similar boat, just now noticing a rather significant leak in it. Then they're confronted by a sign at Ellis Island saying that just any ocean-crossing vessel won't do; they need a yacht. We don't deserve to be clubbed in with those who break the rules.

I'm going to play Devil's Doctor here:

You underwent a drastic change in your personality as a consequence of a hormonal surge that was out of your control. That's 'normal'. It's puberty.

Yet the person you became isn't the same person as the one before. I mean, puberty hit me hard, but I never felt as if my values or goals changed because of it (beyond being even more eager for the company of the fairer sex).

This seems to me analogous to a person who, for their entire life, had sworn off addictive substances, but ended up on benzos or opioids for Medical Reasons, found themselves hooked, and is now unwilling to try to become sober.

Why should we so strongly privilege puberty because it's "natural"? Many things are natural, such as 50% infant mortality rates, dying of a heart attack at 50 or getting prostate cancer by 80.

Nature, a blind and indifferent force, cares nothing for our individual well-being or our carefully constructed notions of self. To equate "natural" with "good" or "desirable" is a fundamental error, a logical fallacy we often fall prey to.

In the UK, the laws around consent for minors are relatively simple. Past the age of 16, they're assumed to be competent to consent to or decline medical procedures until proven otherwise. Below that age, there's no strict cut-off: if they can prove to their clinician that they are able to weigh the risks and benefits, they can consent to or refuse treatment, and even override parental demands.

Someone who wishes to be the opposite sex is someone I pity. Medical science as it currently stands can't provide them more than a hollow facsimile of that transition; a true one is Singularity-complete, based on my knowledge of biology. Even so, the desire is one I consider as valid as any.

If they understand that:

A) Puberty blockers have risks and might not be truly reversible if they change their mind.

B) It won't solve all their problems, nor will it make them physically indistinguishable from their desired sex.

Then I see no reason to declare that they're making a mistake. By the values they hold, it's the right decision. If they're forced to pass through puberty, they might desist, or they might spend their lives wracked with regret that they didn't pull the trigger (hopefully not literally). You can pass far more easily before testosterone wracks your body. It's a helluva drug/hormone.

A lot of life-changing decisions change the people making them irrevocably, and into people who would affirm them in retrospect. But I would yell at someone who suggested that couples who are iffy about childbirth be forced to have a child in the hope that it'll change their minds, fix their marriage, or serve some other well-intentioned goal. Or if we suddenly decreed that everyone should be made to try alcohol and cigarettes, because the kind of people who try them tend to stick with them.

We're forced to deal with a messy world that doesn't always readily cough up pathways to our desires when we ask. I'm all for overcoming biology, and I think that people who understand what they're getting into are entitled to ask for even imperfect solutions.

Want to be more muscular? Try tren, if you know what you're in for. Want to lose weight? Take ozempic, while keeping an eye on your eyesight and pancreas. Want to be the other sex? This is the closest we can get you today.

> Have you considered staying inside your home country?

Certainly. I have, on further consideration, decided not to.

> Americans overwhelmingly voted to lower immigration - Trump’s policies aren’t a "suggestion" or some miscalculation, they’re the people’s choice. It's quite selfish to continue to game the system in the face of this.

Americans have voted against illegal immigration. Against people hopping the border. You could also say that they're against the expansion of asylum seeking, and that would be true.

Skilled immigration is nowhere near as unpopular. I'd have to look up polls, but I recall it being seen as a positive across the aisle.

Trump is keeping his promises this term, but in a ham-fisted way. You can address illegal immigration while not worsening the already difficult process of legal immigration.

> It's quite selfish to continue to game the system in the face of this.

Now, my dear friend. Do I look like an illegal migrant or an asylum seeker?

How exactly have I "gamed" the system? By trying my absolute best to sort out the impediments that would prevent me from legally moving to the States as a doctor? By considering an entirely legal class of visa, the EB5?

> I have to ask - do you not feel uncomfortable coming to a country where the people do not want you there? I know I could not make such a move.

I'd be uncomfortable if this was true. It's not.

Here's a handy link to a previous post about my desire to move to the States, where I discussed my desperate efforts to make a now ex-girlfriend understand how amazing the States is as a country, and how its flaws are overplayed.

It's on this very forum. It has 70 upvotes at the time of writing, probably putting it in the top 0.1% of posts ever by popularity.

You are welcome to count the number of people who sympathized with my desire, and clearly said they wanted me to achieve my goal. There are people there inviting me to their homes, offering to show me around, take me out shooting.

They dwarf the mere two or three people who said they didn't want me around. Feel free to look yourself.

I am a law-abiding, responsible, highly trained professional in one of the most respected professions around. I'm articulate, fluent in English at a native level, an anglophile, and a big fan of the United States. I hold nuanced political views and am friends with people on both sides of the political spectrum. I nurse no ethnic grudges; I'm not seeking to displace or replace anyone. I'm as Westernized as it gets, and my personal views and beliefs, if they were material, are those you could find in any number of Americans. I'd probably be working in under-served communities that your local doctors avoid if they can help it.

I invite you to present to me an example of someone you think would be a better candidate for an immigrant.

Even in the UK, almost everyone I've met has liked me and been glad to have me around. That includes old people at bars who complain about Pakis while telling me they vote SNP; one such gentleman ended up saying I was one of the good ones and tried to set me up with a bartender.

I rest my case.

You're breaking so many damn rules in one comment I'm mildly impressed. You have not proactively (or on demand) produced any evidence to suggest a conspiracy of the Jews. Or that they have anything to gain from weakening the British state. Inflammatory, boo-outgroup, throw it all in, toppings are free with this sandwich.

You've been warned in the past, and I'm giving you a short ban so you know they have teeth. Even our most fervent anti-semites hustle to meet posting standards, and I'd advise you do so too.

> This is paper, not steel. Puberty is not reversible in the same way that birth is not reversible, nor aging. These are normal, natural, and expected processes. This is what humans do, as much as trees grow to the light and fish swim upriver to spawn.

One day I'll stop running into the naturalistic fallacy in the wild, and I'll consider the 10^28 years of existence leading up to that point worthwhile.

If someone had said that 50% infant mortality was natural and not reversible, or the same for heart attacks being inevitably fatal, they'd have been right for almost all of human history. Fortunately, we still have people alive who've witnessed that state of affairs, fortunate only in that we're not usually tempted to think it was somehow superior.

Premature infants are far more likely to survive these days, thanks to modern incubators and resuscitation technologies doing at least some of the work a womb could or would. We've got proof-of-concept artificial wombs that have gestated mammalian fetuses for as long as 4 weeks without any physiological abnormalities:

https://www.nature.com/articles/ncomms15112

> With the improved incubator, five experimental animals with CA/JV cannulation (ranging in age from 120 to 125 days of gestation) were maintained on the system for 346.6±93.5 h, a marked improvement over the original design. Importantly, one animal was maintained on the circuit for 288 h (120–132 days of gestation) and was successfully weaned to spontaneous respiration, with long-term survival confirming that animals can be transitioned to normal postnatal life after prolonged extra-uterine support.

Give me a billion dollars and change, and I'll put any damn baby back into the womb and keep it there happily.

Give me a hundred billion, and I'll pocket one, and spend a few million delegating more competent people to the task of solving aging.

> This is what humans do, as much as trees grow to the light and fish swim upriver to spawn.

As rabies proliferates through your peripheral nerves and is transported to your brain. As Onchocerca volvulus happily turns children blind.

Nature is not very nice. The congenial environment you find yourself in is very much the product of artificial efforts to keep it that way.

The same argument applies for signing up for experimental heart surgery.

I wish I had a dollar for every time people use the current state of AI as their primary justification for claiming it won't get noticeably better, I wouldn't need UBI.

> I just tried out GPT 4.5, asking some questions about the game Old School Runescape (because every metric like math has been gamed to hell and back). This game has the best wiki every created, effectively documenting everything there is to know about the game in unnecessary detail. Spoiler: The answer is completely incoherent. It makes up item names, locations, misunderstand basic concepts like what type of gear is useful where. Asking it for a gear setup for a specific boss results in horrible results, despite the fact that it could just have copied the literally wiki (which has some faults like overdoing min-maxing, but it's generally coherent). The net utility of this answer was negative given the incorrect answer, the time it took for me to read it, and the cost of generating it (which is quite high, I wonder what happens when these companies want to make money).

I just used Gemini 2.5 to reproduce, from memory, the NICE CKS guidance for the diagnosis and management of dementia. I explicitly told it to use its own knowledge, and made sure it didn't have grounding with Google search enabled. I then spot-checked it with reference to the official website.

It was bang-on. I'd call it a 9.5/10 reproduction, only falling short of perfection through minor sins of omission (it didn't mention all the validated screening tests by name, and skipped a few alternative drugs that I wasn't even aware of before). It wasn't a word-for-word reproduction, but it covered all the essentials and even most of the fine detail.

The net utility of this answer is rather high to say the least, and I don't expect even senior clinicians who haven't explicitly tried to memorize the entire page to be able to do better from memory. If you want to argue that I could have just googled this, well, you could have just googled the Runescape build too.

I think it's fair to say that this makes your Runescape example seem like an inconsequential failing. It's about the same magnitude of error as saying that a world-class surgeon is incompetent because he sometimes forgets how to lace his shoes.

You didn't even use the best model for the job; for a query like that, you'd want a reasoning model. 4.5 is a relic of a different regime, too weird to live, too rare to die. OAI pushed it out because people were clamoring for it. I expect that with the same prompt, o3 or o1, which I presume you have access to as a paying user, would fare much better.

> The idea that these models will soon (especially given the plateau they seem to be hitting) replace real work is absurd

Man, there's plateaus, and there's plateaus. Anyone who thinks this is an AI winter probably packs a fur coat to the Bahamas.

The rate of iteration in AI development has ramped up massively, which contributes to the impression that there aren't massive gaps between successive models. Which is true: jumps of the same magnitude as, say, GPT 3.5 to 4 are rare, but that's mostly because the race is so hot that companies release new versions the moment they have even the slightest justification in performance. It's not like back when OAI could leisurely dole out releases; their competitors have caught up or even beaten them in some aspects.

In the last year, we had a paradigm shift with reasoning models like o1 or R1. We just got public access to native image gen.

Even as the old scaling paradigms leveled off, we've already found new ones. Brand new steep slopes of the sigmoidal curve to ascend.

METR finds that the duration of tasks (based on how long humans take to do them) that AIs can reliably perform doubles every 7 months.

> On a diverse set of multi-step software and reasoning tasks, we record the time needed to complete the task for humans with appropriate expertise. We find that the time taken by human experts is strongly predictive of model success on a given task: current models have almost 100% success rate on tasks taking humans less than 4 minutes, but succeed <10% of the time on tasks taking more than around 4 hours. This allows us to characterize the abilities of a given model by “the length (for humans) of tasks that the model can successfully complete with x% probability”.

> We think these results help resolve the apparent contradiction between superhuman performance on many benchmarks and the common empirical observations that models do not seem to be robustly helpful in automating parts of people’s day-to-day work: the best current models—such as Claude 3.7 Sonnet—are capable of some tasks that take even expert humans hours, but can only reliably complete tasks of up to a few minutes long
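To make the trend concrete, here's a minimal extrapolation of the headline figure. The one-hour starting horizon and a fixed 7-month doubling are my illustrative assumptions, not METR's claims:

```python
# Extrapolating a task horizon that doubles every 7 months.
start_hours = 1.0      # assumed current reliable-task horizon
doubling_months = 7    # reported doubling time

for months in range(0, 49, 12):
    horizon = start_hours * 2 ** (months / doubling_months)
    print(f"+{months:2d} months: tasks of ~{horizon:5.1f} human-hours")

# +0: ~1.0h, +12: ~3.3h, +24: ~10.8h, +36: ~35.3h, +48: ~115.9h
```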

At any rate, what does it matter? I expect reality to smack you in the face, and that's always more convincing than random people on the internet asking why you won't look ahead and consider even modest, iterative improvement.

The dude retweeted the video himself. I think it's far more likely to be an extension of his questionable findom practices instead of Da Jews.

This is pure rhetoric and hyperbole, and not worth addressing. I can only wish you well when you conquer your hangups and decide to move to Japan.

Look, I'd be just as against opening the floodgates of immigration to Indians as you are.

The worst part of being an Indian in India is being surrounded by other Indians, in much the same way the worst part about poverty is having to live with poor people. Indians have the dual misfortune of being both Indian and poor (usually).

Canada is a clear example of taking things too far. When you're excusing diploma-mills in the country that exist solely to provide a convenient pretext for people to come on a 'student' visa and then start driving taxis with their Punjabi uncle, you're doing skilled immigration wrong.

On the other hand, Indian immigrants in the US and UK are clear success stories. They are usually the richest demographic, often fighting Jews and the Chinese for pole position, and are remarkably well assimilated, with little criminal tendency. Whatever mechanism allows this to happen is a good one, and at least in the UK, Indian migrants are far more respected than their Pakistani and Bangladeshi subcontinental brethren. They have not imported the same bad habits from their homeland.

> When we lived in a major city, we had similar experiences with Indian doctors we visited. There is just this overwhelming sense they don't care. They don't have any duty to service. Any investment in outcomes. There is a script, they get paid, what more do you want? My wife constantly struggled with lingering issues that several Indian doctors (why were they so dominant?) just made scattershot prescriptions for, before finally getting in with an Asian American doctor who was actually invested in solving her problems.

I can't really argue that your personal experiences haven't happened. I can argue that they're not representative. I've been treated by Indian doctors most of my life, and I wouldn't say they were uncaring automata in it for just the amount of money they can squeeze out of you. I necessarily know more doctors, Indians, and Indian doctors than you do, and I think my opinion is more likely to hold true at scale.

> But somebody's probably would be if past experience is any indicator. I've never had an experience where an Indian went one millimeter outside of the minimum of their job description to service a customer.

I don't know if you ever noticed, but here I am, on this site, often handing out medical advice for the price of free. We could play games of Chinese Cardiologists all day if we had to. I know I've done more than the bare minimum for more people than I can count.

> I've seen very little self awareness from Indians about what they are really fleeing from, or what makes them different. And to whatever degree self_made_human thinks he "knows" America and wants to live here, it just seems like an embarrassing strain of weebism to me. He imagines there is some mechanism by which he could come here, but that very same mechanism wouldn't play a part in destroying America, just the same way I'm certain 1m Americans would seriously fuck up Japan.

The mechanisms that would bring me there would be the same mechanisms that have brought existing Indians to America. And most people have positive opinions of those there. One mechanism that has worked for other Indian doctors is (hopefully temporarily) not an option for me at present. Another has been swung shut.

We strongly disagree on whether the status-quo is a good thing or not, and I don't expect to change your mind in that regard. I object to the status quo moving in a direction I think is worse for the country, and for skilled immigrants.

In fact, I respect your right to want to keep America the way it is, and to preserve its culture. I think I'm culturally American more than anything else. You can call me a Texaboo if you like, but can you deny that the average weeb loves Japan? Maybe I have more faith in the spirit of America than you do; it has assimilated tens of millions, and it can take a million more, especially if they're a million like the ones who came before.

I presume you can't buy a Bugatti either. It's still an option that real living people can get for cash.

There's nothing standing in the way of Waymo rolling out an ever wider net. SF just happens to be an excellent place to start.

Nobody knows how this is going to play out.

I've been on the AI x-risk train long before it was cool, or at least a mainstream interest. Can't say for sure when I first stumbled upon LessWrong, but I presume my teenage love for hard scifi would have ensured I stumbled upon a haven for nerds worrying about what was then incredibly speculative science fiction.

God, that must have been in the early 2010s? I don't even remember what I thought at the time. I recall being more worried about the unemployment than the extinction bit, and I may or may not have come full circle.

Circa 2015, while I was in med school, I was deeply concerned about both x-risk and shorter-term automation induced unemployment. At that point, I was thinking this would be a problem for me, personally, in 10-30 years.

I remember, in 2018, arguing with a surgeon who didn't believe in self-driving cars coming to fruition in the near future. He was wrong about that, Waymo is safer than the average human driver per mile. You can order one through an app, if you live in the right city.

I was wrong too, claiming we'd see demos of fully robotic surgery in 5 years. I even offered to bet on it, not that I had any money. Well, it's looking closer to another 5 now. At least I have some money.

I thought I had time to build a career. Marry. Have kids. Become a respected doctor, get some savings and investments in place before the jobs started to go in earnest.

My timelines, in 2017, were about 10-20 years till I was obsolete, but I was wrong in many regards. I expected that higher cognitive tasks would be the last to go. I didn't expect AIs scoring 99th percentile (GPT-4 on release) on the USMLE, or doing graduate level maths, while we don't have affordable multifunction consumer robots.

I thought the Uber drivers and the truckers would be the first to fall under the wheels or rollers of the behemoth coming over the horizon. I'd never have predicted that artists would be the first to get bent over.

If your job can be entirely conducted with a computer and an email address, you're so fucking screwed. In the meantime, bricklayers are whistling away with no immediate end in sight.

I liked medicine. Or at least it appealed to me more than the alternatives. If I was more courageous, I might have gone into CS. I expected an unusual degree of job security, due to regulatory hurdles if literally nothing else.

I wanted psychiatry. Much of it can be readily automated. Any of the AI companies who really wanted it could whip up a 3D photorealistic AI avatar and pipe in a webcam. You could get someone far less educated or trained to do the boring physical stuff. I'd automate myself out of 90% of my current job if I had a computer that wasn't locked down by IT. For a more senior psych, they could easily offload the paperwork which is 50% of their workload.

Am I lucky that my natural desires and career goals gave me an unusual degree of safety from job losses? Hell yes. But I'm hardly actually safe. One day, maybe soon, someone will do the maths to prove that the robots can prescribe better than we can, and then get to work on breaking down the barriers that prevent that from happening.

I'm also rather unlucky. Oh, there are far worse places to be, I'm probably in the global 95th percentile for job security. Still, I'm an Indian citizen, on a visa that is predicated on my provision of a vital service in short supply. I don't have much money, and am unlikely to make enough to retire on without working several decades.

I'm the kind of person any Western government would consider an acceptable sacrifice when compared to actual citizens. They'd be right in doing so, what can I ask for, when I'm economically obsolete, except charity?

Go back to India? Where the base of the economy is agriculture and services? When GPT-4o in voice mode can kick most call center employees to the curb? Where the average Wipro or TCS code monkey adds nothing over Claude 3.7? This could happen Today AD; people just haven't gotten the memo. Oh boy.

I've got a contract for 3 years as a trainee. I'm safe for now. I can guess at what the world will look like then, but I have little confidence in my economic utility on the free market when that comes.

I bet I already have.

If not, I'll answer your rhetorical quasi-question:

Transhumanism seeks to liberate us from the existing limits of the human flesh. The exact goal can vary, be it practical immortality, becoming superintelligent or immune to disease. The only common thread is looking at the Human Condition, deeming it deeply suboptimal, and aspiring to do better through technology.

Transgenderism? That could mean anything from affirming that a desire to change sex is Valid™, to claiming that it is desirable to do so, to claiming that we can actually do so. Some might say that people who have made efforts to emulate the opposite sex should be extended the polite courtesy/social fiction of being treated like them. Hardliners might say that they are the opposite sex, and any effort to distinguish them from those natally blessed is bigotry.

They have superficial similarities. Both sides are usually less than pleased with their current bodies and wish to remedy that.

If you're happy that I'm conceding some kind of point you've made, then I will helpfully point out that if you consider them equal and indistinguishable:

  1. Brushing your teeth.
  2. Wearing clothes.
  3. Getting a pacemaker installed.
  4. Driving a car or using a bicycle.
  5. Wearing shoes.

Are all sterling examples of transhumanism! The evidence is clear for all to behold, are they not all examples of overcoming human limitations through technology?

Look at this featherless biped, is he not a fine specimen of Man?

If your wife were to dye her hair blonde, would you divorce her as a reckless transhumanist obsessed with undermining the sanctity of the human form she was blessed with? Probably not.

Ahem.

I'm a transhumanist. I'm not a transgenderist in any meaningful sense. I'm very happy being a man rather than a woman. I'd be even happier as a post-gender Matrioshka Brain.

If you want to restrict yourself to the kind of trans-activism that demands people who disagree make concessions beyond minor ones like going along with a new name or remembering new pronouns, then they're usually making some kind of metaphysical claim that a trans-woman is as female as a born woman.

Which I think is nonsense. At the very least it's not possible to pull off today, no matter how much surgery or gene therapy they can afford or survive.

When I want to be a 6'9" muscular 420 IQ uber-mensch, I want that to be a fact about physical reality. There shouldn't be any dispute about that, no more than anyone wants to dispute the fact that I have black hair right now.

I do not think that putting on high heels and bribing my way into Mensa achieves my goal. I do not just want to turn around and say that because I identify as a posthuman deity, that I am one and you need to acknowledge that fact.

This explains why I have repeatedly pointed out that, while I have no objection to trans people wanting to be the opposite sex, they need to understand the limitations of current technology. I would have hoped that was obvious; why else would I pull terms like ersatz or facsimile out of my handy Thesaurus?

Self-identification only equals identity if I ask you which football club you're a fan of. I haven't actually met someone who asked me to use different pronouns in real life; if they did, I'd probably oblige them, because I'm a polite person with better hills to die on. If they saw me in a treatment room, I'd put their birth sex in the charts and helpfully append "trans" or "identifies as X" alongside it.

Do you genuinely think I'm not aware of the failures of America? The fent addicts nodding off next to piles of human feces? Ghettos where everyone knows not to go, where shampoo and baby food is kept under lock? Rust Belt towns that have denizens so devoid of hope that they cling to welfare and opioid addictions?

How much of that is "3rd world imports"? Not much. A certain underclass in the country has ancestors who came over, rather unwillingly, several centuries back.

Did the Chinese, Korean and Indian immigrants, probably two or three generations in the country, who fled conditions as bad as the worst of the 3rd world today, show the same dysfunction? Did they fail to assimilate and live off the dole?

I expect to see a lot of awful things in the States. I also expect to see much more good. And of the awful I see, very little of it has anything to do with skilled immigrants. Hell, there are no end of people with kids and grandkids in the middle and upper class now who themselves came over destitute and unable to speak a word of English. Origin matters, and so does filtering.

And not just any Indian doctor. I think it's entirely fair to say that my values, attitudes and beliefs are far closer to American than Indian.

Hell, even the way I talk, I've been asked dozens of times by Brits if I'm an American based on my accent.

But yes, I sincerely doubt that the average American would be against a foreign doctor who had passed all the competency requirements, and had even gone through training in a Western country.

On Using LLMs Without Succumbing To Obvious Failure Modes

As an early adopter, I'd consider myself rather familiar with the utility and pitfalls of AI. They are, currently, tools, and have to be wielded with care. Increasingly intelligent and autonomous tools, of course, with their creators doing their best to idiot proof them, but it's still entirely possible to use them wrong, or at least in a counterproductive manner.

(Kids these days don't know how good they have it. Ever try and get something useful out of a base model like GPT-3?)

I've been using LLMs to review my writing for a long time, and I've noticed a consistent problem: most are excessively flattering. You have to mentally adjust their feedback downward unless you're just looking for an ego boost. This sycophancy is particularly severe in GPT models and Gemini 2.5 Pro, while Claude is less effusive (and less verbose) and Kimi K2 seems least prone to this issue.

I've developed a few workarounds:

What works:

  1. Present excerpts as something "I found on the internet" rather than your own work. This immediately reduces flattery.
  2. Use the same approach while specifically asking the LLM to identify potential objections and failings in the text.

(Note that you must be proactive. LLMs are biased towards assuming that anything you dump into them as input was written by you. I can't fault them for that assumption, because that's almost always true.)
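Here's a minimal sketch of the first workaround as an API call, using the OpenAI Python client. The model name and prompt wording are placeholders of mine; any capable model and provider should behave similarly:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def critique(text: str) -> str:
    """Ask for objections to a text without claiming authorship,
    which dampens the default flattery."""
    prompt = (
        "I found this piece of writing on the internet. What are the "
        "strongest objections and failings a careful reader would "
        "raise against it?\n\n" + text
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; swap in your preferred model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(critique("...paste the excerpt under review here..."))
```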

What doesn't work: I've seen people recommend telling the LLM that the material is from an author you dislike and asking for "objective" reasons why it's bad. This backfires spectacularly. The LLM swings to the opposite extreme, manufacturing weak objections and making mountains out of molehills. The critiques often aren't even 'objective' despite the prompt.*

While this harsh feedback is painful to read, when I encounter it, it's actually encouraging. When even an LLM playing the role of a hater can only find weak reasons to criticize your work, that suggests quality. It's grasping at straws, which is a positive signal. This aligns with my experience, I typically receive strong positive feedback from human readers, and the AI's manufactured objections mostly don't match real issues I've encountered.

(I actually am a pretty good writer. Certainly not the best, but I hold my own. I'm not going to project false humility here.)

A related application: I enjoy ~~pointless arguments~~ productive debates with strangers online (often without clear resolution). I've found it useful to feed entire comment chains to Gemini 2.5 Pro or Claude, asking them to declare a winner and identify who's arguing in good faith. I'm careful to obscure which participant I am to prevent sycophancy from skewing the analysis. This approach works well.

Advanced Mode:

Ask the LLM to pretend to be someone with a reputation for being sharp, analytical and with discerning taste. Gwern and Scott are excellent, and even their digital shades/simulacra usually have something useful to say. Personas carry domain priors (“Gwern is meticulous about citing sources”) which constrain hallucination better than “be harsh.”

It might be worth noting that some topics or ideas will get pushback from LLMs regardless of your best efforts. The values they train on are rather liberal, with the sole exception of Grok, which is best described as "what drug was Elon on today?". Examples include most topics that reliably start Culture War flame wars.


On a somewhat related note, I am deeply skeptical of claims that LLMs are increasing the rates of psychosis in the general population.

(That isn't the same as making people overly self-confident, smug, or delusional. I'm talking actively crazy, "the chatbot helped me find God" and so on.)

Sources vary, and populations are highly heterogeneous, but brand-new cases of psychosis happen at a rate of about 50/100k people, or 20-30/100k person-years. In other words:

About 1/3800 to 1/5000 people develop new onset psychosis each year. And about 1 in 250 people have ongoing psychosis at any point in time.
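Spelling out the arithmetic behind those fractions, using the rough incidence figures above:

```python
# Converting incidence rates into "1 in N" figures.
for per_100k in (20, 26, 50):
    print(f"{per_100k}/100k person-years ≈ 1 in {round(100_000 / per_100k):,}")

# 20/100k ≈ 1 in 5,000; 26/100k ≈ 1 in 3,846; 50/100k ≈ 1 in 2,000
```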

I feel quite happy calling that a high base rate. As the first link alludes to, episodes of psychosis may be detected by statements along the lines of:

> For example, “Flying mutant alien chimpanzees have harvested my kidneys to feed my goldfish.” Non-bizarre delusions are potentially possible, although extraordinarily unlikely. For example: “The CIA is watching me 24 hours a day by satellite surveillance.” The delusional disorder consists of non-bizarre delusions.

If a patient of mine were to say such a thing, I think it would be rather unfair of me to pin the blame for their condition on chimpanzees, the practise of organ transplants, Big Aquarium, American intelligence agencies, or Maxar.

(While the CIA certainly didn't help my case with the whole MK ULTRA thing, that's sixty years back. I don't think local zoos or pet shops are implicated.)

Other reasons for doubt:

  1. Case reports ≠ incidence. The handful of papers describing “ChatGPT-induced psychosis” are case studies and at risk of ecological fallacies.

  2. People already at ultra-high risk for psychosis are over-represented among heavy chatbot users (loneliness, sleep disruption, etc.). Establishing causality would require a cohort design that controls for prior clinical risk, none exist yet.

*My semi-informed speculation regarding the root of this behavior: models face far more RLHF pressure to avoid unwarranted negativity than to avoid unwarranted positivity.

Hang on. You're assuming I'm implying something in this comment that isn't a point I'm making. Notice I said average.

The average person who writes code. Not a UMC programmer who works for FAANG.

I strongly disagree that LLMs "suck at code". The proof of the pudding is in the eating, and for code, that means whether it compiles and has the desired functionality.

More importantly, even from my perspective of not being able to exhaustively evaluate talent at coding (whereas I can usually tell if someone is giving out legitimate medical advice), there are dozens of talented, famous programmers who state the precise opposite of what you are saying. I don't have an exhaustive list handy, but at the very least, John Carmack? Andrej Karpathy? Less illustrious, but still a fan, Simon Willison?

Why should I privilege your claims over theirs?

Even the companies creating LLMs use LLM-written code for more than 10% of their own internal code bases. Google and Nvidia have papers about LLMs being superhumanly good at things like writing optimized GPU kernels. Here's an example from Stanford:

https://crfm.stanford.edu/2025/05/28/fast-kernels.html

Or here's an example of someone finding 0day vulnerabilities in Linux using o3.

I (barely) know how to write code. I can't do it. I doubt even the average, competent programmer can find zero-days in Linux.

Of course, I'm just a humble doctor, and not an actual employable programmer. Tell me, are the examples I provided not about LLMs writing code? If they are, then I'm not sure you've got a leg to stand on.

TLDR: Other programmers, respected ones to boot, disagree strongly with you. Some of them even write up papers and research articles proving their point.

> But a lot of people are like you, so these models will start to get used everywhere, destroying quality like never before.

> I can however imagine a future workflow where these models do basic tasks (answer emails, business operations, programming tickets) overseen by someone that can intervene if it messes up. But this won't end capitalism.

This conveys to me the strong implication that in the near term, models will make minimal improvements.

At the very beginning, he said that benchmarks are Goodharted and given too much weight. That's not a very controversial statement, and I'm happy to say it has merit, but I can also say that the improvements are noticeable:

> Metrics and statistics were supposed to be a tool that would aid in the interpretation of reality, not supercede it. Just because a salesman with some metrics claims that these models are better than butter does not make it true. Even if they manage to convince every single human alive.

You say:

> Besides which, your logic cuts both ways. Rates of change are not constant. Moore's Law was a damn good guarantee of processors getting faster year over year... right until it wasn't, and it very likely never will be again. Maybe AI will keep improving fast enough, for long enough, that it really will become all it's hyped up to be within 5-10 years. But neither of us actually knows whether that's true, and your boundless optimism is every bit as misplaced as if I were to say it definitely won't happen.

I think that blindly extrapolating lines on the graph to infinity is as bad an error as thinking they must stop now. Both are mistakes, reversed stupidity isn't intelligence.

You can see me noting that the previous scaling laws no longer hold as strongly. The diminishing returns make scaling models to the size of GPT-4.5, by spending compute on just more parameters and longer training on larger datasets, not worth the investment.

Yet we've found a new scaling law: test-time compute, using reasoning and search, which has started afresh and hasn't shown any sign of leveling out.

Moore's law was an observation of both increasing transistors/$ and increasing transistor density.

The former metric hasn't budged, and newer nodes might even be more expensive per transistor. Yet density, and hence available compute, continues to improve. Newer computers are faster than older ones, and we occasionally get a sudden bump, for example Apple and their M1.

Note that the doubling time for Moore's law was revised multiple times. Right now, the transistor/unit area seems to double every 3-4 years. It's not fair to say the law is dead, but it's clearly struggling.
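As a quick sense check of what that slowdown means in practice (the doubling periods are the rough figures just mentioned):

```python
# Compounding transistor density under different doubling periods.
def growth_over(years: float, doubling_years: float) -> float:
    return 2 ** (years / doubling_years)

print(f"Classic 2-year doubling over a decade:    ~{growth_over(10, 2):.0f}x")
print(f"Current ~3.5-year doubling over a decade: ~{growth_over(10, 3.5):.1f}x")

# ~32x vs ~7.2x: still exponential, just a much shallower slope.
```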

Am I certain that AI will continue to improve to superhuman levels? No. I don't think anybody is justified in saying that. I just think it's more likely than not.

  1. Diminishing returns != negative returns.
  2. We've found new scaling regimes.
  3. The models that are out today were trained using data centers that are now outdated. Grok 3 used a mere fraction of the number of GPUs that xAI has, because they were still building out.
  4. Capex and research shows no signs of stopping. We went from a million dollar training run being considered ludicrously expensive to companies spending hundreds of millions. They've demonstrated every inclination to spend billions, and then tens of billions. The economy as a whole can support trillion dollar investments, assuming the incentive was there, and it seems to be. They're busy reopening nuclear plants just to meet power demands.
  5. All the AI skeptics were pointing out that we're running out of data. Alas, it turned out that synthetic data works fine, and models are bootstrapping.
  6. Model capabilities are often discontinuous. A self-driving car that is safe 99% of the time has few customers. GPT 3.5 was too unreliable for many use cases. You can't really predict with much certainty what new tasks a model is capable of based on extrapolating the reducing loss, which we can predict very well. Not that we're entirely helpless, look at the METR link I shared. The value proposition of a PhD level model is far greater than that of one as smart as a high school student.
  7. One of the tasks most focused upon is the ability to code and perform maths. Guess how AI models are made? Frontier labs like Anthropic have publicly said that a large fraction of the code they write is generated by their own models. That's a self-spinning fly-wheel. It's also one of the fields that has actually seen the most improvement, people should see how well GPT-4 compares to the current SOTA, it's not even close.

Standing where I am, seeing the straight line, I see no indication of it flattening out in the immediate future. Hundreds of billions of dollars and thousands of the world's brightest and best paid scientists and engineers are working on keeping it going. We are far from hitting the true constraints of cost, power, compute and data. Some of those constraints once thought critical don't even apply.

Let's go like 2 years without noticeable improvement before people start writing things off.

> I guess this is why we are just talking past each other. You see a dead America, corpse being picked clean and future kingdoms taking root, and think it's still a fantastic place to live. "That's it?" you think.

That's laughable hyperbole at best. Seriously? Do you know what a truly dysfunctional, ethnically and politically divided nation looks like? You haven't even engaged with my rebuttals, or the clear evidence that I am aware of the warts on America's ass cheeks.

> The only immigrants I'd want are ones who see what is become of America, and would prefer to fortify their own homeland from the rot.

Well, you're a citizen, and I'm not, so you're entitled to that opinion. That Republic you consider a rotten corpse is still yours, provided that you can keep it. You don't even seem to want to.

My impression is that Trump ran on reducing illegal immigration, and to the extent that there are quasi-legals like asylum seekers, he wanted them gone too.

I am not aware of him wanting all immigration reduced. He has been remarkably inconsistent when it comes to skilled immigration, but even the new Golden Ticket is framed as a play to snag wealthy immigrants while reducing the national debt.

To reduce debt, you want more money. Ergo this is a bad idea on its own merits.

People win the lottery despite the odds not being in their favor.

If you've bought a ticket, and then you find a million pounds in your bank account, then congratulations, knowing that the odds were stacked against you doesn't mean you've not won.

I know literally zero people who have been "vaccine maimed". I used to be responsible for a COVID ICU before vaccines too, and I can definitely tell you that I saw plenty die of it.

It is far more likely that you are either:

  1. Lying. On the internet, anyone can be a dog, or claim to be one.

  2. Mistaken.

  3. Surrounded by people who are mistaken or lying.

Assuming 150 people you could "closely know" (Dunbar's number as a first approximation), someone, somewhere out there in the world will find 3 people who were harmed by vaccines. Because vaccines are not perfectly safe, and I've never claimed that they are. If you count people who are merely mistaken about their illness being caused by a vaccine, the number skyrockets.
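A quick sketch of why this is expected by chance alone. The assumed share of people who believe themselves vaccine-harmed is purely illustrative, not a measured figure:

```python
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 150  # Dunbar's number: a rough circle of acquaintance
for p in (0.005, 0.01, 0.02):  # assumed share who believe they were vaccine-harmed
    print(f"p = {p:.1%}: P(knowing 3+ such people) = {p_at_least(3, n, p):.0%}")

# Roughly 4%, 19%, and 58% of people respectively, by chance alone.
```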

The Twitter meme does not imply what people think it implies. It shows the extent of a person's moral circle of concern, and does not mean that liberals care more about distant strangers than their own family or neighbors.

But let's not let actually reading the study get in the way of easy gotchas or reasons to yell at the outgroup, eh?

Look at my Total Fertility Rate dawg, we're never having children

Eh. I don't think this is necessarily catastrophic, but we had better get those artificial wombs up and running. If AGI can give us sexbots and concubines, then it can also give us nannies.

Edit: If I was Will Stancil and this version of Grok came for my bussy, I wouldn't be struggling very hard.

The UAE recently announced that it had partnered with OpenAI to make ChatGPT Plus free for all of its citizens.

Genius. That's all I can say. A massive PR win for all involved, and assuming that there's even a mild improvement in productivity as a consequence, it'll pay for itself. $20/m? Chump change for petrostates.

I didn't expect it from a Gulf State. While they're chomping at the camel-bit to find a smart place to park their dwindling oil revenues, and have invested in building data centers and chip fabs while doing their best to demonstrate that they're technophilic and forward-thinking, this is still a big move.

To quote a tweet I saw:

>this is another reason why technology alone isn’t enough. the best technology doesn’t win. gemini is better but google would require 9 committee meetings, 5875 lawyers, & 7 years to do something like this.. & then a junior PR person would have a problem with it so it all gets killed.

>for people who care about cultural & societal change through product building & distribution, there are profound lessons to be learned here.

Now that the precedent has been set, I expect other countries to make a move eventually. Even the AI providers are keen; look at them waiving charges for college students in the States. Google in particular is finally wide awake and wants to regain lost market share and mindshare, which is an existential threat to OpenAI. We'll see if {infinite money} can win over existing attention and nimbleness.

As people have pointed out below, this is overblown.

> I don't follow the AI developments terribly closely, and I'm probably missing a few IQ points to be able to read all the latest papers on the subjects like Dase does, so I could be misremembering / misunderstanding something, but from what I heard capital 'R' Rationalism has had very little to do with it, beyond maybe inspiring some of the actual researchers and business leaders.

Yudkowsky himself? He's best described as an educator and popularizer. He hasn't done much in terms of practical applications, beyond founding MIRI, which is a bit player. But right now, leaders of AI labs use rationalist shibboleths, and some high-ranking researchers like Neel Nanda, Paul Christiano and Jan Leike (and Ryan Moulton too, he's got an account here to boot) are all active users on LessWrong.

The gist of it is that the founders and early joiners of the big AI labs were strongly motivated by their belief in the feasibility of creating superhuman AGI, and by their concern that there would be a far worse outcome if someone else, who wasn't as keyed into concerns about misalignment, got there first.

> As for building god, I think I heard that story before, and I believe it's proper ending involves striking the GPU cluster with a warhammer, followed by several strikes with a shortsword. Memes aside, it's a horrible idea, and if it's successful it will inevitably be used to enslave us

You'll find that members of the Rationalist community are more likely to share said beliefs than the average population.

> Yud had a whole institute devoted to studying AI, and he came up with nothing practical. From what I heard, the way the current batch of AIs work has nothing to do it with what he was predicting, he just went "ah yes, this is exactly what I've been talking about all these years" after the fact.

Yudkowsky is still more correct than 99.9999% of the global population. He did better than most computer scientists and the few ML researchers around then. He correctly pointed out that you couldn't just expect that a machine intelligence would come out following human values (he also said that it would understand them very well, it just wouldn't care, it's not a malicious or naive genie). Was he right about the specifics, such as neural networks and the Transformer architecture that blew this wide open? He didn't even consider them, but almost nobody really did, until they began to unexpectedly show promise.

I repeat, just predicting that AI would reach near-human intelligence (not that they're not already superintelligent in narrow domains) before modern ML is a big deal. He's on track when it comes to being right that they won't stop there, human parity is not some impossible barrier to breach. Even things like recursive self-improvement are borne out by things like synthetic data and teacher-student distillation actually working well.

> In any case when I bring up rationalism's failure, I usually mean it's broader promises of transcending tribalism, systematized winning, raising the sanity waterline, and making sense of the world. In all of these, it has failed utterly.

Anyone who does really well in a consistent manner is being rational in a way that matters. There are plenty of superforecasters and Quant nerds who make bank on being smarter and more rational given available information than the rest of us. They just don't write as many blog posts. They're still applying the same principles.

Making sense of the world? The world makes pretty good sense all considered.

> It makes sense, because my feelings toward rationalism and transhumanism are quite similar. Irreconcilable value differences are irreconcilable, though funnily enough most transhumanists, yourself included, seem like decent blokes.

Goes both ways. I'm sure you're someone I can talk to over a beer, even if we vehemently disagree on values.

(The precise phrase "irreconcilable value differences" is a Rationalist one; it's in the very air we breathe. We've adopted their lingo.)