self_made_human

Grippy socks, grippy box


I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken as granted for someone in the profession of saving lives.

At any rate, I intend to live forever or die trying. See you at Heat Death!

Friends:

A friend to everyone is a friend to no one.


				


I fed your comment into Gemini 2.5 Pro, and it came up with an incredibly insightful answer meant to be shared with these supposedly struggling men. Unfortunately, the majority opinion here frowns on reproducing AI output, so I'll be uncharacteristically catty and keep it to myself. Anyone curious can copy and paste for the same result, I'd presume.


My man, you've convinced me to switch to the default Motte theme in my profile so I can both flashbang my eyes and also see what kind of record of past rule-breaking you've been up to.

My eyes are burnt, and so is your standing with us mods. I see a long list of past warnings and temp bans, and not a single good thing to counteract that. You've been warned for low effort commentary as well as booing the outgroup more times than I want to count.

Banned for a month, and I leave it open to the others if they want to extend this.

I know what a woman is, or at least I know 'em when I see 'em. I don't need an LLM to guide me in that regard.

This is paper, not steel. Puberty is not reversible in the same way that birth is not reversible, nor aging. These are normal, natural, and expected processes. This is what humans do, as much as trees grow to the light and fish swim upriver to spawn.

One day I'll stop running into the naturalistic fallacy in the wild, and I'll consider my 10^28 years of existence leading up to that point worthwhile.

If someone had said that 50% infant mortality was natural and not reversible, or the same for heart attacks being inevitably fatal, they'd have been right for almost all of human history. We still have people alive who've witnessed that state of affairs, which is fortunate only in the sense that we're not usually tempted to think it was somehow superior.

Premature infants are far more likely to survive these days, thanks to modern incubators and resuscitation technologies doing at least some of the work a womb could or would. We've got proof-of-concept artificial wombs that have gestated mammalian fetuses for as long as 4 weeks without any physiological abnormalities:

https://www.nature.com/articles/ncomms15112

With the improved incubator, five experimental animals with CA/JV cannulation (ranging in age from 120 to 125 days of gestation) were maintained on the system for 346.6±93.5 h, a marked improvement over the original design. Importantly, one animal was maintained on the circuit for 288 h (120–132 days of gestation) and was successfully weaned to spontaneous respiration, with long-term survival confirming that animals can be transitioned to normal postnatal life after prolonged extra-uterine support.

Give me a billion dollars and change, and I'll put any damn baby back into the womb and keep it there happily.

Give me a hundred billion, and I'll pocket one, and spend a few million delegating more competent people to the task of solving aging.

This is what humans do, as much as trees grow to the light and fish swim upriver to spawn.

As rabies proliferates through your peripheral nerves and is transported to your brain. As Onchocerca volvulus happily turns children blind.

Nature is not very nice. The congenial environment you find yourself in is very much the product of artificial efforts to keep it that way.

Is England a better place where nobody cares about the Legend of King Arthur anymore?

Better? I don't know about that. But worse? Almost certainly not.

If the very idea of "King Arthur" somehow fell out of the collective consciousness, then as far as I can tell, nobody would really notice or care. Maybe we'd see an improvement in GDP figures when fewer awful movies come out every few years and then bomb at the box office.

Now, the current state of England, or the UK as a whole, leaves much to be desired, but I can recall no point in history, even at its absolute prime, when success or governmental continuity was load-bearing on watery tarts handing out swords. And even back then, people treated it as a nice story, rather than historical fact or the basis for their identity. England was conquered by the Danes and the Saxons after all, well after the knights of the not-square table were done gallivanting about.

On a more general level, I fail to see your case; at the very least, I don't think there's a reason to choose false stories or myths over ideas that are true, or that aren't accurately described as either true or false.

The French made liberty, equality and fraternity their rallying cry to great effect. I don't think any of those three concepts are falsifiable, but they still accurately capture values and goals.

I mean, I assume both of us are operating on far more than 2 data points. I just think that if you open with an example of a model failing at a rather inconsequential task, I'm eligible to respond with an example of it succeeding at a task that could be more important.

My impression of LLMs is that in the domains I personally care about:

  1. Medicine.
  2. Creative fiction
  3. Getting them to explain random things I have no business knowing. Why do I want to understand lambda calculus or the Church–Turing thesis? I don't know. I finally know why Y Combinator has that name.

They've been great at 1 and 3 for a while, since GPT-4. 2? It's only circa Claude 3.5 Sonnet that I've been reasonably happy with their creative output, occasionally very impressed.

Number 3 encompasses a whole heap of topics. Back in the day, I'd spot-check far more frequently; these days, if something looks iffy, I'll shop around with different SOTA models and see if they've got a consensus or a critique that makes sense to me. This almost never fails me.
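For anyone wondering about the name: "Y Combinator" the startup incubator is named after the fixed-point combinator from lambda calculus. A minimal, purely illustrative sketch in Python (using the call-by-value Z variant, since the textbook Y diverges under strict evaluation):

```python
# The fixed-point (Z) combinator: it builds recursion out of nothing but
# anonymous functions, which is the trick the incubator's name nods to.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial defined without ever naming itself:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120
```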

And I don't get your example, wouldn't the NICE CKS be in the dataset many times over?

Almost certainly. But does that really matter to the end user? I don't know if the RS wiki has anti-scraping measures, but there are tons of random nuggets of RS builds and item guides all over the internet. Memorization isn't the only reason these models are good: they think, or do something so indistinguishable from the output of human thought that it doesn't matter.

If you met a person who was secretly GPT-4.5 in disguise, you would be rather unlikely to be able to tell that they weren't a normal human, not unless you went in suspicious from the start. (Don't ask me how this thought experiment would work, assume a human who just reads lines off AR lenses I guess).

These tools are amazing as search engines as long as the user using them is responsible and able to validate the responses. It does not mean they are thinking very well. Which means they will have a hard time doing things not in the dataset. These models are not a pathway to AGI. They might be a part of it, but it's gonna need something else. And that/those parts might be discovered tomorrow, or in 50 years.

This is a far more reasonable take in my opinion, if you'd said this at the start I'd have been far more agreeable.

I have minor disagreements nonetheless:

  1. 99% of the time or more, what current models say in my field of expertise (medicine) is correct when I check it. Some people claim to experience severe Gell-Mann amnesia when using AI models, and that has not really been my experience.
  2. This means that unless it's mission critical, the average user can usually get by with taking answers at face value. If it's something important, then checking is still worthwhile.
  3. Are current models AGI? Who even knows what AGI means these days. By most definitions from before 2015, they count. It's valid to argue that this reveals a weakness of those previous definitions, but I think these are, at the absolute bare minimum, proto-AGI. I expect an LLM to be smarter, more knowledgeable and generally more flexible than the average human. I can't ask a random person on the street what beta reduction is and expect an answer unless I'm on the campus of a uni with a CS course. That the same entity can also give good medical advice? Holy shit.
  4. Are the current building blocks necessary or sufficient for ASI? Something so smart that even skeptics have to admit defeat (Gary Marcus is retarded, so he doesn't count)? Maybe. Existing ML models can theoretically approximate any computable function, but something like the Transformer architecture has real-world limitations.

And I don't see why reality will smack me in the face. I'm already using these as much as possible since they are great tools. But I don't expect my work to look very different in 2030 compared to now. Since programming does not feel very different today compared to 2015.

Well, if you're using the tools regularly and paying for them, you'll note improvements if and when they come. I expect reality to smack me in the face too, in the sense that even if I expect all kinds of AI related shenanigans, seeing a brick wall coming at my car doesn't matter all that much when I don't control the brakes.

For a short span of time, I was seriously considering switching careers from medicine to ML. I did MIT OCW programs, managed to solve one LeetCode medium, and then realized that AI was getting better at coding faster than I would. (And the fact that there are already a million Indian coders was a factor.) I'm not saying I'm a programmer, but I have at least a superficial understanding.

I distinctly remember what a difference GPT-4 made. GPT-3.5 was tripped up by even simple problems and hallucinated all the time. 4 was usually reliable, and I would wonder how I'd ever learned to code before it.

I have little reason to write code these days, but I can see myself vibe-coding. Despite your claim that you don't feel programming has changed since 2015, there is no end of talented programmers, like Karpathy or Carmack, who would disagree.

Thanks for the comment btw, it made me try out programming with gemini 2.5 and it's pretty good.

You're welcome. It's probably the best LLM for code at the moment. That title changes hands every other week, but it's true for now.

But a lot of people are like you, so these models will start to get used everywhere, destroying quality like never before.

I can however imagine a future workflow where these models do basic tasks (answer emails, business operations, programming tickets) overseen by someone that can intervene if it messes up. But this won't end capitalism.

This conveys to me the strong implication that in the near term, models will make minimal improvements.

At the very beginning, he said that benchmarks are Goodharted and given too much weight. That's not a very controversial statement, I'm happy to say it has merit, but I can also say that these improvements are noticeable:

Metrics and statistics were supposed to be a tool that would aid in the interpretation of reality, not supercede it. Just because a salesman with some metrics claims that these models are better than butter does not make it true. Even if they manage to convince every single human alive.

You say:

Besides which, your logic cuts both ways. Rates of change are not constant. Moore's Law was a damn good guarantee of processors getting faster year over year... right until it wasn't, and it very likely never will be again. Maybe AI will keep improving fast enough, for long enough, that it really will become all it's hyped up to be within 5-10 years. But neither of us actually knows whether that's true, and your boundless optimism is every bit as misplaced as if I were to say it definitely won't happen.

I think that blindly extrapolating lines on the graph to infinity is as bad an error as thinking they must stop now. Both are mistakes, reversed stupidity isn't intelligence.

You can see me noting that the previous scaling laws no longer hold as strongly. The diminishing returns mean that scaling models to the size of GPT-4.5, spending compute purely on more parameters and longer training runs over larger datasets, is no longer worth the investment.

Yet we've found a new scaling law: test-time compute, using reasoning and search, which has started afresh and hasn't shown any sign of leveling out.

Moore's law was an observation of both increasing transistor/$ and also increasing transistor density.

The former metric hasn't budged, and newer nodes might be more expensive per transistor. Yet density, and hence available compute, continues to improve. Newer computers are faster than older ones, and we occasionally get a sudden bump, for example Apple and their M1.

Note that the doubling time for Moore's law was revised multiple times. Right now, transistor density seems to double every 3-4 years. It's not fair to say the law is dead, but it's clearly struggling.
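To put rough, purely illustrative numbers on that (the arithmetic below assumes nothing beyond the doubling times just mentioned):

```python
# Density improvement over a decade at the classic 2-year doubling versus the
# slower 3-4 year cadence. Toy arithmetic, not sourced industry figures.
for doubling_years in (2.0, 3.0, 4.0):
    gain = 2 ** (10 / doubling_years)
    print(f"{doubling_years:.0f}-year doubling -> {gain:.1f}x density in a decade")
# 2-year: 32.0x, 3-year: ~10.1x, 4-year: ~5.7x
```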

Am I certain that AI will continue to improve to superhuman levels? No. I don't think anybody is justified in saying that. I just think it's more likely than not.

  1. Diminishing returns != negative returns.
  2. We've found new scaling regimes.
  3. The models that are out today were trained using data centers that are now outdated. Grok 3 used a mere fraction of the number of GPUs that xAI has, because they were still building out.
  4. Capex and research shows no signs of stopping. We went from a million dollar training run being considered ludicrously expensive to companies spending hundreds of millions. They've demonstrated every inclination to spend billions, and then tens of billions. The economy as a whole can support trillion dollar investments, assuming the incentive was there, and it seems to be. They're busy reopening nuclear plants just to meet power demands.
  5. All the AI skeptics were pointing out that we're running out of data. Alas, it turned out that synthetic data works fine, and models are bootstrapping.
  6. Model capabilities are often discontinuous. A self-driving car that is safe 99% of the time has few customers. GPT 3.5 was too unreliable for many use cases. You can't really predict with much certainty what new tasks a model is capable of based on extrapolating the reducing loss, which we can predict very well. Not that we're entirely helpless, look at the METR link I shared. The value proposition of a PhD level model is far greater than that of one as smart as a high school student.
  7. One of the tasks most focused upon is the ability to code and perform maths. Guess how AI models are made? Frontier labs like Anthropic have publicly said that a large fraction of the code they write is generated by their own models. That's a self-spinning fly-wheel. It's also one of the fields that has actually seen the most improvement, people should see how well GPT-4 compares to the current SOTA, it's not even close.

Standing where I am, seeing the straight line, I see no indication of it flattening out in the immediate future. Hundreds of billions of dollars and thousands of the world's brightest and best paid scientists and engineers are working on keeping it going. We are far from hitting the true constraints of cost, power, compute and data. Some of those constraints once thought critical don't even apply.

Let's go like 2 years without noticeable improvement before people start writing things off.

I think that when I use the word "healthy", it reliably constrains expectations. If I tell you my shirt is red, and then you examine it, you wouldn't expect it to emit or reflect only light at a wavelength of exactly 700 nm, even if the term "red" leaves room for subjective interpretation where it bleeds into orange or pink.

It would be rather awkward if I had to append the WHO definition (Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity) every time I used that phrase. Even said definition is implicitly subjective.

So no, it's not meaningless beyond an individual observer. If I delivered a baby missing a head and handed it to a mother, I'd be rather aghast if anyone in the room called it a healthy child.

That is to say, the question of whether something is "healthy" begins and ends with their subjective opinion of their current state?

I couldn't really be a psychiatrist if I subscribed to that notion, could I?

Sadly human language is rather imprecise. It's still useful. I'm unable to define health in a way so rigorous I could program it into a computer in Lisp, but LLMs prove that that's not necessary.

This is pure rhetoric and hyperbole, and it is not worth addressing. I can only wish you well when you conquer your hangups and decide to move to Japan.

Do you have better alternatives? At the end of the day, if you're unhappy with the government, then you need to elect a better government. I presume that it wouldn't be impossible to strip bureaucrats of their immunity if the law were changed to reflect that. What else could I really advise, that someone shoot Fauci?

In a way, the new Republican government reflects the deep unrest with previous medical policy. RFK isn't a fan of vaccines.

The reason I advocate for governments having the ability to impose lockdowns and quarantines is because pandemics can be highly dangerous. Covid was initially believed to have a ~1-10% CFR for the first few weeks, and on the higher end, the serious possibility of several hundred million people dying (even a 5% CFR across half the world's 8 billion people is 200 million deaths) justifies some action being taken. I think a month is enough to narrow the CFR down, leaving aside the primary benefit of reducing spread.

Look at my Total Fertility Rate dawg, we're never having children

Eh. I don't think this is necessarily catastrophic, but we'd better get those artificial wombs up and running. If AGI can give us sexbots and concubines, then it can also give us nannies.

Edit: If I was Will Stancil and this version of Grok came for my bussy, I wouldn't be struggling very hard.

I appreciate the thorough response, but I think you're painting an unnecessarily bleak picture that doesn't account for several key factors.

You're right that my argument depends on relatively stable economic institutions, but this isn't as unrealistic as you suggest. We already have financial instruments that span centuries - perpetual bonds, endowments, trusts. The Vatican has maintained financial continuity for over 500 years.

Improving technology makes it at least theoretically possible to have such systems become even more robust, spanning into the indefinite, if not infinite future.

So this argument seems to depend on an eternally-stable investment market where you can put in value today and withdraw value in, say, five thousand years. No expropriation by government, no debasement of currency, no economic collapse, no massive fraud or theft, no pillage by hostile armies, every one of which we have numerous examples of throughout human history.

The precise details of how a post-Singularity society might function are beyond me. Yet I expect that it would have far more robust solutions to such problems. What exactly is the currency to debase, when we might trade entirely in units of energy or in cryptocurrency?

The estimate I've heard recently is that the UK grooming gangs may have raped as many as a million girls. The cops looked the other way. The government looked the other way. My understanding is that the large majority of the perpetrators got away with it, and the few that got caught received minimal sentences for the amount of harm they caused.

Where on Earth did you come across this claim???

Does it not strike you as prima facie absurd? The population of the UK is about 68 million, if around 1.5% of the entire population, or 3% of the women, had been raped by organized "rape gangs", I think we'd have noticed. I live here, for Christ's sake. That's the kind of figure you'd expect in a country under occupation or literally in the midst of a civil war.

The confirmed numbers, which are definitely an understatement, are about 5k girls total. I don't see how you can stretch that by more than two orders of magnitude no matter how hard you try.

Putting aside those absurd figures:

The grooming gangs are indeed horrific, but they're not representative of how most vulnerable populations are treated in developed societies. For every Rotherham, there are thousands of care homes, hospitals, and institutions that function reasonably well. The vast majority of elderly people in care facilities, despite being physically vulnerable and economically dependent, aren't systematically abused.

Your examples of state collapse and genocide are real risks, but they're risks that already exist for biological humans. The question isn't whether bad things can happen, but whether the additional risks of uploading outweigh the benefits. A world capable of supporting uploaded minds is likely one with sophisticated technology and institutions - probably more stable than historical examples, not less.

To sum up: you are counting on money to protect you, on the understanding that you will be economically useless, and the assumption that you will have meaningful investments and that nothing bad will ever happen to them. You are counting on people who own you to be trustworthy, and to only transfer possession of you to trustworthy people. And you are counting on the government to protect you, and never turn hostile toward you, nor be defeated by any other hostile government, forever.

You're describing the experience of a retiree.

The "ownable commodity" framing assumes a particular legal framework that need not exist. We already have legal protections against slavery, even of non-standard persons (corporations have rights, as do some animals in certain jurisdictions). There's no reason uploaded minds couldn't have robust legal protections - potentially stronger than biological humans, since their substrate makes certain forms of evidence and monitoring easier.

You mention trust extending through infinite chains, but this misunderstands how modern systems work. I don't need to trust every person my bank trusts, or every person my government trusts. Institutional structures, legal frameworks, and distributed systems can provide security without requiring universal interpersonal trust.

As Einstein, potentially apocryphally, said: compound interest is the most powerful force in the universe. A post-Singularity economy has hordes of Von Neumann swarms turning all the matter within grasp into something useful, with a rate of growth hard-capped only by the speed of light. It's not a big deal to expect even a small investment to compound; that's how retirement funds work today.
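As a toy illustration of the compounding point (arbitrary numbers, obviously not a forecast of post-Singularity returns):

```python
# How a single modest stake grows at a 5% annual real return, left untouched.
principal = 10_000
rate = 0.05
for years in (30, 100, 300):
    print(years, round(principal * (1 + rate) ** years))
# 30 -> ~43,219; 100 -> ~1.3 million; 300 -> ~23 billion
```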

Further, you assume that I'll be entirely helpless throughout the whole process. Far from it. I want to be a posthuman intelligence that can function as a peer to any ASI, and plain biology won't cut it. Uploading my mind allows for enhancements that mere flesh and blood don't allow.

I could also strive to self-host my own hardware, or form a trusted community. There are other technological solutions to the issue of trust:

  1. Substrates running on homomorphic encryption, where the provider can run your consciousness without ever being able to "read" it.

  2. Decentralized hosting, where no single entity controls your file, but a distributed network does, governed by a smart contract you agreed to.

  3. I could send trillions of copies of myself into interstellar space.

They really can't get all of me.

At the end of the day, you're arguing that because a totalitarian government could create digital hells, I should choose the certainty of annihilation. That's like refusing to board an airplane because of the risk of a crash, and instead choosing to walk off a cliff. Yes, the crash is horrific, but the cliff is a 100% guarantee of the same outcome: death.

Your argument is that because a system can fail, it will fail in the worst way imaginable, and therefore I should choose oblivion. My argument is that the choice is between certain death and a future with manageable risks. The economic incentives will be for security, not slavery. The technology will co-evolve with its own safeguards. And the societal risks, while real, are ones we already face and must mitigate regardless. If the rule of law collapses, we all lose.

The ultimate omnipotent god in this scenario is Death, and I'll take my chances with human fallibility over its perfect, inescapable certainty any day.

An easy trick is to get another model to review/critique it. If both models disagree, get them to debate each other till a consensus is reached.

These days, outright hallucinations are quite rare, but it's still worth doing due diligence for anything mission-critical.

As George suggested, you can also ask for verbatim quotes or citations, though you'll need to manually check them.
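For what it's worth, here's a rough sketch of that cross-checking loop in Python. It uses the OpenAI client purely for illustration; the model names, the AGREE convention, and the round limit are arbitrary placeholders, not a tested pipeline.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment

def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def cross_check(question: str, answerer="gpt-4o", reviewer="o3", rounds=3) -> str:
    """Ask one model, have a second critique it, and iterate until they agree."""
    answer = ask(answerer, question)
    for _ in range(rounds):
        critique = ask(
            reviewer,
            f"Question: {question}\n\nProposed answer: {answer}\n\n"
            "Point out errors or likely hallucinations. If it looks correct, reply AGREE.",
        )
        if "AGREE" in critique:
            break
        # Feed the objection back and let the first model revise its answer.
        answer = ask(
            answerer,
            f"Question: {question}\n\nYour earlier answer: {answer}\n\n"
            f"A reviewer objected: {critique}\n\nRevise if the objection has merit.",
        )
    return answer
```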

Hang on. You're reading an implication into this comment that I don't think I'm making. Notice I said average.

The average person who writes code, not a UMC programmer who works for FAANG.

I strongly disagree that LLMs "suck at code". The proof of the pudding is in the eating; for code, that means it compiles and has the desired functionality.

More importantly, even from my perspective of not being able to exhaustively evaluate talent at coding (whereas I can usually tell if someone is giving out legitimate medical advice), there are dozens of talented, famous programmers who state the precise opposite of what you are saying. I don't have an exhaustive list handy, but at the very least, John Carmack? Andrej Karpathy? Less illustrious, but still a fan, Simon Willison?

Why should I privilege your claims over theirs?

Even the companies creating LLMs use LLM-written code for more than 10% of their own internal code bases. Google and Nvidia have papers about them being superhumanly good at things like writing optimized GPU kernels. Here's an example from Stanford:

https://crfm.stanford.edu/2025/05/28/fast-kernels.html

Or here's an example of someone finding 0day vulnerabilities in Linux using o3.

I (barely) know how to write code. I can't do it. I doubt even the average, competent programmer can find zero-days in Linux.

Of course, I'm just a humble doctor, and not an actual employable programmer. Tell me, are the examples I provided not about LLMs writing code? If they are, then I'm not sure you've got a leg to stand on.

TLDR: Other programmers, respected ones to boot, disagree strongly with you. Some of them even write up papers and research articles proving their point.

I looked into PRP (platelet-rich plasma), and it's worse than I thought. It has a decent evidence base and moderate efficacy for orthopedic procedures, such as knee issues and rotator cuff repairs. For cosmetic purposes? A mire of tiny, biased/sponsored studies and a lot of nulls.

The face has a lot of blood circulation. From first principles alone, I doubt just putting more of it back in would help.

I have very high standards for the quality of partner I would marry and entrust to give half their genes to our kids. By virtue of being more attractive, I have a wider pool to work with, and can winnow them with more care. To the extent that hot, smart and successful women demand the same in their partners, I can only work towards making myself better at them all. I wouldn't want to marry a bimbo, what if the kids come out with my looks and her brains?

In other words, I can pretty easily find someone to marry. I could do it tomorrow, my family has had feelers put out by Indian families, here and in the UK, who would put a ring on it. Even by dint of my own efforts, I think about 20% of the women I dated over 3 months (before going steady with one) wanted to marry me, and were serious about it. One of them was a very hot, rich professional model, but she was dumb as rocks. She begged me to stay back in India and marry her. I turned that down. I could probably have taken advantage of her, screwed her and fled like her exes did, but I try not to be an asshole.

I hope that makes my point clear. Investing in my appearance (and I've worked on everything else) by getting work done and working out increases my appeal on the dating market -> increases the pool of women to sleep with/marry -> increases the odds of finding the One. I'm not worried about getting married, that's trivial, I want to marry someone who makes me feel great about that choice.

  1. LessWrong led the charge on even considering the possibility of AI going badly, and on treating that as a concern to be taken seriously. It was the raison d'être for both OpenAI (initially founded as a non-profit to safely develop AGI) and especially Anthropic (founded by former OpenAI leaders explicitly concerned about the safety trajectory of large AI models). The idea that AGI is plausible, potentially near, and extremely dangerous was a core tenet in those circles.

  2. Anthropic in particular is basically Rats/EAs, the company. Dario himself, Chris Olah, a whole bunch of others.

  3. OAI's initial foundation as a non-profit used funds from Open Philanthropy, an EA/Rat charitable foundation. They received about $30 million, which meant something in the field of AI back in the ancient days of 2017. SBF, notorious as he is, was at the very least a self-proclaimed EA and invested a large sum in Anthropic. Dustin Moskovitz, the primary funder of Open Phil, led the initial investment into Anthropic. Anthropic President Daniela Amodei is married to former Open Philanthropy CEO Holden Karnofsky; Anthropic CEO Dario Amodei is her brother and was previously an advisor to Open Phil.

As for Open Phil itself, the best way to summarize is: Rationalist Community -> Influenced -> Effective Altruism Movement -> Directly Inspired/Created -> GiveWell & Good Ventures Partnership -> Became -> Open Philanthropy.

Note that I'm not claiming that Rationalists deserve all the credit for modern AI. Yet a claim that the link between them is as tenuous as that between ice cream and drowning is farcical. Any study of the aetiogenesis of the field that ignores Rat influence is fatally flawed.

I don't follow the AI developments terribly closely, and I'm probably missing a few IQ points to be able to read all the latest papers on the subjects like Dase does, so I could be misremembering / misunderstanding something, but from what I heard capital 'R' Rationalism has had very little to do with it, beyond maybe inspiring some of the actual researchers and business leaders.

Yudkowsky himself? He's best described as an educator and popularizer. He hasn't done much in terms of practical applications, beyond founding MIRI, which is a bit player. But right now, leaders of AI labs use rationalist shibboleths, and some high-ranking researchers like Neel Nanda, Paul Christiano and Jan Leike (and Ryan Moulton too, he's got an account here to boot) are active users on LessWrong.

The gist of it is that the founders and early joiners of the big AI labs were strongly motivated by their belief in the feasibility of creating superhuman AGI, and by their concern that there would be a far worse outcome if someone else, who wasn't as keyed into concerns about misalignment, got there first.

As for building god, I think I heard that story before, and I believe it's proper ending involves striking the GPU cluster with a warhammer, followed by several strikes with a shortsword. Memes aside, it's a horrible idea, and if it's successful it will inevitably be used to enslave us

You'll find that members of the Rationalist community are more likely to share said beliefs than the average population.

Yud had a whole institute devoted to studying AI, and he came up with nothing practical. From what I heard, the way the current batch of AIs work has nothing to do with what he was predicting, he just went "ah yes, this is exactly what I've been talking about all these years" after the fact.

Yudkowsky is still more correct than 99.9999% of the global population. He did better than most computer scientists and the few ML researchers around then. He correctly pointed out that you couldn't just expect that a machine intelligence would come out following human values (he also said that it would understand them very well, it just wouldn't care, it's not a malicious or naive genie). Was he right about the specifics, such as neural networks and the Transformer architecture that blew this wide open? He didn't even consider them, but almost nobody really did, until they began to unexpectedly show promise.

I repeat, just predicting that AI would reach near-human intelligence (not that they're not already superintelligent in narrow domains) before modern ML is a big deal. He's on track when it comes to being right that they won't stop there, human parity is not some impossible barrier to breach. Even things like recursive self-improvement are borne out by things like synthetic data and teacher-student distillation actually working well.

In any case when I bring up rationalism's failure, I usually mean it's broader promises of transcending tribalism, systematized winning, raising the sanity waterline, and making sense of the world. In all of these, it has failed utterly.

Anyone who does really well in a consistent manner is being rational in a way that matters. There are plenty of superforecasters and Quant nerds who make bank on being smarter and more rational given available information than the rest of us. They just don't write as many blog posts. They're still applying the same principles.

Making sense of the world? The world makes pretty good sense all considered.

It makes sense, because my feelings toward rationalism and transhumanism are quite similar. Irreconcilable value differences are irreconcilable, though funnily enough most transhumanists, yourself included, seem like decent blokes.

Goes both ways. I'm sure you're someone I can talk to over a beer, even if we vehemently disagree on values.

(The precise phrase "irreconcilable values difference" is a Rationalist one, it's in the very air we breathe, we've adopted their lingo)

I presume you can't buy a Bugatti either. It's still an option that real living people can get for cash.

There's nothing standing in the way of Waymo rolling out an ever wider net. SF just happens to be an excellent place to start.

All well and good, can't fault that.

Yet I must note that you accuse me of some kind of misunderstanding, and have yet to clarify what possibly could have made you say that.

Since you claim that nature is a very ill defined concept (which I agree with), then what was the point of your previous comment? Why even point to something being natural?

Saying that hindering reproduction is the same as helping reproduction is your misunderstanding, not mine.

I invite you to show me where I can be said to have made this "misunderstanding". An earlier comment of mine explicitly said I don't want puberty blockers, and if my future kids did, I'd do everything I could to stop them.

As you've previously mentioned, there is at least the possibility of having residency requirements waived if you can show experience and equivalency.

The waivers for IMGs in under-served areas don't specify a specialty, but I presume that at least some of the hospitals or practices looking for doctors would be in need of psychiatrists.

I'd be happy to take my chances, and I know I'll have to work for it. If I can't make the cut, then there's nobody to blame but myself. I'd have chased my dreams and failed, instead of nursing bitterness over the fact that factors beyond my power prevented me from trying.

I've never had an employer or customer put something inside me for even a moment, let alone nine months.

And this matters why? What qualitative difference does it make? Has your employer ever needed you to put in earphones? Or go through a health checkup?

If you don't like the terms and conditions, don't sign the contract.

I wouldn't do my current job for free. But I also enjoy talking about it - and find no shame it doing so - with my friends, family, and other acquaintances. Sometimes I have stressful days, but I don't end every day or week thinking, "A what a fucking emotional toll I had to pay!" In fact, I'm quite excited about my job because it lets me do all these other cool things with friends and families - and I feel like I really am creating some tangible value on a day to day basis.

Jobs vary, from the fulfilling to drudgery, from the stressful to the relaxed. Mine certainly has its ups and downs, and it isn't all things I'd already do for free online if someone were to politely ask. Getting pregnant and making $50k in 9 months strikes me as a much better deal than having to break your back laboring for the same sum, or having it represent lifetime earnings, as would be the case for many surrogates in the Third World.

Someone having to work at McDonald's after their PhD in Underwater Basket Weaving failed to net them the jobs they dreamed of is obviously unhappy about it, and probably embarrassed to disclose it to friends and family. There are many low prestige jobs out there. Some of them even pay a premium to account for the fact it's not most people's first choice.

I've only read (most of) Worm.

But funny you should mention that, because I'm writing a novel where cyborgs are a mainstay (the protagonist is humanoid, but only because he hasn't been pushed further), and the upcoming chapter has one who is basically a pair of frontal lobes in a crab-shaped shell.

You'll see clear Worm inspiration in my work, though I aim for much more of what I perceive as 'realism' in terms of societal and governmental reaction than Worm does with its desire to have the protagonists punch people on the streets. (I'm aware of in-universe justifications, I find them lacking)

Wildbow doesn't get nearly as wild as I do.