
Small-Scale Question Sunday for July 9, 2023

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


Two months after my original application, and two weeks after they asked me for some rather absurd documents that I had to scramble to provide, including one I was supposed to have legally turned in to my government but luckily still had a random scan of lying around, the utter radio silence from the GMC had me mildly concerned.

I kept checking my inbox and their website for weeks, hoping they'd at least inform me of my progress.

Imagine my surprise when I randomly checked today and saw that I'm now a fully registered doctor with a licence to practise. I feel both happy and underwhelmed; you'd think that was at least worth a congratulatory email!

But there it stands: proof that despite fucking up big time in my choice of med school, being incorrigibly lazy and depressed, and then having to teach myself all the medicine I snoozed through, six months of working my ass off shows I meet the standards expected of a respectable First World doctor. It's been a long time coming, and thank you to everyone who overwhelmed me with support along the way, even during the dark days when this all seemed a distant dream, the hallucination of an overworked and overwhelmed intern who felt his white coat hung heavy as a shroud on an impostor's frame.

I did this. It was all me. Now onto greater things!

Can you move straight away? How does it work? Also, if you think AI is going to replace all jobs in a few years, what makes you think the US, which famously loathes (or is at least less keen on) Euro-style social democracy, will be the best place to be? Your best bet might be a slightly-poorer-but-still-pretty-good continental European social democracy.

It means I'm eligible to start applying for jobs, at the very least, and right on the day when I spent six hours running from one end of the city I live in to the other, interviewing at as many places as I could (I need more money for my biryani addiction).

Also, if you think AI is going to replace all jobs in a few years, what makes you think the US, which famously loathes (or is at least less keen on) Euro-style social democracy, will be the best place to be?

  1. Wealth is still a form of insulation from this danger. I'd rather lose my job with several hundred k in the bank than as someone who had to struggle to save a quarter as much. If a million dollars' worth of cocaine lands in my backyard, you bet I'm trying to get an investor visa somewhere.

In the ideal case, I can invest the money in companies that will make bank, like Nvidia. I'm already groaning at the memory of pestering my dad, a few months ago, to buy Nvidia stock, predicting the price would surge. It did, a few weeks later. Turns out that all the wisdom in the world means nothing if you don't personally have the assets to make the most of it. Even if I lose my job and UBI isn't an option, I can hope to live off the money using a safe withdrawal rate or dividends (rough arithmetic sketched below).
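For concreteness, a minimal sketch of the withdrawal arithmetic; the 4% rate is the oft-cited rule of thumb from the Trinity study, and the portfolio size is a hypothetical figure, not a number from this post:

```python
# Rough illustration of the safe-withdrawal-rate idea: the annual income a
# portfolio can sustain. Both figures below are illustrative assumptions.
portfolio = 500_000  # hypothetical savings, USD
swr = 0.04           # assumed safe withdrawal rate, ~4%/year

annual_income = portfolio * swr
print(f"Sustainable draw: ~${annual_income:,.0f}/year")  # ~$20,000/year
```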

  2. US doctors are far more militant and protective of their own, which is why mid-level creep is an annoyance rather than an existential risk to them. They will likely fight to the bitter end, and unlike the UK, where the NHS is a white albatross that can't be slain, only slowly degraded and propped up by the blood of the doctors in it, the US has plenty of money to make suboptimal choices in its healthcare system several years after the rest of the world bites the bullet. You can argue it's been doing that for several decades now.

  3. If the UK were an American state, it would be the second poorest. The majority of Western countries that are unabashed welfare states simply don't have the money for UBI, or the industrial base to capitalize on automation. Maybe Germany has the latter (is it a welfare state? I don't think it quite counts), or perhaps Norway, with its sovereign wealth fund and oil.

Even if I'm not literally kicked out of a country or starving to death, I'd rather not be poor if I can help it.

I feel like "slightly poorer" is a rather understated description of the difference in wealth between the US and most of Western Europe.

India, for one, has neither the money for UBI nor a large manufacturing base. China is far more likely to leverage the latter itself.

  4. The US has nigh-unassailable military might, and is almost certainly the one leading the charge into a fully automated economy, or will closely follow. To the extent that I expect geopolitical turmoil from such a momentous transition, it is the safest place to be from what I can tell. This is a lesser point than the others, but still important. India absolutely fails on this metric.

If anyone disagrees, feel free to tell me; I prefer being well calibrated in my beliefs, especially when I'd like to be convinced that circumstances won't be as dire. (Who am I kidding? This is the Motte; vociferous dissent on any given topic is a given, heh.)

Of course, this accounts for futures where AGI doesn't outright kill us. I've gone from being about 70% sure I'm going to die at the hands of such an entity to a mere 40% today. Since I'm not willing to resort to firebombing data centers, I am largely helpless in the worst case, and can only prepare for the ones where my marginal effort makes a difference.

Now, if we get a Utopia, it's all moot, but even getting there will hardly be smooth sailing.

Huh. I find the fact that nobody contested any of this mildly concerning in itself.

People, contrary to appearances, I wasn't born a Doomer; quite the opposite. I spent the majority of my life expecting to see technological marvels that would utterly change my standard of living and lead to a bright future.

I still do; I just think the process also carries a substantial risk of killing all of us, or at least making life very difficult for me.

I want to have reasons to hope. Convince me I'm wrong: my mind isn't so open that my brains fall out, but I'm eager to know if I'm making fundamental errors.

Our current training procedures seem to inculcate our ideas about "ought" roughly as well as they do our ideas about "is", so even if in theory one could create a paperclip-maximizer AGI, in practice perhaps whatever we eventually make with superhuman intelligence will at least have near-human-ideal ethics.

I'm not sure if this gets us even a full order of magnitude below your 40%, though. Intelligence can bootstrap via "self-play", whereas ethics seems to have come from competitive+cooperative evolution, so we really might see the former foom to superhuman while the latter remains stuck at whatever flaky GPT-7 levels we can get from scraped datasets, and for all I know at those levels we just get "euthanize the humans humanely", or at best "have your pets spayed or neutered".

Part of the reason I went from a p(doom) of 70% to a mere 40% is that our LLMs seem to almost want to be aligned, or at the very least remain non-agentic unless deliberately wrapped in systems akin to AutoGPT, useless as that is today.

It didn't drop further because, while the SOTA is quite well aligned, if overly politically correct, there's still the risk of hostile simulacra being instantiated within a model, as in Gwern's Clippy story, or of some malignant human idiot trying to run something akin to ChaosGPT on an LLM far superior to modern ones. And of course there's the left-field possibility of new types of models that are both effective and less alignable.

As it stands, they seem very safe, especially after RLHF, and I doubt GPT-5 or even GPT-6 will pose any risk.