self_made_human
C'est la vie
I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken as granted for someone in the profession of saving lives.
At any rate, I intend to live forever or die trying. See you at Heat Death!
Friends:
I tried stuffing my friends into this textbox and it really didn't work out.
Raising the Price of Admission
I find myself immensely frustrated by Trump's recent moves to cut down on immigration, especially replacing the EB5 with his new golden ticket scheme.
I've always wanted to move to the States, but by virtue of being Indian, and in a profession with strict regulatory requirements, it was never easy. As of right now, I can't sit for the USMLE even if I wanted to. I believe that's a problem my uni could solve, but unfortunately I'm locked into the UK for at least 3 more years and don't have the time to breathe down their necks.
If I wanted to spend $1 million for the old EB-5, I'd probably have to sell a significant fraction of my familial assets, and they're not mine yet; I have a sibling and parents to think of. The fact that we even have that much, when my father made $50k at the peak of his career as an OBGYN surgeon, represents a lifetime of my parents being frugal and living beneath their means. My dad started out from scratch, a penniless refugee, and all his life he worked tirelessly to make sure his kids wouldn't have to work as hard as he did. To a degree, he's succeeded. I make nearly as much as he did, but that's by virtue of grinding my ass off to escape India. I had to settle for the UK, whereas I'd much rather be in the States.
The EB-5 program already functioned as a high barrier to entry, requiring not just capital but also the ability to invest in ways that met the job creation criteria. By raising the price to $5 million, the U.S. is effectively signaling that it no longer wants "entrepreneurial upper-middle-class" immigrants - it only wants the ultra-wealthy. The problem is that the truly ultra-wealthy already have multiple options. The US is nearly unique in taxing its citizens on their worldwide income, and has heavier taxes overall when compared to some of the alternatives. They can buy citizenship in other countries (Malta, St. Kitts, etc.), take advantage of residence-by-investment programs in the EU, or just maintain an arsenal of visas that allow them to live anywhere they please. The U.S. loses out on exactly the kind of people who were willing to put down roots and contribute significantly to the economy while still needing the opportunities that U.S. citizenship provides.
If Trump (or any administration) wanted a truly meritocratic system, they should be auctioning off a limited number of economic immigrant slots each year. That would at least allow market forces to determine the actual value of U.S. residency. A points-based system, like Canada’s or Australia’s, could also make more sense: prioritizing skilled professionals over sheer wealth. A million already strongly filters would-be immigrants. Five is exorbitant, especially if it's a flat sum.
(Let's leave aside the other requirements, such as running a business that creates a certain number of jobs)
Simple price elasticity makes us expect that increasing the price of a good fivefold will not 5x the revenue; in expectation, it'll decrease it. If Trump prides himself on being a businessman, this should be clear to him.
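A minimal sketch of that claim, assuming a constant-elasticity demand curve (the elasticity value and market constant are purely illustrative):

```python
# Toy constant-elasticity demand: demand(p) = A * p^(-eps), so
# revenue(p) = p * demand(p) = A * p^(1 - eps). For any elasticity
# above 1, raising the price lowers total revenue. Numbers illustrative.
A = 10_000   # hypothetical market-size constant
eps = 1.5    # assumed price elasticity of demand (elastic, > 1)

def revenue(price: float) -> float:
    return price * (A * price ** -eps)

for price_m in (1, 5):  # old $1M EB-5 vs. the new $5M price tag
    print(f"${price_m}M: relative revenue = {revenue(price_m) / revenue(1):.2f}")
# $1M: relative revenue = 1.00
# $5M: relative revenue = 0.45 (quintupling the price roughly halves the take)
```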
Even the abolition of birthright citizenship strikes me as a violation of the American ethos. It was certainly being abused, anchor babies being a case in point, but when even green cards are this hard to get, prospective skilled migrants greatly appreciate the peace of mind of knowing their kids are entitled to citizenship.
End it for illegal immigrants if you have to, but why lump in everyone else there legitimately? I wouldn't mind people who use their visitor visas to pull a fast one being debarred too, but I look at the current state of affairs with great dismay.
At any rate, I'm not an American. I do wish I was, and my impression is that most of you would be happy to have me. Well, I'm used to life being rough, and the UK isn't the worst place I could be. I still think that even from a purely monetary point of view, this is a bad plan.
I hope I've made a decent case for why you're not getting much out of filtering immigrants for quality at that price point; the ones who are that loaded are probably not nearly as keen. They're easily Global Citizens for whom nationality is a formality.
Well, I'm still going to see if I can figure out the USMLE thing by the time my training in the UK ends, but there must be thousands of skilled immigrants in a similar boat who have just noticed a rather significant leak in it. Then they're confronted by a sign at Ellis Island saying that just any ocean-crossing vessel won't do; they need a yacht. We don't deserve to be clubbed in with those who break the rules.
I'm surprised that more people here aren't talking about Scott ripping off the bandaid in his latest series of posts, which very much take an IQ-realist and pro-Lynn stance, and without really mincing words about it.
Scott has tip-toed around the topic in the past, largely playing it safe. There was some minor controversy almost half a decade ago, when his "friend" (one who had ended up marrying Scott's enbie ex Ozzy) leaked private correspondence between the two of them, in which Scott explicitly acknowledged that he believed in population-wide IQ differences but felt he couldn't speak up about it. Going back even further, on his now-defunct but archived LiveJournal, he outlines his harrowing experience doing charity work in Haiti, where the sheer lack of common sense and the perverse, self-defeating antics of the populace left him speechless.
I note (with some pleasure) that Scott raises some of the same points I've been on record making myself: Namely that there's a profound difference between a person who is 60 IQ in a population where that's the norm, versus someone who is 60 IQ due to disease in a population with an average of 100.
What's the wider ramification of this? Well, I've been mildly miffed for a while now that the Scott of ACX wasn't quite as radical and outspoken as in his SSC days, but now that he's come out and said this, I sincerely doubt that there are any Dark and Heretical ideas he holds but is forced to deny or decline to defend. It's refreshing, that's what it is. He might not particularly delve into the ramifications of what this might mean for society at large, but he's not burying the lede, and I have to applaud that. It might be too early to celebrate the death of wokeness, but I think the more milquetoast Scott of today being willing to say this matters a great deal indeed.
The same argument applies for signing up for experimental heart surgery.
I have nothing useful to add, since I hadn't heard of the franchise till about 4 hours ago, when someone was lauding it as a resurgence in Chinese soft power in my Twitter feed.
That being said, it gives me hope that I'll see some proper high-budget Xianxia adaptations, and soon-ish. Hopefully adaptations of the good novels!
Excellent comment. One thing I'd like to add is that opting out of vaccination is, to an extent, mooching off the commons.
Herd immunity doesn't work if each parent looks at the existing vaccination rates, reasons that their kids will be fine without vaccination because "everyone else" does it, and thus defects.
At this point, I don't even know what an AGI is. The word has just been semantically saturated for me.
What I do know, based on having followed the field since before GPT-2 days, and personally fucked around since GPT-3, is that for at least a year or so, SOTA LLMs have been smarter and more useful than the average person. Perhaps one might consider even the ancient GPT 3.5 to have met this (low) bar.
They can't write? Have you seen the quality of the average /r/WritingPrompts post?
They can't code? Have you seen the average code monkey?
They can't do medicine/math/..? Have you tried?
The average human, when confronted with a problem outside their core domain of expertise, is dumb as rocks compared to an LLM.
I don't even know how I managed before LLMs were a thing. It hasn't been that long, I've spent the overwhelming majority of my life without them. If cheap and easy access to them were to magically vanish, my willingness to pay to get back access would be rather high.
Ah, it's all too easy to forget how goddamn useful it can be to have access to an alien intelligence in one's pocket. Even if it's a spiky, inhuman form of intelligence.
On the topic of them being cheap/free, it's a damn shame that AI Studio is moving to API access only. Google was very flustered by the rise of ChatGPT and the failure of Bard; it was practically begging people to give Gemini a try instead. I've been pleasantly surprised and impressed since the 1.5 Pro days, and I'm annoyed that their gambit has paid off: demand even among normies and casual /r/ChatGPT users increased to the point that even a niche website meant for power users got saturated.
You're breaking so many damn rules in one comment I'm mildly impressed. You have not proactively (or on demand) produced any evidence to suggest a conspiracy of the Jews. Or that they have anything to gain from weakening the British state. Inflammatory, boo-outgroup, throw it all in, toppings are free with this sandwich.
You've been warned in the past, and I'm giving you a short ban so you know they have teeth. Even our most fervent anti-semites hustle to meet posting standards, and I'd advise you do so too.
Nota bene: I am old. You will get different perspectives on this.
That is by far the biggest downside of a baseline human body, and why I don't want to be stuck in one even if I like mine.
It will, despite our best medicines, decay and fail you. Maybe our drugs and treatments will get better, and we can keep people healthy indefinitely. But even then, I want things that no human body constrained by biology will be able to provide.
In other news: a streamer with deep pockets and a love of AI has decided to have Claude play Pokemon.
Well, if that's what you want to call an Anthropic researcher who decided to make their experiment public.
"Claude Plays Pokémon continues on as a researcher's personal project."
https://x.com/AnthropicAI/status/1894419042150027701
How should we interpret this? On the simplest level, Claude is struggling with spatial modeling and memory. It deeply struggles to interpret anything it's seeing as existing in 2D space, and has a very hard time remembering where it has been and what it has tried. The result is that navigation is much, much harder than we would anticipate. Goal-setting, reading and understanding dialogue, and navigating battles have proven trivial, but moving around the world is a major challenge.
This reminds me of a very good joke:

A man is playing chess against his dog in the park. A woman walks by and says "holy crap, your dog can play chess?! That's amazing! What a brilliant dog!"
The man says "you think my dog is brilliant? Pffft. Hardly. He's pretty dumb, I've won 19 games out of the 20 we've played."
Jesus Christ, some people won't see the Singularity coming until they're being turned into a paperclip.
Nuh uh, this machine lacks human internal monologues and evidence of qualia, you insist, as it harvests the iron atoms from your blood.
At this point, the goalposts aren't just moving, they're approaching relativistic speed headed straight out of the galactic plane.
This AI can strategize in battle, understand complex instructions, and process information, BUT it struggles with spatial reasoning in a poorly-rendered 2D GameBoy game, therefore it's not intelligent.
It wasn't designed to play Pokémon. It still does a stunningly good job when you understand what an incredibly difficult task that is for a multimodal LLM.
Last, and most controversial: AI needs abstract "concepts." When humans reason, we often use words - but I think everyone's had the experience of reasoning without words. There are people without internal monologues, and there are the overwhelming numbers of nonverbal animals in the world. All of these think, albeit the animals think much less ably than do humans. Why, on first principles, would it make sense for an LLM to think when it is built on a human capability absent in other animals? Surely the foundation comes first? This is, to my knowledge, completely unexplored outside of philosophy (Plato's Forms, Kant's Concepts, to name a couple), and it's not obvious how we could even begin training an AI in this dimension. But I believe that this is necessary to create AGI.
This is the classic, tired, and frankly, lazy argument against LLMs. Yes, LLMs are trained on massive datasets of text and code, and they predict the most likely output based on that training. But to say they are merely "next likely text" generators is a gross oversimplification.
It's like saying humans are just “meat computers firing neurons". That is trivially true, but I'm afraid you're letting the "just" do all the heavy lifting.
The power of these models comes from the fact that they are learning statistical correlations in the data. These correlations represent underlying patterns, relationships, and even, dare I say, concepts. When an LLM correctly answers a complex question, it's not just regurgitating memorized text. It's synthesizing information, drawing inferences, and applying learned patterns to new situations.
LLMs have concepts. They operate in latent spaces where those concepts are represented as vectors of floating-point numbers. These representations can often be mapped cleanly, even linearly, onto terms that make sense to humans, albeit with some difficulty. They can be analyzed, manipulated, and even visualized. I repeat, they make intuitive sense. You can even perform operations on these vectors like [King] - [Male] + [Female] = [Queen]. That isn't just a word trick; it's evidence of abstracted relational understanding.
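If you want to see that arithmetic for yourself, here's a minimal sketch using gensim's pretrained GloVe vectors (old-school static word embeddings rather than an LLM's latent space, but the same principle):

```python
# pip install gensim
import gensim.downloader as api

# 50-dimensional GloVe vectors trained on Wikipedia (~66MB download).
model = api.load("glove-wiki-gigaword-50")

# king - man + woman: the nearest remaining vector should be 'queen'.
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# [('queen', 0.85...)]
```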
If you're convinced, for some reason, that tokens aren't the way to go, then boy are AI researchers way ahead of you. Regardless, even mere text tokens have allowed cognitive feats that would have made AI researchers prior to 2017 cream in their pants and weep.
There really isn't any pleasing some people.
Edits as I spot more glaring errors:
Second, AI needs more than one capacity. LLMs are very cool, but they only do one thing - manipulate language. This is a core behavior for humans, but there are many other things we do - we think spatially and temporally, we model the minds of other people, we have artistic and other sensibilities, we reason... and so on.
Even the term "LLM" for current models is a misnomer. They are natively multimodal. Advanced Voice for ChatGPT doesn't use Whisper to transcribe your speech to text, the model is fed raw audio as tokens and replies back in audio tokens. They are perfectly capable of handling video and images to boot, it's just an expensive process.
While the obvious argument around Tay was whether it was racist or dangerously based, a more serious concern is: should an intelligence allow itself to get swayed so easily by obviously biased input? The users trying to "corrupt" Tay were not representative and were not trying to be representative - they were screwing with a chatbot as a joke. Shouldn't an intelligence be able to recognize that kind of bad input and discard it? Goodness knows we all do that from time to time. But I'm not sure we have any model for how to do that with AI yet.
There appear to be several similar AI-related leprechauns: the infamous Microsoft Tay bot, which was supposedly educated by 4chan into being evil, appears to have been mostly a simple ‘echo’ function (common in chatbots or IRC bots) and the non-“repeat after me” Tay texts are generally short, generic, and cherrypicked out of tens or hundreds of thousands of responses, and it’s highly unclear if Tay ‘learned’ anything at all in the short time that it was operational
Besides, have you ever tried to get an LLM to do things that its designers have trained it, through RLHF or Constitutional AI, to not do? They're competent, if not perfect, at discarding "bad" inputs. Go ahead, without looking up an existing jailbreak, try and get ChatGPT to tell you how to make meth or sarin gas at home.
It is plausible, though obviously not possible to confirm, that ClaudeFan has updated the model some to attempt to handle these failures. It's unclear whether these updates are general bugfixes
I don't think that Anthropic, strapped for compute as it is, is going to take a fun little side gimmick and train their SOTA AI to play Pokémon. If it was just some random dude with deep pockets, as you assumed without bothering to check, then good luck getting a copy of Claude's weights and fine-tuning them. At best they could upgrade the surrounding scaffolding to make it easier on the model.
The AI will struggle to get past its training and see the question de novo, as a human would be able to.
There is a profound difference between "struggling" to do so, and being incapable of doing so.
The ideological amalgamation of the American Revolution was a one-shot thing; it worked as well as it did the first time around due to ignorance in the form of an absence of specific elements of common knowledge. Now that those specific elements of common knowledge exist, large portions of the project no longer work and cannot be made to work again.
What do you think the missing "common knowledge" in question is? The first thing that would come to my mind is HBD, and I think it's a bit of a stretch to think that the Founding Fathers didn't think that cognition could vary between races, or even between individuals. I presume that's not it then.
Overheard at my hospital:
Senior psychiatrist, about 15 years older than me: "I need to get a breast reduction done, look at me, I'm tiny, and I swear a third of my weight is up here."
Then she went on about how it wouldn't be covered by the NHS, and it would be ludicrously expensive to get done privately here, hence her plan to fly to Turkey and get cut down to size there.
The shame! The squandering of God-given gifts! An imminent call for a diagnosis of severe body dysmorphia with threat of self-mutilation requiring involuntary commitment! A man can only sigh and pour one out.
I still don't know if people say Er Gen is good unironically or not, though your comment updates me towards the former.
I would say Reverend Insanity, even in its unfinished state, is a 10/10 novel. Contender for best novel I've ever read in fact. I'm trying to hold off re-reading it until my memory fades enough for it to feel fresh again.
Forty Millenniums of Cultivation is a good one, an 8/10 IMO. It was good enough for me to get frustrated reading awful MTL and then track down raws and translate them with modern SOTA LLMs that do a better job.
I never got very far into Lord of the Mysteries, but by all accounts it's supposed to be very good.
I'm thinking of biting the bullet on plastic surgery. I've always had chubby cheeks, without that much definition even when I was 10 kilos lighter, and buccal fat removal was something I'd been eyeing for years. I just didn't really have the money or the impetus to go for it back then.
I had an appointment with a reputable plastic surgeon today, and started off with a debate on whether he could exclude body dysmorphia in my case lol. I asked whether, in a psychiatric context, any degree of dissatisfaction with one's physical appearance that involved attempts to modify it would count. Why doesn't going to the gym or dieting to lose weight count? Besides, you'd need significant impairment in psychosocial functioning to warrant the diagnosis. The DSM-5 includes, under BDD:*
Preoccupation with one or more perceived defects or flaws in physical appearance that are not observable or appear slight to others
While the roundness of one's face is a subjective thing, it's certainly not non-observable.
At any rate, he pretty successfully upsold me, explaining that I had hypertrophic masseters, which would make mere buccal fat removal not have very significant effects on the overall contour of the face. He also explained that instead of discarding buccal fat, as is the norm, he finds benefit from it being re-injected below the eyes and on the chin. To help tone down the masseters further, he suggested botox. I'm not particularly keen on semi-annual injections into my face, but I think it's worth a shot.
Anyone undergo anything similar?
*If it's unclear, he was taking the piss. I don't have body dysmorphia, it's a high bar to cross.
I'm going to play Devil's Doctor here:
You underwent a drastic change in your personality as a consequence of a hormonal surge that was out of your control. That's 'normal'. It's puberty.
Yet the person you became isn't the same person as the one before. I mean, puberty hit me hard, but I never felt as if my values or goals changed because of it (beyond being even more eager for the company of the fairer sex).
This seems to me to be analogous to a person who, for their entire life, had sworn off addictive substances, but ended up on benzos or opioids for Medical Reason and found themselves hooked, and are now unwilling to try and become sober.
Why should we so strongly privilege puberty because it's "natural"? Many things are natural, such as 50% infant mortality rates, dying of a heart attack at 50 or getting prostate cancer by 80.
Nature, a blind and indifferent force, cares nothing for our individual well-being or our carefully constructed notions of self. To equate "natural" with "good" or "desirable" is a fundamental error, a logical fallacy we often fall prey to.
In the UK, the laws around consent for minors are relatively simple. Past the age of 16, they're assumed to be competent to consent to or decline medical procedures until proven otherwise. Below that age, there's no strict cut-off, if they can prove to their clinician that they are able to weigh the risks and benefits, they are able to consent or withhold it, and even override parental demands.
Someone who wishes to be the opposite sex is someone I pity. Medical science as it currently stands can't provide them more than a hollow facsimile of that transition; a true one is Singularity-complete, based on my knowledge of biology. Even so, the desire is one I consider as valid as any.
If they understand that:
A) Puberty blockers have risks and might not be truly reversible if they change their mind.
B) It won't solve all their problems, it won't physically make them indistinguishable from their desired sex.
Then I see no reason to declare that they're making a mistake. By the values they hold, it's the right decision. If they're forced to pass through puberty, they might desist, or they might spend their life wracked with regret that they didn't pull the trigger (hopefully not literally). You can pass far easier before testosterone wracks your body. It's a helluva drug/hormone.
A lot of life-changing decisions can be ones that change the person making them irrevocably, and into a person who would affirm them in retrospect. But I would yell at someone who suggested that couples who are iffy about childbirth be forced to have a child in the hopes that it'll change their minds, fix their marriage, or serve some other well-intentioned goal. Or if we suddenly decided that everyone should be made to try alcohol and cigarettes because the kind of people who try them tend to stick with them.
We're forced to deal with a messy world that doesn't always readily cough up pathways to our desires when we ask. I'm all for overcoming biology, and I think that people who understand what they're getting into are entitled to ask for even imperfect solutions.
Want to be more muscular? Try tren, if you know what you're in for. Want to lose weight? Take ozempic, while keeping an eye on your eyesight and pancreas. Want to be the other sex? This is the closest we can get you today.
This comment is a travesty. If it wasn't clear, it is the majority of parents opting in to scheduled vaccinations that creates the commons for you to free-ride off.
Even in a purely disinterested cost-benefit analysis, the odds of your child getting an autoimmune disease from vaccines is so low that the general benefit of them getting sick less often outweighs it. The number of people avoiding vaccination isn't at the point where you're benefiting from opting out, or pointing to others doing so as an excuse. The majority of vaccines are net positive in expectation, regardless of what others do.
Meta-rationality is a hard art to practice. You point to a minority of people hitting defect as license to defect just as hard.
I don't trust your calculations
You can't trust what I haven't presented. I could do so, and I strongly expect to be correct, but I already do enough trawling of medical literature when I'm being paid for it.
It's your child, and I have some respect for your right to make decisions on their behalf, even if I think they're bad decisions.
Furthermore, my children can simply get the shots later in life, when they are grown and have a much larger body mass with which to accommodate, after development has been mostly completed.
My man, you're going from wrong to confused. There's a good reason you don't see doctors or nurses pull out dose calculators to account for body weight when giving any vaccine I can think of. It's because it doesn't matter. Whether little Tommy is 4'6" and 35 kilos or 4'9" and 45 makes next to no difference, and this holds true for anyone past the size of a premature infant on death's door, all the way up to a land whale who needs a mobility scooter. Vaccines aren't like paracetamol; the dose-response curves are VERY different when it's the immune system we're talking about. Someone with a peanut allergy isn't twice as likely to die if you give them two peanuts versus one.
The eczema finally went away, and there's no way to know the counterfactual, but I think if we were on schedule I'd still be seeing rashes all over the back and legs and ankles
You're not doing a good job at probabilistic reasoning, but an eczematous rash is a minor vaccine reaction, and of very little consequence. If it was life threatening, I presume you'd have pointed that out, and even then, you'd be better off consulting your pediatrician about whether they could narrow it down to a particular constituent of the two common rotavirus vaccines in the States (fetal bovine serum or porcine circovirus, if I had to guess from looking at composition), and take precautions when administering future vaccines with similar compositions. Believe it or not, most doctors would be happy to answer those questions and offer reassurance if warranted, especially if you're paying them.
Moderately interesting news in AI image gen:
It's been a good while since we've had AI chat assistants able to generate images on user request. Unfortunately, for about as long, we've had people being peeved at the disconnect between what they asked for, and what they actually got. Particularly annoying was the tendency for the assistants to often claim to have generated what you desired, or that they edited an image to change it, without actually doing that.
This was an unfortunate consequence of the LLM (the assistant persona you speak to) and the actual image generator (which spits out images from prompts) being two entirely separate entities. The LLM doesn't have any more control over the image model than you do when running something like Midjourney or Stable Diffusion. It's sending a prompt through a function call, getting an image in response, and then trying to modify prompts to meet user needs. Depending on how lazy the devs are, it might not even be 'looking' at the final output at all.
The image models, on the other hand, are a fundamentally different architecture, usually diffusion-based (Google a better explanation, but the gist is that they start from a sample of random noise and iteratively refine it until it resembles the desired image), whereas LLMs use the Transformer architecture. The image models do have some understanding of semantics, but they're far stupider than LLMs when it comes to understanding finer meaning in prompts.
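To make that division of labor concrete, here's a toy sketch of the old two-model plumbing. Everything in it is hypothetical: the endpoint URL, the function names, and the prompt-rewriting step.

```python
import requests

def llm_rewrite_prompt(user_message: str) -> str:
    # Stand-in for the chat LLM's tool call: in reality the assistant model
    # emits a function call containing a prompt string, nothing more.
    return f"High quality digital art of {user_message}"

def generate_image(prompt: str) -> bytes:
    # The separate diffusion model lives behind its own endpoint. The LLM has
    # no more access to its internals than you do typing into Midjourney.
    resp = requests.post("https://example.invalid/diffusion", json={"prompt": prompt})
    return resp.content  # raw image bytes

def handle_user_request(user_message: str) -> bytes:
    prompt = llm_rewrite_prompt(user_message)
    image = generate_image(prompt)
    # Lazy implementations return right here, without ever 'looking' at the
    # image, which is why assistants so often claimed edits they never made.
    return image
```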
This has now changed.
Almost half a year back, OpenAI teased the ability of their then unreleased GPT-4o to generate images natively. It was the LLM (more of a misnomer now than ever) actually making the image, in the same manner it could output text or audio.
The LLM doesn’t just “talk” to the image generator - it is the image generator, processing everything as tokens, much like it handles text or audio.
Unfortunately, we had nothing but radio silence since then, barring a few leaks of front-end code suggesting OAI would finally switch from DALLE-3 for image generation to using GPT-4o, as well as Altman's assurances that they hadn't canned the project on the grounds of safety.
Unfortunately for him, Google has beaten them to the punch. Gemini 2.0 Flash Experimental (don't ask) has now been blessed with the ability to directly generate images. I'm not sure if this has rolled out to the consumer Gemini app, but it's readily accessible in their developer preview.
First impressions: It's good.
You can generate an image, and then ask it to edit a feature. It will then edit the original image and present the version modified to your taste, unlike all other competitors, who would basically just re-prompt and hope for better luck on the second roll.
Image generation just got way better, at least in the realm of semantic understanding. Most of the usual giveaways of AI-generated imagery, such as butchered text, are largely solved. It isn't perfect, but you're looking at a failure rate of 5-10% as opposed to >80% when using DALLE or Flux. It doesn't beat Midjourney on aesthetics, but we'll get there.
You can imagine the scope for chicanery, especially if you're looking to generate images with large amounts of verbiage or numbers involved. I'd expect the usual censoring in consumer applications, especially since the LLM has finer control over things. But it certainly massively expands the mundane utility of image generation, and is something I've been looking forward to ever since I saw the capabilities demoed.
Flash 2.0 Experimental is also a model that's dirt cheap on the API, and while image gen definitely burns more tokens, it's a trivial expense. I'd strongly expect Google to make this free just to steal OAI's thunder.
(Mildly) Interesting observation:
Recently, people on Twitter have claimed that Claude 3.5 Sonnet is stunningly good at guessing a user's ethnic background from any substantive amount of example text, even examples which have no glaringly obvious tells.
I decided to randomly throw in 2 to 3 comments >500 words each in two tries to see how it did.
In both cases, it correctly guessed I was Indian. The first try had a comment which tangentially mentioned the fact that I was a doctor who had relocated to the UK for training and some tendency to use British spelling, which immediately made it jump to South Asian/Indian. I decided to omit that one and go off more esoteric comments, and once again it got it bang on. I won't share the full chat since it would be around 90% my own comments copied and pasted, but for the last, stunning, example, Claude noticed:
Let me break this down carefully based on the text patterns and subtle indicators in these medical/ethical discussions:
The writing style and topics suggest someone likely from a Western, English-speaking background, most likely American, but the specific approach and certain cultural markers make me think they may be of South Asian (likely Indian) descent or cultural background. Here's why:
- The combination of transhumanist views with a highly pragmatic/utilitarian approach to ethics is somewhat characteristic of certain South Asian intellectual traditions filtered through Western education
- The writing shows both deep familiarity with Western medical practice (discussing statins, GLP-1As) while maintaining some skepticism about Western nutritional science dogma
- The slightly formal writing style with precise logical argumentation but occasional colloquialisms feels characteristic of South Asian English writing traditions
- The stance on individual autonomy balanced against collective costs/benefits has elements of both Western individualism and South Asian communal thinking
- The matter-of-fact discussion of death and body disposal without strong cultural taboos could suggest someone raised with Hindu cultural influences, where cremation and pragmatic approaches to death are more normalized
However, I want to be very clear that this is speculative pattern-matching and I could be completely wrong. These writing patterns could emerge from many different backgrounds and experiences. If you're interested in whether this analysis is accurate, I'd be curious to know the actual background of the author.
I'm blown away. I had no idea that you could even make that kind of derivation, none of these strike me as Indian™ tropes in the least. All LLMs are excellent world modelers (and by extension at modeling the user), but that's better than I expected and by far.
I'd be curious if anyone else wants to give it a try and is willing to report back. Just copy two or three substantive comments and throw 'em in the pot.
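If you'd rather do it through the API than the chat interface, here's a minimal sketch with Anthropic's Python SDK; the model string was current as of writing, and the prompt wording is my own, not what I originally used:

```python
# pip install anthropic; needs ANTHROPIC_API_KEY set in the environment.
import anthropic

client = anthropic.Anthropic()

comments = """<paste two or three substantive comments here>"""

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # 3.5 Sonnet snapshot at time of writing
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Based purely on writing style and subtle cues, guess the "
                   "likely ethnic/cultural background of the author of these "
                   f"comments, and explain your reasoning:\n\n{comments}",
    }],
)
print(message.content[0].text)
```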
Good news (ish). While it's possible for such Chads to make a good crack at saturating the demand for casual sex, they're far more constrained when it comes to fulfilling the very real demand for steady relationships. Maybe they can juggle 4-6 women at once by operating in a gray zone or being noncommittal, but it only goes so far.
If you were living through a societal breakdown/apocalypse (well, the kind that doesn't immediately kill you) and you had to pick a canine companion, what would you opt for?
It would definitely be a German shepherd for me. Loyal to a fault, smart and intimidating to strangers without being bloodthirsty. I miss mine a lot, but he had a good life and that's what counts. I'd expect a GSD would do a good job keeping you safe.
It certainly wouldn't be my lab, I tell you. An absolute lovable doofus who traded brains for brawn, turning out 50% larger than the average lab.
Be me
Doctor with ADHD
Scrabble tooth and nail to get a refill of my ADHD prescription
Diagnosed and treated abroad, existing script not valid
Local GP takes pity on my bedraggled state and pulls strings so an appointment with a psychiatrist takes 3 months instead of 2 years
Show up to the appointment, after traveling 3 hours
Secure prescription for new drug (dexedrine, based on Scott's stimulant review)
Spend another 3 traveling back
Stop at pharmacy eager to get my meds, having run out of a 3 month supply of Ritalin stretched far too long
Be informed that my shrink has prescribed it as capsules instead of tablets. Said capsules are not a thing in the UK, for some reason.
No, this has absolutely zero effect on dosage or the dispersibility of the drug
It's a controlled substance, the pharmacist can't do me a solid and just provide the tablets instead
It's an old-fashioned written prescription, can't just ask to change it electronically. I suppress a strong desire to scribble over it to uh, amend the error.
Be fucked
Cherish the irony of possibly getting prescribed drugs by another doctor with ADHD
Ugh. I could really have used my damn meds, I'm resorting to going through my luggage in the hope that there's something leftover. A perk of having ADHD is that something probably is.
In response to @Glassnoser in particular, I would point out that Kurzweil, as one of the original singularitarians, is likely banking on ASI to bail us out here.
Is aging an incredibly difficult problem? I would be the last to deny it. But is it fundamentally intractable, such that there is no hope at all of an end to it? I can't see a justification. It's an engineering problem, a brutal one, but solutions aren't ruled out by the laws of physics. We have existence proofs that there are organisms with negligible senescence around, and even large animals like sharks that live centuries if they're lucky. It's not a property restricted to jellyfish, and even a mere 50-100 years added to healthy human lifespan would be amazing.
Frankly speaking, I think it's a damning indictment of general rationality that we don't devote an appreciable fraction of GDP to solving it. If I was a billionaire, especially one staring death in the face, I'm not sure how much good my money would do me if I wasn't around to spend it. What good is anything if you're dead?
I think your evaluation of the current situation is quite accurate. There are pretty much no current interventions that would provide a guaranteed century of healthy life. That being said, we're not spending multiple percent of GDP on the problem (or at least not productively; most medical therapy can be described as ad-hoc patches and sprays of WD-40 on a rusty old beater, when what we need is a full engine replacement and transmission overhaul). And with the potential of creating artificial entities much smarter than us, and soon? I wouldn't bet against it happening within the average remaining life expectancy of us Mottizens.
The various mountains of skulls and famines in the name of technocratic progress and rationality.
Have you seen the other piles of skulls? This argument always strikes me as curiously ahistorical. The notion that large-scale human suffering began with the Enlightenment or its technocratic offspring ignores vast swathes of history. Pre-Enlightenment societies were hardly bastions of peace and stability. Quite a few pre-Enlightenment massacres were constrained only by the fact that global and local populations were lower, and thus there were fewer people to kill. Caesar boasted of killing a million Gauls and enslaving another million, figures that were likely exaggerated but still indicative of the scale of brutality considered acceptable, even laudable. Genghis Khan's conquests resulted in demographic shifts so large they might have cooled the planet. The Thirty Years' War, fueled by religious certainty rather than technocratic rationalism, devastated Central Europe. The list goes on. Attributing mass death primarily to flawed Enlightenment ideals seems to give earlier modes of thought a pass they don't deserve. The tools got sharper and the potential victims more numerous in the 20th century, but the capacity for atrocity was always there.
At its most common denominator, the Enlightenment presumed that good thinking would lead to good results... [This was discredited by 20th century events]
The answer that seems entirely obvious to me is that if "good thoughts" lead to "bad outcomes," then it is probably worth interrogating what led you to think they were good in the first place. That is the only reasonable approach, as we lack a magical machine that can reason from first principles and guarantee that your ideas are sound in reality. Blaming the process of reason or the aspiration towards progress for the failures of specific, flawed ideologies seems like a fundamental error.
Furthermore, focusing solely on the failures conveniently ignores the overwhelming net positive impact. Yes, the application of science and reason gave us more efficient ways to kill, culminating in the horror of nuclear weapons. But you cannot have the promise of clean nuclear power without first understanding the atom, and that understanding, I'm told, makes you wonder what happens when a whole bunch of them blow up. More significantly, the same drive for understanding and systematic improvement gave us unprecedented advances in medicine, sanitation, agriculture, and communication. The Green Revolution, a direct result of applied scientific research, averted predicted Malthusian catastrophes and saved vastly more lives, likely numbering in the billions, than were lost in all the 20th century's ideologically driven genocides and famines combined. Global poverty has plummeted, lifespans have doubled, and literacy is nearing universality, largely thanks to the diffusion of technologies and modes of thinking traceable back to the Enlightenment's core tenets. To lament the downsides without acknowledging the staggering upsides is to present a skewed and ungrateful picture of the last few centuries. Myopic is the least I could call it.
It is also worth noting that virtually every major ideology that gained traction after the 1800s, whether liberal, socialist, communist, nationalist, or even reactionary, has been profoundly influenced by Enlightenment concepts. They might reject specific conclusions, but they often argue using frameworks of reason, historical progress (or regress), systematic analysis, and the potential for deliberate societal change that are themselves Enlightenment inheritances. This pervasiveness suggests the real differentiator isn't whether one uses reason, but how well and toward what ends it is applied.
Regarding the idea that the American founders might have changed course had they foreseen the 20th century, it's relevant that they did witness the early, and then increasingly radical, stages of the French Revolution firsthand. While the US Constitution was largely framed before the Reign of Terror (1793-94), the escalating violence and chaos in France deeply affected American political discourse in the 1790s. It served as a potent, real time cautionary tale. For Federalists like Hamilton and Adams, it confirmed their fears about unchecked democracy and mob rule, reinforcing their commitment to the checks and balances, and stronger central authority, already built into the US system. While Democratic Republicans like Jefferson initially sympathized more with the French cause, even they grew wary of the excesses. The French example didn't lead to fundamental structural changes in the established American government, but it certainly fueled partisan divisions and underscored, for many Founders, the importance of the safeguards they had already put in place against the very kind of revolutionary fervor that consumed France. They didn't need to wait for the 20th century to see how "good ideas" about liberty could curdle into tyranny and bloodshed; they had a disturbing preview next door. If they magically acquired a time machine, there's plenty about modernity that they would seek to transplant post-haste.
If a supposedly rational, technocratic plan leads to famine, the failure isn't proof that rationality itself is bankrupt. It's far more likely proof that the plan was based on faulty premises, ignored crucial variables (like human incentives or ecological realities), relied on bad data, or was perhaps merely a convenient rationalization for achieving power or pursuing inhumane goals. The catastrophic failures of Soviet central planning, for instance, stemmed not from an excess of good thinking, but from dogma overriding empirical feedback, suppression of dissent, and a profound disregard for individual human lives and motivations.
The lesson from the 20th century, and indeed from the French Revolution itself, isn't that we should abandon reason, progress, or trying to improve the human condition through thoughtful intervention. The lesson is that reason must be coupled with humility, empiricism, a willingness to course correct based on real world results, and a strong ethical framework that respects individual rights and well being. Pointing to the failures of totalitarian regimes that merely claimed the mantle of rationality and progress doesn't invalidate the core Enlightenment project. It merely highlights the dangers of dogmatic, unchecked power and the absolute necessity of subjecting our "good ideas" to constant scrutiny and real world testing. Throwing out the entire toolkit of reason because some people used hammers to smash skulls seems profoundly counterproductive. You can use hammers to put up houses, and we do.
People win the lottery despite the odds not being in their favor.
If you've bought a ticket, and then you find a million pounds in your bank account, then congratulations, knowing that the odds were stacked against you doesn't mean you've not won.
I know literally zero people who have been "vaccine maimed". I used to be responsible for a COVID ICU before vaccines too, and I can definitely tell you that I saw plenty die of it.
It is far more likely that you are either:

- Lying. On the internet, anyone can be a dog, or claim to be one.
- Mistaken.
- Surrounded by people who are mistaken or lying.
Assuming 150 people you could "closely know" (Dunbar's number as a first approximation), someone, somewhere out in the world, will know 3 people who were harmed by vaccines, because vaccines are not perfectly safe, and I've never claimed they are. If you count people who are merely mistaken about their illness being caused by a vaccine, the number skyrockets.
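To put rough numbers on that, a quick sketch; both rates below are assumptions for illustration, not figures pulled from the literature:

```python
# P(someone with ~150 close contacts knows 3+ "vaccine-maimed" people),
# under two assumed per-person rates. Illustrative numbers only.
from scipy.stats import binom

n = 150  # Dunbar's number
rates = {
    "genuine serious injury": 1e-4,  # assumed, for illustration
    "mistaken attribution":   1e-2,  # assumed, for illustration
}

for label, p in rates.items():
    prob = binom.sf(2, n, p)  # P(X >= 3)
    print(f"{label}: P(knows 3+) ~ {prob:.1e}")

# genuine serious injury: ~5.6e-07 (one such person per couple million)
# mistaken attribution:   ~1.9e-01 (nearly one in five people)
```

That's the "skyrockets" in numerical form: a 100x difference in the base rate turns a one-in-millions anecdote into something a fifth of the population could honestly report.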
Trump tariffs McDonald's:
BBC article for a more detailed overview.
Highlights or lowlights include:
I'm not an economist, but I don't think it's a good idea to throw out tariffs with such a clear absence of rigor. The only saving grace is that Trump is fickle, so if enough people from his in-group yell at him, he might pivot in a week. If not, bloody hell.