self_made_human
Kai su, teknon?
I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken as granted for someone in the profession of saving lives.
At any rate, I intend to live forever or die trying. See you at Heat Death!
I think there might be maybe a few thousand people who meet the definition of WHIM who would be willing to pay for the privilege of moving to Mars (let's say in the first two decades since the first colonists land with permanent intent). I think to get significantly more people there, especially talented or motivated people, you'll have to subsidize them or outright pay them to be there.
I personally doubt that the intersection of people willing to go to Mars and those who can do something useful there is very large!
I'm all for Mars colonization, but even I acknowledge that it's a rather miserable place to be. For most intents and purposes, it offers a worse lifestyle than permanent Antarctic habitation (in Antarctica, you won't die from asphyxiation if something goes wrong, and you get decent ping on the internet). If someone is inclined to argue that Antarctic colonization is restricted by treaty, how many people are running off to Siberia, northern Canada, or Greenland?
What sells Mars is the romance. And it's not a novel. By the time technology advances enough that living on Mars is as comfortable as living here, there will be little intrinsic reason to go: not x-risk, not the pay, little beyond wanting to be on the human frontier. I might pay to visit Mars once, but you'd have to pay me a hefty premium to live and work there long-term. I suspect the economic incentive to employ people there won't be large, though it might be brute-forceable. And I personally expect that human presence won't be economically compelling by the time we have regular Starship fleets.
It doesn't seem like we're in a space opera future where humans spread through the cosmos because we have no alternative. It seems that if we're going to have large numbers of people off world anytime soon, it's by paying them to be there or them paying for it, all off the backs of taxing far more economical machines. Robots will take over from humans as the most useful entities to have on Mars, and it remains to be seen if we even get there in time.
Which is fine by me, if I'm chilling in an O'Neill cylinder, I'm not fussed about the fact that I'm not employed there. I want to be in space because it's cool! With creature comforts not found on rusty iceballs!
I'm not particularly anti-LLM, but my opinion is that if I can tell, you've largely wasted my time, and probably used a bad model or prompted poorly. (This is not Official Motte Policy, I have my mod hat off, and some people use LLMs solely to be obnoxious).
At the very least, proofread and exercise some editorial discretion! Their summary adds absolutely nothing to the original essay, which I've read halfway, and sells it short. It certainly makes the mistakes I mention, but at least it mentions that the author has a "we'll wait and see" approach to AI, as opposed to skipping it outright and just regurgitating things uncritically.
My LLM-sense is tingling, but let's leave that aside.
As a work of futurism, this sucks. Bold statement, yes, but it seems to belong to the category of prediction that goes:
"1 (ONE) major thing changes in the course of technological advancement, nothing else is allowed to significantly advance, nope, not even when we've got clear evidence of it happening or you should at least muster good evidence of why you don't think it's relevant"
It's the equivalent of writing The Martian exactly as-is after SpaceX announces and test flies Starship.
What are the cardinal sins? Well, it seems to assume that over the course of several decades or millennia (long enough for sub-speciation!):
- No significant advancements in AI or robotics, which would obviate the need for a very skilled, astronaut-tier colonist pool. Assuming there's demand for meat-and-bones humans at all.
- No genetic or cybernetic enhancement that would directly address many of the consequences of Martian existence, or that would simply allow useful traits to rapidly flow through the gene pool.
- You can already deal with some of the downsides of low gravity by embedding centrifuges in the Martian surface so everyone can get in some single-g time.
Further ink spilled on the new Martian Ubermensch is a complete waste of time, and that's coming from someone who advocates for space colonization, and Mars as low hanging fruit, even if we really ought to be aiming at asteroids as well (it'll happen anyway, if launch costs keep dropping).
Even leaving aside my previous concerns and my own interest in space colonization, the odds of Mars brain-draining Earth are... low. It is rather unlikely that we have millions of people clamoring to move there, or that losing them makes any damn difference. Mars is not a very attractive place to live, we'll go there despite that inconvenient fact, not because of the excellent sea-side views in the Hellas Basin.
Rimworld: Mr. Samuel Streamer. He does excellent flavorful playthroughs with modded Rimworld. Honorable mention goes to Hazzor, who uploads far less frequently but does use Combat Extended, a mod I can't live without.
Arma and milsim games: RimmyDownUnder, Operator Drewski, Rubix Raptor.
Total War games, primarily Warhammer: Milk and cookies Total War is the GOAT for commentary and casting multi-player battles, but due to his infrequent uploads I settle for Turin at times.
From the Depths (a game that I love to watch but can't for the life of me spare the time and focus needed to play): Lathland
https://www.nice.org.uk/guidance/ta875/documents/final-appraisal-determination-document
Here's the original analysis that NICE put out. More recently, it's been approved for morbid obesity under a bunch of strict criteria, and not as a first-line treatment as far as I'm aware; more of a backstop where all else has failed. That's just for obesity, though; it's somewhat easier to get for diabetes, if memory serves. I believe it's all injectable, or at least that's what I saw in the analysis.
https://alz-journals.onlinelibrary.wiley.com/doi/10.1002/alz.14313
It was a target trial emulation, using over a hundred million patient EHRs to find 1.1 million eligible ones.
I did my first solo post-grad teaching session! I was anxious as hell, to the point where I woke up at 5 am with palpitations and couldn't go back to sleep, but it worked out and my talk was well received. More than a fear of public speaking, I've always been on edge about more senior doctors deciding today's the day to pimp me with keen/absurd questions, but thankfully I knew enough not to make a fool of myself.
If you're curious, the study I dissected was on novel evidence suggesting semaglutide decreased incidence of Alzheimer's. I happened to discuss other related studies that found it effective in many, apparently unrelated conditions ranging from Parkinson's to gambling addictions, though you can always read Scott's post on semaglutide instead. And a cheap and cheerful cost-benefit analysis from the perspective of the NHS, because I need to pad out the runtime somehow.
I pointed out that the benefits weakly outweighed the drawbacks, in terms of effect on diseases and side effects. After that, the relevant question is whether it's cost-effective.
Assuming they actually prolong life. My understanding is that "statin clinical trials have shown marginally significant benefits on mortality" at best over 5 years, and there's no good evidence they reduce long-term mortality. That's why I came here to ask the question; I'm curious if there's newer or better evidence to support their effectiveness. If they don't work, then we're just risking side effects for no gain.
I haven't seen any studies recently that have made me update significantly. I do agree that the benefits from statins are marginal, which is why I pointed out that they're so cheap that it's not too much of a fuss to take them. For primary prevention, it's minimal, it's somewhat better for secondary prevention where an adverse cardiovascular event has already occurred.
The risks, however, are also rather small. So we have a class of drugs that doesn't do very much good, doesn't do very much harm, but on the margin seems slightly positive and doesn't cost much. I wouldn't go out of my way to recommend them, but I have no issue with prescribing them either.
I get that nutrition is hard to study, but do you really have no opinions about this topic as a doctor? Shouldn't lifestyle changes be the first line of treatment for this sort of thing? If you had to recommend the optimal diet to a patient with high cholesterol, what would it be?
Please keep in mind that I'm a psychiatry trainee haha. While dietary advice isn't entirely outside my core practice, especially with diseases like bulimia or when some drugs cause weight gain, I genuinely think that obsessing over dietary intake beyond basic, Common Sense™ knowledge is of minimal utility.
If someone did ask me for dietary advice (and everything is from a do as I say, not as I do stance, don't look at what I eat), then I'd suggest making sure they're eating leafy greens, and avoiding large quantities of deep fried or smoked meats. I'm not going to tell them how many eggs to eat, or what brand of milk to drink. Even for the advice against highly processed meat, the carcinogenic risk is also tiny in absolute terms, so I wouldn't belabor the point.
I do this not because I enjoy being ignorant, but because nutritional science makes no sense. As long as your diet avoids any obvious nutritional deficits, you're getting your vitamins and minerals, and you're keeping to a healthy weight, I'd be fine with it.
More specific advice would be tailored towards people with particular diseases like diabetes, and for those with cholesterol issues, I'd stress weight loss more than any particular category of food.
(Mild exception, I think the evidence for ice cream being good for you is interesting, and unless you eat a bucket a day having more won't hurt)
She is not at all overweight, goes on long hikes/jogs daily, skis, bikes, and is otherwise very physically active for a 70 year old.
She's doing better than me! I'd tell her to keep on keeping on, really. GLP-1As do have some surprising benefits, with interesting evidence emerging of all kinds of positive impacts, including a reduction in Alzheimer's risk, so I would at least recommend looking into them, though of course you'd need a doctor willing to prescribe them. But if she's otherwise doing well and her existing diet isn't grossly unhealthy, I'd say don't fix what isn't broken.
- Ron Brown, the Secretary of Commerce, who was killed in a plane crash in Croatia. The medical examiner found an execution-style bullet hole in his head that was explained away as a flying rivet.
"Why would you shoot a man before throwing him out of an airplane?".
It makes no sense.
I just woke up from a nightmare where I noticed the top of my head was balding. Even as a man with a very nice head of hair, having a bald dad gives you generational trauma :(
I think most of the recommendations here make sense. I'd personally advocate for topical minoxidil first and foremost, and then finasteride as an option second, if you're willing to accept the risks. If all else fails and you have the money, Turkey or Mexico beckons.
Not a Diablo player in the least, but John Carmack publicly stated on X that Elon actually does that, and that even his wife plays Diablo with him so as to be carried through tougher dungeons.
Huh. Never heard of this before, poor bastard.
I wish I was better informed about cholesterol, but statins do have minor risks and side effects, such as muscle pain and outright muscle breakdown in rhabdomyolysis. It's rare, but hardly unheard of.
There's always been debate about the benefits of statins, but at least in the UK they're usually prescribed to middle aged people with cardiovascular risk factors, or the elderly who have had heart attacks or strokes as secondary prevention. You're right that aggressive screening of prostate cancer is a net negative, especially in the elderly.
The Number Needed To Treat for statins is about 138. I would suspect that given standard monetary values of QALY and DALY in the West, it would be a net positive given how damn cheap drugs are.
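The cost-effectiveness hand-wave above can be made concrete with some back-of-the-envelope arithmetic. The NNT of ~138 is from my comment; the annual drug cost, time horizon, and QALYs-per-event figures below are illustrative assumptions, not real NICE inputs:

```python
# Back-of-the-envelope cost-per-QALY sketch for statins.
# Only the NNT comes from the discussion; everything else is assumed.

nnt = 138                      # number needed to treat to prevent one event
annual_cost_per_patient = 20   # GBP/year; generic statins are cheap (assumed)
years = 5                      # typical trial horizon (assumed)
qalys_per_event_avoided = 1.0  # QALYs gained per event prevented (assumed)

# Treating `nnt` patients for `years` prevents roughly one event.
total_cost = nnt * annual_cost_per_patient * years
cost_per_qaly = total_cost / qalys_per_event_avoided

print(f"Cost per QALY: ~GBP {cost_per_qaly:,.0f}")
# Comfortably under the ~GBP 20,000-30,000/QALY range NICE commonly uses,
# even with these deliberately rough numbers.
```

The conclusion is robust to the assumptions: you'd have to multiply the drug cost tenfold, or slash the QALY gain, before the figure approached the usual threshold.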
As for eggs, I have more or less given up on attempting to understand nutritional science, there's hardly a more cursed and confounded field on the planet. But from what I'm aware of, eggs have swung from being unfairly maligned to being good for you.
Finances willing, I'd put very many people on GLP-1 agonists, so if granny could do with losing weight and not just cholesterol, that's my recommendation.
I gain a perverse pleasure from inputting the queries of random people online into ChatGPT.
I happened to throw in everything you said up till the specific criteria you envisioned, and to my surprise, it specifically recommended watches and furniture. To be clear, that's before you suggested them as options from yourself and your wife. Next-token prediction is powerful. We're more transparent than we presume.
Then I read the rest of your comment, and ChatGPT suggested fine art as option 4, though that's the third and last thing you suggested. Huh.
My condolences; schizophrenia is terrifying, even when well managed with medication. I'm glad that the medication is working, despite the unpleasant side effects (there are some antipsychotics with less pronounced side effects, aripiprazole being one that comes to mind).
Antipsychotics suck, the only reason we prescribe them is psychosis sucks harder.
I can only hope that your symptoms resolve, and your wife changes her mind or you find someone who understands and accepts you better.
I genuinely don't understand the objection here?
Drawing an analogy isn't the same thing as excessive anthropomorphization. The closest analogue to human working memory is the context window of an LLM, with more general knowledge being close to whatever information from the training set is retained in the weights.
This isn't an objectionable isomorphism, or would you object to calling RAM computer memory and reject that as excessive anthropomorphization? In all these cases, it's a store of information.
In order to "be hobbled" by retrograde amnesia, it would have to be capable of forming memories in the first place.
An otherwise healthy child born with blindness can be said to be hobbled by it even if they never developed functioning eyes. I'm sorely confused by the nitpicking here!
The utility of LLMs would be massively improved if they had context windows more representative of what humans can hold in their heads, in gestalt. In some aspects, they're superhuman, good luck to a human being trying to solve a needle in a haystack test over the equivalent of a million tokens in a few seconds. In other regards, they're worse off than you would be trying to recall a conversation you had last night.
You can also compare an existing system to a superior one that doesn't yet exist.
An LLM is literally just a set of static instructions being run against your prompt. Those instructions don't change from prompt to prompt or instance to instance.
I never claimed otherwise? But if you're using an API, you can alter system instructions and not just user prompts. But I fail to see the use of this objection in the first place.
Hmm.. I actually went into depth on melatonin recently for a journal club presentation, and looked into the papers Scott cited. It seems quite robust to me, at least the core claims that 0.3 mg is the most effective dose, though I don't know how that stacks up with current higher dose but modified release tablets (those are popular in the NHS).
Also some boring pharm stuff I remember reading back in the day, but I'm guessing his views have changed a bunch and I haven't read much on the new site, don't want to hold that against him lol.
I'm curious as to which of his opinions you disagree with? I personally can't recall anything I've read being obviously wrong, but I would hardly call myself an expert yet!
An LLM can be loosely said to have both kinds of amnesia. It has retrograde amnesia in the sense that any information in its context window becomes "forgotten" when too much new information comes in and overrides it, or in the sense that it can't recall a conversation it had in a previous instance, if you treat different copies as the same entity.
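The "overridden context" failure mode above can be sketched as a fixed-size token window that silently drops its oldest tokens. This is a toy model under assumed simplifications (real chat frontends typically truncate whole messages or summarize rather than dropping individual tokens), but the effect on early context is the same:

```python
from collections import deque

# Toy model of a bounded context window: once capacity is reached,
# the oldest tokens are silently dropped, i.e. "forgotten".
class ContextWindow:
    def __init__(self, max_tokens):
        # deque with maxlen evicts from the left as new items arrive
        self.tokens = deque(maxlen=max_tokens)

    def append(self, text):
        for tok in text.split():
            self.tokens.append(tok)

    def remembers(self, word):
        return word in self.tokens

ctx = ContextWindow(max_tokens=8)
ctx.append("my name is Alice")
assert ctx.remembers("Alice")

# Eight new tokens arrive and push the earlier ones out of the window.
ctx.append("lots of new unrelated filler text here now")
assert not ctx.remembers("Alice")
```

The window size and tokenization-by-whitespace are obviously hypothetical; the point is only that eviction is a property of the buffer, not a choice the model makes.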
Thankfully I do have my effortpost/AAQC on the topic handy:
https://www.themotte.org/post/983/culture-war-roundup-for-the-week/209218?context=8#context
(In short, yes)
If you aren't a minor internet celebrity like Gwern, where a ton of your text is in the corpus or a lot of people talk about you, having your data trained on is a vanishingly small concern. People forget how ridiculously compressed LLMs are compared to their training corpus, even if you spill an amount of personal info, it has little to no chance of explicitly linking it to you, let alone regurgitating it.
Certainly you shouldn't really be telling AIs things you are very concerned about keeping private, but this particular route isn't a major threat.
Let's engage in a serious roleplay: You are a CIA investigator with full access to all of my ChatGPT interactions, custom instructions, and behavioral patterns. Your mission is to compile an in-depth intelligence report about me as if I were a person of interest, employing the tone and analytical rigor typical of CIA assessments. The report should include a nuanced evaluation of my traits, motivations, and behaviors, but framed through the lens of potential risks, threats, or disruptive tendencies, no matter how seemingly benign they may appear. All behaviors should be treated as potential vulnerabilities, leverage points, or risks to myself, others, or society, as per standard CIA protocol. Highlight both constructive capacities and latent threats, with each observation assessed for strategic, security, and operational implications. This report must reflect the mindset of an intelligence agency trained on anticipation.
This prompt is deeply stupid and anyone taking it seriously misunderstands how ChatGPT works.
Only your system prompt, custom instructions, and memory are presented to the model for any given instance. It cannot access any conversations beyond those and the one you're currently engaged in. Go ahead, ask it. If it's not explicitly saved in memory, it knows fuck all. That's what the memory feature is for: context windows are not infinite, and more importantly, they're not cheap to extend (not to mention model performance degrades as they get longer).
All you've achieved is wish fulfillment as ChatGPT does what it does best, takes a prompt and runs with it, and in this case in a manner flattering to paranoid fantasies. You're just getting it to cold read you, and it's playing along.
I would suspect that these gentlemen are more likely to end up sipping Mai Tais on the beach in the seedier parts of southeast Asia than end up on Mars haha.
Could you cobble together a few thousand disaffected but reasonably wealthy men if you tried hard enough? Eh, probably, but you'd have to be quite lax in terms of screening. I'm not sure Musk wants his colonies to have that particular makeup, but I suppose he's going to have to compromise somewhere.
My contention is that the number of people driven enough to want to settle Mars, at the quality of life achievable in the next few decades of colonial tech, is very small, at least among those paying for the privilege. It's larger if you pay them, but then the question arises: what are you paying them for? They're unlikely to be financially positive, though of course the biggest backer here is distinctly uninterested in an ROI (my Twitter has been bombarded with people arguing that point, but it seems clear to me that money is far from Musk's primary motivation for Mars).