self_made_human
amaratvaṃ prāpnuhi, athavā yatamāno mṛtyum āpnuhi (Attain immortality, or die trying)
I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken as granted for someone in the profession of saving lives.
At any rate, I intend to live forever or die trying. See you at Heat Death!
Friends:
A friend to everyone is a friend to no one.
I've been... lazy in that regard. Far too much money in my account that's not accruing interest. But yes, that's a factor. I also earn an OOM more than I did back in India, which definitely helps. If I were less lazy, I'd have put most of my money in the S&P 500 by now, but I've already put myself in a much better place than if I'd been complacent about things.
I don't expect that this will necessarily make me rich in relative terms, I'm starting too low, too late. But I want enough of a safety net to survive in comfort for the (potential) period of unemployment when AI eats my profession whole, before we implement solutions such as UBI. Not starving, not dying in a riot, all of that is important to me.
I'm probably in the 99.99th percentile for doctors (or anyone else) when it comes to the use of AI in the workplace. I estimate I could automate 90% of my work (leaving aside the patient-facing stuff and things that currently require hands and a voice) if I were allowed to.
The main thing holding me back? NHS IT, data protection laws and EMR software that still has Windows XP design language. This means I'm bottlenecked by inputting relevant information into an AI model (manually trawling the EMR, copying and pasting information, taking screenshots of particularly intransigent apps) and then transferring the output back into the digital record.
The AIs are damn good at medicine/psychiatry. Outside my own domain, I have a great deal of (justified) confidence in their capabilities. I've often come to take their side when they disagree with my bosses, though the two are usually in agreement. I've used them to help me figure out case presentations ("what would a particularly cranky senior ask me about this specific case?", and guess what the seniors actually asked), to get quick run-downs on journal publications, to figure out stats, to sanity-check my work, to help decide the optimal dose of a drug, etc. There's very little they can't do now.
That's the actual thinky stuff. A lot of my time is eaten up by emails, collating and transcribing notes and information, and current SOTA models can do these in a heartbeat.
To an extent, this is an artifact of resident doctors often being the ward donkey, but I'm confident that senior clinicians have plenty to gain or automate away. The main reason they don't is the fact that they're set in their ways. If you've prescribed every drug under the sun, you don't need to pop open the BNF as often as a relative novice like me would - that means far less exploration of what AI can do for you. Yet they've got an enormous amount of paperwork and regulatory bullshit to handle, and I promise it can be done in a heartbeat.
Hell, in the one hospital where I get to call the shots (my dad's, back in India), I managed to cut down enormous amounts of work for the doctors, senior or junior. Discharges and summaries that would take half a day or more get done in ten minutes, and senior doctors have been blown away by the efficiency and quality gains.
Most doctors are at least aware of ChatGPT, even if the majority use whatever is free and easy. I'm still way ahead of the curve in application, but eventually the human in the loop will be vestigial. It's great fun till they can legally prescribe, at which point, RIP human doctors.
ChatGPT isn’t so different from, say, Jarvis in Iron Man (or countless other AIs in fiction), and the median 90-100IQ person may even have believed in 2007 that technology like that actually existed “for rich people” or at least didn’t seem much more advanced than what they had.
Eh? I'm very confident that's wrong. Normies might not appreciate the impact of ChatGPT and co to the same degree, but I strongly doubt that they literally believed that there was human-level AI in 2021. AGI was science fiction for damn good reason, it didn't exist, and very, very few people expected we'd see it or even precursors in the 2020s. Jarvis was scifi, and nobody believed that something like Siri was in the same weight-class.
To shift focus back to your main thesis: the normie you describe is accustomed and acclimatized to being average. Bitter experience has proven to them that they're never going to be an "intellectual" and that their cognitive and physical labor is commoditized. It's unlikely that being the smartest person in the room (or in spitting distance) is an experience they're familiar with. Hence they have less to lose from a non-human competitor who dominates them in that department.
On the other hand, the average Mottizen is used to being smart, and to working in a role where it's not easy to just grab a random person off the street to replace them. That breeds a certain degree of discomfort at the prospect. I've made my peace, and I'm going to do what I can to escape the (potential) permanent underclass. It would be nice to have a full, accomplished career with original contributions to my professional field or the random topics I care about, but I'll take a post-scarcity utopia if I can get it.
Call me an optimist, but if we're talking about a civilization capable of rendering the observable universe at our perceived level of fidelity, I think they've got that handled.
Why not? The question was whether small tweaks in the sim parameters might have catastrophic effects. Being able to revert to an earlier save or checkpoint is useful either way.
It's okay, I'm sure they've got rolling backups.
The Culture series? The Golden Oecumene, for a slightly different take on a post-scarcity utopia.
I was visiting my psychiatrist uncle while his dad was over. As part of Standard Indian Hospitality, they tried offloading their spare wardrobe onto me. This included a pair of high-waisted grandpa trousers.
I'm sold. They're surprisingly comfy, look good on me, and go with everything. Highly recommend.
The incredibly loud and sequined suits? Less so, but you take the good with the bad.
I can nominate @ArjinFerman or @Corvos, if they're willing to accept. I'd be happy to not bother with an escrow if you're fine with it, given the lower sums involved.
My proposed terms are clear concessions on an acquittal or conviction, and if this somehow doesn't resolve in 2 years, a general throwing up of hands and acceptance that we're never getting to the bottom of this.
I don't see how this disagrees with anything I've said?
The hypothetical example you've presented is probably more cut-and-dried than anything we've seen here. I suspect that it would actually be more likely to end in a conviction than you think; judges do not regularly do Bayesian calcs in court.
A guilty verdict is very strong evidence of guilt. A verdict of 'not proven' is very weak evidence of factual innocence (as opposed to legal innocence).
I agree, in fact I alluded to the same. If a video came out showing an assault by the accused and without a conviction (as unlikely as that is), then I'd be willing to accept that in lieu of a favorable legal verdict.
keep in mind your original argument rested on nothing more than statements from the police, not official charges, or an actual conviction
Hmm? I don't think that's the case. I also heavily stressed what can only be described as "local sentiment", perhaps priors, in addition to the official story. The locals (debatably including me) thought it was more likely than not.
For example:
My own priors, which seem to match those of most actual Scots I’ve spoken to, lean toward a more mundane explanation.
and how you portrayed anyone unconvinced by your arguments as unreasonable.
That is not true. I think I made a strong argument, but I also acknowledge:
I would like to believe that this clarification settles things, but I am also not naïve. If your epistemic filter is tuned to maximum paranoia, then the absence of evidence is merely further evidence of a cover-up. For everyone else, the police statement, local skepticism, and sociological context should nudge your priors at least a little.
In other words, as a Bayesian, my opinion is that you should at the very least be slightly swayed by the argument. That is not the same as thinking that anyone who disagrees with me is unreasonable. There are actual people (living breathing humans) who are immune to any argument, probably including divine intervention. My scorn is largely reserved for them.
Similarly, the article you shared has meaningfully moved my posteriors. Back then, if anyone had asked, I expect I'd have said I was 80-90% confident of a lack of guilt; now I've moved down to 70%. That is precisely the kind of update in the face of new evidence that I endorse and respect. Hence why I do it myself.
I expect that if a conviction is secured, I'd jump to maybe a 90% certainty that I was wrong, and if they're acquitted, then back up to 90% confidence of being correct. Feel free to tag me if something happens, since I don't really read the BBC that often.
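Since I keep throwing percentages around, here's a minimal sketch of the odds-form update I have in mind. The likelihood ratios are numbers I've made up purely for illustration, not anything derived from the case itself.

```python
# A toy Bayesian update in odds form (all numbers are illustrative).
# Start ~80% confident of no guilt. Treat a conviction as strong evidence of
# guilt (likelihood ratio ~20:1 against innocence) and an acquittal as weaker
# evidence the other way (~3:1 in favour of innocence).

def bayes_update(p_hypothesis: float, likelihood_ratio: float) -> float:
    """Update P(hypothesis) given evidence with the stated likelihood ratio
    P(evidence | hypothesis) / P(evidence | not hypothesis)."""
    prior_odds = p_hypothesis / (1 - p_hypothesis)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p_innocent = 0.80
print(bayes_update(p_innocent, 1 / 20))  # after a conviction: ~0.17
print(bayes_update(p_innocent, 3))       # after an acquittal: ~0.92
```

The asymmetry simply reflects that a guilty verdict is far stronger evidence than an acquittal, which is the point I made about 'not proven' verdicts above.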
I'm not sure I trust you enough to hold up your end of the bargain. If, for the sake of example, it was @ArjinFerman offering, I'd take it, though I'd prefer smaller sums like £50:25 since I don't care that much. If you're willing to go through the hassle of finding someone to use as an escrow, while using crypto (which is hassle on my part), sure.
If not, I care enough about my reputation and epistemics to happily accept being proven wrong, if and when I'm proven wrong.
I agree that's a possibility, and if that's proven, I'd be more sympathetic. I personally disagree quite a bit with the UK's approach towards banning pretty much every form of self-defense.
I'm happy to concede if the prosecution ends in a conviction. I still think it's more likely than not that they're acquitted (if I had to put a number on it, 70%).
I'm also happy to acknowledge that acquittal doesn't necessarily mean a lack of guilt, but I don't think the British judicial system is so corrupt that it represents null evidence.
I feel like it would be less hassle to take a heroic dose of viagra and then think of England.
"The AI doesn't hate you, it doesn't love you, but it knows you are made of atoms it can use for something else."
About 2 years back, and at the time I'd perceived it as unilateral and I'd been in the midst of incredibly stressful exams while under a lot of pressure. That was also the case for the second episode a year later. I never had anything similar during my childhood or adolescence.
The scotoma was also very different from my impression of what a typical aura is like (not that I'm an expert on migraines), so that, combined with the initial absence of headaches, meant I never considered migraines as a cause.
I used to have vision problems. They'd diagnosed me with central serous chorioretinopathy, which is one of those conditions that sounds scary but usually isn't. The main exception: sometimes it progresses to retinal detachment and you go blind. But this seemed unlikely enough that I wasn't too worried.
For years I'd get maybe one episode annually. Fine, whatever. Then a few months ago the frequency ramped up dramatically, and I started wondering if something was wrong.
I did what any reasonable person does when they suspect their diagnosis might be incorrect: I asked several AI models. They all said roughly the same thing: this doesn't look like CSCR. The visual field defects were appearing and resolving way too quickly. Plus I was getting headaches, eye pain, and nausea, though mostly just the scotoma by itself. None of this quite fit.
Today I had another episode and finally got myself to an ophthalmologist. I came prepared. I'd used an Amsler grid to map the exact coverage of the blind spot and tracked how it progressed over time. The smoking gun: this time I had clear evidence the scotoma was bilateral, affecting both eyes instead of just one.
ChatGPT (the 5 Thinking model) had been pretty insistent this was migraine with aura (and careful to exclude more concerning pathology like a TIA/amaurosis fugax). After the ophthalmologist spent several minutes blinding me with lights so bright they photobleached my retinas (plus mydriatics, so I couldn't read anything closer than two feet for a while afterwards; this is why I'd been putting off the appointment), guess what conclusion they reached?
Migraine with aura.
On one hand: relief. No risk of going blind after all. On the other hand: migraines suck, and I'm pretty annoyed that multiple doctors missed this. The symptoms appearing and disappearing so quickly should have raised immediate doubts about CSCR, which takes days to resolve. Even I started questioning it once the pain showed up, though admittedly I never jumped to migraines either. But my excuse is that I'm a psychiatrist, not an ophthalmologist.
Unfortunately, I was also diagnosed with ocular hypertension, which is a risk factor for glaucoma. Uh... You win some, you lose some? And it's helpful for your clinician to run tests on you in person. Go see a doctor too, even if ChatGPT is very helpful. It sadly lacks thumbs.
So once LLMs start having little green men inside them, they will be as conscious as a corporation, haha. Also, a corporation itself is not more conscious than a rock, as the corporation cannot do anything without conscious agents acting for it. It has no agency on its own. If I create an LLC and then forget about it, does it think? Does it have its own will? Or does it just sit there on some ledger? If a rock has people carrying it around and performing tasks for it, has it suddenly gained consciousness?
It is helpful to consider another analogue: the concept of being "alive". A rock is clearly not alive. A human is. So are microbes, but once we get to viruses and prions, the delineation between living and non-living becomes blurry.
Similarly, it is entirely possible that consciousness is a continuum. I'm not a panpsychist, I think it's dumb to think that an atom or a rock has any degree of consciousness, but consider the difference between an awake and lucid human, one who is drunk, one who is anesthetized or in a coma, someone lobotomized, a fetus, etc. We have little idea what the bare minimum is.
A rock is no more conscious for being held than it was before. I think it's fair to say that the rock+human system as a whole is conscious, but only as conscious as the human already was. Think about it, there already is a "rock" in every human: a collection of hydroxyapatite crystals and protein matrices that make up your bones. And yet your consciousness clearly does not lie in your bones. Removing your femur won't impact your cognition, though you'll have a rather bad limp.
Humans are already made up of non-sentient building blocks. Namely the neurons in your brain. I think we can both agree that a single neuron is not meaningfully conscious, but in aggregate?
And guess what? We can already almost perfectly model a single biological neuron in silico.
https://www.quantamagazine.org/how-computationally-complex-is-a-single-neuron-20210902/
This function is what the authors of the new work taught an artificial deep neural network to imitate in order to determine its complexity. They started by creating a massive simulation of the input-output function of a type of neuron with distinct trees of dendritic branches at its top and bottom, known as a pyramidal neuron, from a rat’s cortex.
They fed the simulation into a deep neural network that had up to 256 artificial neurons in each layer. They continued increasing the number of layers until they achieved 99% accuracy at the millisecond level between the input and output of the simulated neuron. The deep neural network successfully predicted the behavior of the neuron’s input-output function with at least five — but no more than eight — artificial layers. In most of the networks, that equated to about 1,000 artificial neurons for just one biological neuron.
In principle, there's nothing preventing us from scaling up to a whole rat brain or even a human brain, all while using artificial neural nets. I am of course eliding the enormous engineering challenges involved, but it can clearly be done in principle and that's what counts.
(I'm aware that the architectures of an LLM and any biological brain are very different)
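To make that concrete, here's a toy sketch of the "fit a deep net to a neuron's input-output function" idea. The surrogate_neuron below is a made-up stand-in for the detailed biophysical simulation the authors actually used, and the layer counts are scaled far down; none of this is their code or their architecture.

```python
# Toy version of the approach: approximate a (stand-in) neuron's input-output
# function with a small deep network. Assumes PyTorch is installed.
import torch
import torch.nn as nn

def surrogate_neuron(x: torch.Tensor) -> torch.Tensor:
    """Placeholder for the simulated pyramidal neuron's input-output mapping.
    The real target in the paper is a detailed compartmental simulation."""
    weights = torch.linspace(-1.0, 1.0, x.shape[1])
    return torch.tanh(x @ weights).unsqueeze(-1)

# A handful of hidden layers, loosely echoing the 5-8 layers of ~256 units
# the article mentions (shrunk here so it runs in seconds).
model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    inputs = torch.randn(256, 128)        # synthetic "synaptic input" batch
    targets = surrogate_neuron(inputs)    # what the stand-in neuron outputs
    loss = nn.functional.mse_loss(model(inputs), targets)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```

Scaling from one neuron to a whole brain is, as I said, a monstrous engineering problem, but nothing in the maths forbids it.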
My basic theory (really a constraint) of conscious behavior:
Any sentient system must have persistent internal state across time.
This implies non-Markovian dynamics with respect to perception and action.
LLMs are finite-context, externally stateful, inference-time Markovian systems. Therefore, LLMs lack a necessary condition for consciousness.
We have examples of sentient systems with no persistent state, and they're humans to boot. There are lesions that can make someone have complete anterograde amnesia. They can maintain a continuous but limited-capacity short-term memory, but the standard process of encoding and storage to long-term memory fails.
They can remember the last ~10 minutes (context window) and details of their life so far (latent knowledge) but do not consolidate new memories and thus are no longer capable of "online" learning. I do not think it's controversial that such people are conscious, and I certainly think they are.
That demonstrates, at least to my satisfaction, an existence proof that online learning is not a strict necessity for consciousness.
Further, I do not think that using an external repository to maintain state is in any way disqualifying. Humans use external memory aids all the time, and we'll probably develop BCIs that can export and import arbitrary data. There is nothing privileged about storage inside the space of the skull, it's just highly convenient.
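If it's unclear what I mean by external state doing the job, here's a bare-bones sketch. call_model is a hypothetical stand-in for any stateless LLM API, not a real library; the point is only that a dumb external store, re-injected into the prompt, gives the overall system a persistence the model itself lacks.

```python
# Minimal sketch: a stateless model call wrapped in an external memory store.
# `call_model` is a hypothetical placeholder, not a real API.
from typing import List

def call_model(prompt: str) -> str:
    """Stateless stand-in for an LLM call: no memory between invocations."""
    return f"(model reply to: {prompt[-60:]!r})"

class ExternallyStatefulAgent:
    def __init__(self) -> None:
        self.memory: List[str] = []   # the "external repository" of past turns

    def step(self, user_message: str) -> str:
        # Re-inject stored history, the way a human might consult a notebook.
        context = "\n".join(self.memory[-20:])   # crude rolling window
        reply = call_model(context + "\n" + user_message)
        self.memory.append(f"user: {user_message}")
        self.memory.append(f"assistant: {reply}")
        return reply

agent = ExternallyStatefulAgent()
print(agent.step("Remember that my femur is intact."))
print(agent.step("What did I tell you about my femur?"))
```

Whether that kind of scaffolding counts for anything phenomenologically is exactly the question I'm agnostic about; I'm only claiming that where the storage lives isn't what settles it.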
I have strong confidence that I'm conscious, and so are you and the typical human (because of biological and structural similarities). I am also very confident that rocks and atoms aren't. I am far more agnostic about LLMs. We simply do not know if they are or aren't conscious.
My objection is to your expression of strong confidence that they aren't conscious. As far as I can tell, the sensible thing to do is wait and watch for more conclusive evidence, assuming we ever get it.
Maybe I need to reread your opinion, but my understanding is that you are in the "LLMs are conscious/have minds" camp of thought.
I do not believe that my thoughts on the topic came up, at least in this thread. As above, I do not make strong claims that LLMs are conscious. I maintain uncertainty. I don't particularly think the question even matters, since I wouldn't treat them any differently even if they were. "Mind" is a very poorly defined term (and we're already talking about consciousness, which doesn't do so hot itself). I think that conceptualizing each instance of an LLM as being a mind is somewhat defensible, even if that's not a hill I particularly care to die on.
Well, I'd have to go looking for kosher sausages. Shame, pork really is the best for that.
Also you've gotten reductionism vs abstractions completely backwards.
That's what I get for arguing at 3 am. I do know the difference.
See my latest reply to Toll for more.
A corporation is a higher-level abstraction with goals, memory, persistence, and decision-making. Do we think corporations are conscious?
They are more "conscious" than a rock. I do not know if they have qualia, but at least they contain conscious entities as sub-agents (humans).
A nation-state has beliefs, intentions, and agency in discourse. Are they conscious? Do they feel pain?
Would you start objecting if someone were to say "China is becoming increasingly conscious of the risk posed by falling behind in the AI race against America"? Probably not. Are they actually conscious? Idk. The terminology is still helpful, and shorter than an exhaustive description of every person in China.
A thermostat system “wants” to maintain temperature. Are they alive?
No, but the word "alive" is slightly more applicable here than it would be to a rock. Applying terms such as "alive" to a thermostat is a daft thing to do in practice, we have more useful frameworks: an engineer might use control theory, a home owner might only care about what the dials do in terms of the temperature in the toilet. Nobody gets anything useful out of arguing if it's living or dead.
LLMs don't have minds and they aren't conscious.
Hold on there. You are claiming, in effect, to have solved the Hard Problem of consciousness. How exactly do you know that they're not conscious? Can you furnish a mechanistic model that demonstrates that humans made of atoms or meat are "conscious" in a way that an entity made of model weights can't be even in principle?
They are parameterized conditional probability functions, that are finite-order Markovian models over token sequences. Nothing exists outside their context window. They don't persist across interactions, there is no endogenous memory, and no self-updating parameters during inference.
Entirely correct.
They have personality like programing languages or compilers have personality, as a biased function of how they were built, and what they were trained on.
That is not mutually exclusive to anything I've said so far.
As opposed to what? A tiger not made out of atoms?
We've only known that tigers are made out of atoms for a few hundred years. That is a fact of interest to biologists, I'm sure, but everyone else was and is well-served by a higher-level description such as "angry yellow ball of fur that would love to eat me if it could." The point is that the more reductionist framework doesn't obviate higher-level models. They are complementary. Both models are useful, and differentially useful in practice.
(The tiger could be made out of 11-dimensional strings, or it, like us, could be instantiated in some kind of supercomputer simulating our universe, as opposed to atoms. This makes very little difference when the question is how to run a zoo or how to behave when you see one lurking in your driveway.)
You think that LLMs being a "bunch of weights" makes ascribing a personality to them somehow incorrect. I don't see how that's the case, any more than someone arguing that humans (or tigers) being made of atoms precludes us from being conscious, having minds or having personalities, even if we don't know how those properties arise from atoms.
That something looks like, sounds like, and walks like a duck doesn't always make it a duck. For example, is Donald Duck a duck? Well, we can say yes and know that he's a representation of a conception of a duck with a human-like personality mapped onto him (see where I'm going ...) but it doesn't make him a duck made out of atoms - which seems to be, like, important or something.
He's not Donald Goose, is he? Jokes aside, I'm not sure what the issue is here. Donald Duck on your TV is a collection of pixels, but his behavior can be better described by "short-tempered anthropomorphic duck with an exhibitionist streak" (he doesn't wear pants, probably on principle).
If you sit down to play chess against Stockfish, you can say "this is just a matrix of evaluation functions and search trees." You would be correct. But if you actually want to win, you have to model it as a Grandmaster-level opponent. You have to ascribe it "intent" (it wants to capture my queen) and "foresight" (it is setting a trap), or you will lose.
My point is basically an endorsement of Daniel Dennett's "intentional stance". Quoting the relevant Wikipedia article:
The core idea is that, when understanding, explaining, and/or predicting the behavior of an object, we can choose to view it at varying levels of abstraction. The more concrete the level, the more accurate in principle our predictions are; the more abstract, the greater the computational power we gain by zooming out and skipping over the irrelevant details.
The most concrete is the physical stance, the domain of physics and chemistry, which makes predictions from knowledge of the physical constitution of the system and the physical laws that govern its operation; and thus, given a particular set of physical laws and initial conditions, and a particular configuration, a specific future state is predicted (this could also be called the "structure stance").[15] At this level, we are concerned with such things as mass, energy, velocity, and chemical composition. When we predict where a ball is going to land based on its current trajectory, we are taking the physical stance. Another example of this stance comes when we look at a strip made up of two types of metal bonded together and predict how it will bend as the temperature changes, based on the physical properties of the two metals.
Most abstract is the intentional stance, the domain of software and minds, which requires no knowledge of either structure or design,[17] and "[clarifies] the logic of mentalistic explanations of behaviour, their predictive power, and their relation to other forms of explanation" (Bolton & Hill, 1996, p. 24). Predictions are made on the basis of explanations expressed in terms of meaningful mental states; and, given the task of predicting or explaining the behaviour of a specific agent (a person, animal, corporation, artifact, nation, etc.), it is implicitly assumed that the agent will always act on the basis of its beliefs and desires in order to get precisely what it wants (this could also be called the "folk psychology stance").[18] At this level, we are concerned with such things as belief, thinking and intent. When we predict that the bird will fly away because it knows the cat is coming and is afraid of getting eaten, we are taking the intentional stance. Another example would be when we predict that Mary will leave the theater and drive to the restaurant because she sees that the movie is over and is hungry.
As I've taken pains to explain, conceptualizing LLMs as a bunch of weights is correct, but not helpful in many contexts. Calling them "minds" or ascribing them personalities is simply another model of them, and one that's definitely more tractable for the end-user, and also useful to actual AI researchers and engineers, even if they're using both models.
Note that this conversation started off with a discussion about using LLMs for coding purposes. That is the level of abstraction that's relevant to the debate, and at that level, noting the macroscopic properties I'm describing is more useful, or at least adds useful cognitive compression and produces better models, than calling them a collection of weights.
I will even grant the main failure mode: anthropomorphism becomes actively harmful when it causes you to infer hidden integrity, stable goals, or moral patienthood, and then you stop doing the boring engineering checks. But that is an argument for using the heuristic carefully, not an argument that the heuristic is incoherent. As far as I'm aware, I don't make that kind of mistake.
I find it peculiar that Karpathy doesn't see a relationship between those two things.
Hmm? That's not my takeaway from the tweet (xeet?). He's not denying a connection between AI capabilities and code quality decline, he's making a more subtle point about skill distribution.
The basic model goes like this: AI tools multiply your output at every skill level. Give a novice programmer access to ChatGPT, Claude or Copilot (maybe not Copilot, lol), and they go from "can't write working code" to "can produce something that technically runs." Give an expert like Karpathy the same tools, and he goes from "excellent" to "somewhat more excellent." The multiplicative factor might even be similar! But that's the rub: there are way more novices than experts.
So you get a flood of new code. Most of it is mediocre-to-bad, because most people are mediocre-to-bad at programming, and multiplying "bad" by even a generous factor still gives you "not great." The experts are also producing more, and their output is better, but nobody writes news articles about the twentieth high-quality library that quietly does its job. We only notice when things break.
This maps onto basically every domain. Take medicine as a test case (yay, the one domain where I'm a quasi-expert). Any random person can feed their lab results into ChatGPT and get something interpretable back. This is genuinely useful! Going from "incomprehensible numbers" to "your kidneys are probably fine but your cholesterol needs work" is a huge upgrade for the average patient. They might miss nuances or accept hallucinated explanations, but they're still better off than before.
Meanwhile, as someone who actually understands medicine, I can extract even more value. I can write better prompts, catch inconsistencies, verify citations, and integrate the AI's suggestions into a broader clinical picture. The AI makes me more productive, but I was already productive, so the absolute gains are smaller relative to my baseline. And critically, I'm less likely to get fooled by confident-sounding nonsense (it's rare but happens at above negligible rates).
This is where I tentatively endorse a "skill issue" framing: everyone's output gets multiplied, but bad times a multiplier is still usually bad, and there are simply more bad actors (in the neutral sense) than good ones. Both the numerator (slop) and the denominator (good output) in "slop per good output" have grown, but the numerator was already bigger to start with. From inside the system, if you're Andrej Karpathy, you mostly notice that you're faster. From outside, you notice that GitHub is full of garbage and the latest Windows update broke your system.
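Since I'm hand-waving at a multiplier model, here's a back-of-the-envelope version. Every number is invented purely for illustration: a right-skewed skill distribution, a bar for "good" output, a bar for being able to ship anything at all, and a flat AI multiplier.

```python
# Back-of-the-envelope sketch of "everyone gets multiplied, slop grows fastest".
# All thresholds and distributions below are made up for illustration only.
import random

random.seed(0)
skills = [random.lognormvariate(0, 1) for _ in range(100_000)]  # many novices, few experts

GOOD_BAR = 3.0  # output counts as "good" if the author's skill clears this bar

def shipped_output(ship_bar: float, units_per_person: float) -> tuple[float, float]:
    """Total good vs mediocre output from everyone skilled enough to ship at all.
    (Anyone above GOOD_BAR automatically clears the ship bar too.)"""
    good = sum(units_per_person for s in skills if s >= GOOD_BAR)
    slop = sum(units_per_person for s in skills if ship_bar <= s < GOOD_BAR)
    return good, slop

# Pre-AI: only the reasonably skilled ship anything, one unit of output each.
good0, slop0 = shipped_output(ship_bar=1.0, units_per_person=1.0)
# With AI: near-novices can now ship too, and everyone ships ~3x as much.
good1, slop1 = shipped_output(ship_bar=0.25, units_per_person=3.0)

print(f"before AI: good={good0:.0f}, slop={slop0:.0f}, slop/good={slop0 / good0:.1f}")
print(f"with AI:   good={good1:.0f}, slop={slop1:.0f}, slop/good={slop1 / good1:.1f}")
# Good output roughly triples, but the slop pile grows several times over,
# because the newly enabled producers sit almost entirely below the quality bar.
```

The exact numbers don't matter; the shape does. Raise everyone's throughput and lower the barrier to shipping, and the visible flood is dominated by people who were never going to clear the quality bar.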
This isn't even a new pattern. Every productivity tool follows similar dynamics. When word processors became common, suddenly everyone could produce professional-looking documents. Did the average quality of written work improve? Well, the floor certainly rose (less illegible handwriting, if I continue to accurately insult my colleagues), but we also got an explosion of mediocre memos and reports that previously wouldn't have been written at all. The ceiling barely budged because good writers were already good. I get more use out of an LLM for writing advice than, say, Scott.
Hey, I'm fond of it, and I'll miss it when the imminent deprecation hits. I literally never used it for coding, but I found that it was excellent at rewriting text in arbitrary styles, better than any SOTA model at the time, and still better than many. Think "show me what this scifi story would be like if it was written by Peter Watts".
I have no idea why a trimmed down coding-focused LLM was so damn good at the job, but it was. RIP to a real one.
Does that include people with Down's Syndrome? Outright and obvious diseases aside, I can think of plenty of people who are so unpleasant/pointless to talk to that I'd speak to an AI any day instead. And even within AI, there are models that I'd prefer over the alternatives.