self_made_human
amaratvaṃ prāpnuhi, athavā yatamāno mṛtyum āpnuhi (attain immortality, or die trying)
I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken as granted for someone in the profession of saving lives.
At any rate, I intend to live forever or die trying. See you at Heat Death!
Friends:
A friend to everyone is a friend to no one.
Man (pun not initially intended), I've never felt like "being" a man, nor have I ever felt tempted to become a woman. I wouldn't hit the magic button, I'm content the way I am.
I have a masculine personality and stereotypically male interests. I'm very good at the touchy-feely stuff when I can be arsed (I had a bunch of female friends, even if I had more male friends overall), but I prefer the way men talk to other men. Being a woman also comes with severe inconveniences in the form of periods. And here I was feeling bad for myself after getting migraines once every few months.
When I contemplate changing my body, I envision becoming taller, stronger or more handsome (and escaping from the prison of my flesh)—more masculine—I've never even desired to be twinkish, let alone feminine.
At the end of the day, I don't feel like I have some kind of pointer in my head that affirms my male personality. If I turned into a woman by means either magical or Indistinguishable From It Science, I'd just do the same things I already do. There's no gender dysphoria or euphoria, there just is.
any human
Does that include people with Down's Syndrome? Outright and obvious diseases aside, I can think of plenty of people who are so unpleasant/pointless to talk to that I'd speak to an AI any day instead. And even within AI, there are models that I'd prefer over the alternatives.
I've been... lazy in that regard. Far too much money in my account that's not accruing interest. But yes, that's a factor. I also earn an OOM more than I did back in India, which definitely helps. If I were less lazy, I'd have put most of my money in the S&P 500 by now, but I've already put myself in a much better place than if I'd been complacent about things.
I don't expect that this will necessarily make me rich in relative terms, I'm starting too low, too late. But I want enough of a safety net to survive in comfort for the (potential) period of unemployment when AI eats my profession whole, before we implement solutions such as UBI. Not starving, not dying in a riot, all of that is important to me.
I'm probably in the 99.99th percentile for doctors (or anyone else) when it comes to the use of AI in the workplace. I estimate I could automate 90% of my work (leaving aside the patient-facing stuff and things that currently require hands and a voice) if I were allowed to.
The main thing holding me back? NHS IT, data protection laws, and EMR software that still has Windows XP design language. This means I'm bottlenecked by inputting relevant information into an AI model (manually trawling the EMR, copying and pasting information, taking screenshots of particularly intransigent apps) and also by transferring the output into the digital record.
The AIs are damn good at medicine/psychiatry. Outside my own domain, I have a great deal of (justified) confidence in their capabilities. I've often come to take their side when they disagree with my bosses, though the two are usually in agreement. I've used them to help me figure out case presentations ("what would a particularly cranky senior ask me about this specific case?", and guess what, that's exactly what they asked), to give me a quick run-down on journal publications, to help me figure out stats, to sanity-check my work, to help decide an optimal dose of a drug, etc. There's very little they can't do now.
That's the actual thinky stuff. A lot of my time is eaten up by emails, collating and transcribing notes and information, and current SOTA models can do these in a heartbeat.
To an extent, this is an artifact of resident doctors often being the ward donkey, but I'm confident that senior clinicians have plenty to gain or automate away. The main reason they don't is the fact that they're set in their ways. If you've prescribed every drug under the sun, you don't need to pop open the BNF as often as a relative novice like me would - that means far less exploration of what AI can do for you. Yet they've got an enormous amount of paperwork and regulatory bullshit to handle, and I promise it can be done in a heartbeat.
Hell, in the one hospital where I get to call the shots (my dad's, back in India), I managed to cut down enormous amounts of work for the doctors, senior or junior. Discharges and summaries that would take half a day or more get done in ten minutes, and senior doctors have been blown away by the efficiency and quality gains.
Most doctors are at least aware of ChatGPT, even if the majority use whatever is free and easy. I'm still way ahead of the curve in application, but eventually the human in the loop will be vestigial. It's great fun till they can legally prescribe, at which point, RIP human doctors.
ChatGPT isn’t so different from, say, Jarvis in Iron Man (or countless other AIs in fiction), and the median 90-100IQ person may even have believed in 2007 that technology like that actually existed “for rich people” or at least didn’t seem much more advanced than what they had.
Eh? I'm very confident that's wrong. Normies might not appreciate the impact of ChatGPT and co to the same degree, but I strongly doubt that they literally believed that there was human-level AI in 2021. AGI was science fiction for damn good reason, it didn't exist, and very, very few people expected we'd see it or even precursors in the 2020s. Jarvis was scifi, and nobody believed that something like Siri was in the same weight-class.
To shift focus back to your main thesis: the normie you describe is accustomed and acclimatized to being average. Bitter experience has proven to them that they're never going to be an "intellectual" and that their cognitive and physical labor is commoditized. It's unlikely that being the smartest person in the room (or in spitting distance) is an experience they're familiar with. Hence they have less to lose from a non-human competitor who dominates them in that department.
On the other hand, the average Mottizen is used to being smart, and to working in a role where it's not easy to just grab a random person off the street to replace them. That breeds a certain degree of discomfort at the prospect. I've made my peace, and I'm going to do what I can to escape the (potential) permanent underclass. It would be nice to have a full, accomplished career with original contributions to my professional field or the random topics I care about, but I'll take a post-scarcity utopia if I can get it.
Call me an optimist, but if we're talking about a civilization capable of rendering the observable universe at our perceived level of fidelity, I think they've got that handled.
Why not? The question was whether small tweaks in the sim parameters might have catastrophic effects. Being able to revert to an earlier save or checkpoint is useful either way.
It's okay, I'm sure they've got rolling backups.
The Culture series? The Golden Oecumene, for a slightly different take on a post-scarcity utopia.
I was visiting my psychiatrist uncle while his dad was over. As part of Standard Indian Hospitality, they tried offloading their spare wardrobe onto me. This included a pair of high-waisted grandpa trousers.
I'm sold. They're surprisingly comfy, look good on me, and go with everything. Highly recommend.
The incredibly loud and sequined suits? Less so, but you take the good with the bad.
I can nominate @ArjinFerman or @Corvos, if they're willing to accept. I'd be happy to not bother with an escrow if you're fine with it, given the lower sums involved.
My proposed terms are clear concessions on an acquittal or conviction, and if this somehow doesn't resolve in 2 years, a general throwing up of hands and acceptance that we're never getting to the bottom of this.
I don't see how this disagrees with anything I've said?
The hypothetical example you've presented is probably more cut-and-dried than anything we've seen here. I suspect that it would actually be more likely to end in a conviction than you think; judges do not regularly do Bayesian calcs in court.
A guilty verdict is very strong evidence of guilt. A verdict of 'not proven' is very weak evidence of factual innocence (as opposed to legal innocence).
I agree, in fact I alluded to the same. If a video came out showing an assault by the accused and without a conviction (as unlikely as that is), then I'd be willing to accept that in lieu of a favorable legal verdict.
keep in mind your original argument rested on nothing more than statements from the police, not official charges, or an actual conviction
Hmm? I don't think that's the case. I also heavily stressed what can only be described as "local sentiment", perhaps priors, in addition to the official story. The locals (debatably including me) thought it more likely than not.
For example:
My own priors, which seem to match those of most actual Scots I’ve spoken to, lean toward a more mundane explanation.
and how you portrayed anyone unconvinced by your arguments as unreasonable.
That is not true. I think I made a strong argument, but I also acknowledge:
I would like to believe that this clarification settles things, but I am also not naïve. If your epistemic filter is tuned to maximum paranoia, then the absence of evidence is merely further evidence of a cover-up. For everyone else, the police statement, local skepticism, and sociological context should nudge your priors at least a little.
In other words, as a Bayesian, my opinion is that you should at the very least be slightly swayed by the argument. That is not the same as thinking that anyone who disagrees with me is unreasonable. There are actual people (living breathing humans) who are immune to any argument, probably including divine intervention. My scorn is largely reserved for them.
Similarly, the article you shared has meaningfully moved my posteriors. Back then, I expect that if anyone asked, I'd say I'm 80-90% confident of a lack of guilt, and now I've moved down to 70%. That is precisely the kind of update in the face of new evidence that I endorse and respect. Hence why I do it myself.
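For concreteness, a back-of-envelope version of that update, taking 85% as the midpoint of the earlier range (the figures are illustrative, not something I actually computed at the time): prior odds ≈ 0.85/0.15 ≈ 5.7:1 in favor of non-guilt, posterior odds ≈ 0.70/0.30 ≈ 2.3:1, so the implied likelihood ratio is about 2.3/5.7 ≈ 0.4. In other words, I'm treating the article as evidence a bit over twice as likely under guilt as under innocence.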
I expect that if a conviction is secured, I'd jump to maybe a 90% certainty that I was wrong, and if they're acquitted, then back up to 90% confidence of being correct. Feel free to tag me if something happens, since I don't really read the BBC that often.
I'm not sure I trust you enough to hold up your end of the bargain. If, for the sake of example, it was @ArjinFerman offering, I'd take it, though I'd prefer smaller sums like £50:25 since I don't care that much. If you're willing to go through the hassle of finding someone to use as an escrow, while using crypto (which is hassle on my part), sure.
If not, I care enough about my reputation and epistemics to happily accept being proven wrong, if and when I'm proven wrong.
I agree that's a possibility, and if that's proven, I'd be more sympathetic. I personally disagree quite a bit with the UK's approach towards banning pretty much every form of self-defense.
I'm happy to concede if the prosecution ends in a conviction. I still think it's more likely than not that they're acquitted (if I had to put a number on it, 70%).
I'm also happy to acknowledge that acquittal doesn't necessarily mean a lack of guilt, but I don't think the British judicial system is so corrupt that it represents null evidence.
I feel like it would be less hassle to take a heroic dose of viagra and then think of England.
"The AI doesn't hate you, it doesn't love you, but it knows you are made of atoms it can use for something else."
About 2 years back. At the time I'd perceived it as unilateral, and I'd been in the midst of incredibly stressful exams while under a lot of pressure. That was also the case for the second episode a year later. I never had anything similar during my childhood or adolescence.
The scotoma was also very different from my impression of what a typical aura was like (not that I'm an expert on migraines), so that, combined with the initial absence of headaches, meant I never considered migraines as a cause.
I used to have vision problems. They'd diagnosed me with central serous chorioretinopathy, which is one of those conditions that sounds scary but usually isn't. The main exception: sometimes it progresses to retinal detachment and you go blind. But this seemed unlikely enough that I wasn't too worried.
For years I'd get maybe one episode annually. Fine, whatever. Then a few months ago the frequency ramped up dramatically, and I started wondering if something was wrong.
I did what any reasonable person does when they suspect their diagnosis might be incorrect: I asked several AI models. They all said roughly the same thing: this doesn't look like CSCR. The visual field defects were appearing and resolving way too quickly. Plus I was getting headaches, eye pain, and nausea, though mostly just the scotoma by itself. None of this quite fit.
Today I had another episode and finally got myself to an ophthalmologist. I came prepared. I'd used an Amsler grid to map the exact coverage of the blind spot and tracked how it progressed over time. The smoking gun: this time I had clear evidence the scotoma was bilateral, affecting both eyes instead of just one.
ChatGPT (the 5 Thinking model) had been pretty insistent this was migraine with aura (and careful to exclude more concerning pathology like a TIA/amaurosis fugax). After the ophthalmologist spent several minutes blinding me with lights so bright they photobleached my retinas (plus mydriatics; I couldn't read anything closer than two feet for a while afterwards, which is why I'd been putting off the appointment), guess what conclusion they reached?
Migraine with aura.
On one hand: relief. No risk of going blind after all. On the other hand: migraines suck, and I'm pretty annoyed that multiple doctors missed this. The symptoms appearing and disappearing so quickly should have raised immediate doubts about CSCR, which takes days to resolve. Even I started questioning it once the pain showed up, though admittedly I never jumped to migraines either. But my excuse is that I'm a psychiatrist, not an ophthalmologist.
Unfortunately, I was also diagnosed with ocular hypertension, which is a risk factor for glaucoma. Uh... you win some, you lose some? It's also helpful for your clinician to run tests on you in person, so go see a doctor too, even if ChatGPT is very helpful. It sadly lacks thumbs.
So once LLMs start having little green men inside them they will be as conscious as a corporation haha. Also a corporation itself is not more conscious than a rock, as the corporation cannot do anything without conscious agents acting for it. It has no agency on its own. If I create an LLC and then forget about it, does it think? does it have its own will? or does it just sit there on some ledger. If a rock has people carrying it around and performing tasks for it, has it suddenly gained consciousness?
It is helpful to consider another analogue: the concept of being "alive". A rock is clearly not alive. A human is. So are microbes, but once we get to viruses and prions, the delineation between living and non-living becomes blurry.
Similarly, it is entirely possible that consciousness can be continuous. I'm not a pan-psychist, I think it's dumb to think that an atom or a rock has any degree of consciousness, but consider the difference between an awake and lucid human, one who is drunk, one who is anesthetized or in a coma, someone lobotomized, a fetus etc. We have little idea what the bare minimum is.
A rock is no more conscious for being held than it was before. I think it's fair to say that the rock+human system as a whole is conscious, but only as conscious as the human already was. Think about it, there already is a "rock" in every human: a collection of hydroxyapatite crystals and protein matrices that make up your bones. And yet your consciousness clearly does not lie in your bones. Removing your femur won't impact your cognition, though you'll have a rather bad limp.
Humans are already made up of non-sentient building blocks. Namely the neurons in your brain. I think we can both agree that a single neuron is not meaningfully conscious, but in aggregate?
And guess what? We can already almost perfectly model a single biological neuron in silico.
https://www.quantamagazine.org/how-computationally-complex-is-a-single-neuron-20210902/
This function is what the authors of the new work taught an artificial deep neural network to imitate in order to determine its complexity. They started by creating a massive simulation of the input-output function of a type of neuron with distinct trees of dendritic branches at its top and bottom, known as a pyramidal neuron, from a rat’s cortex.
they fed the simulation into a deep neural network that had up to 256 artificial neurons in each layer. They continued increasing the number of layers until they achieved 99% accuracy at the millisecond level between the input and output of the simulated neuron. The deep neural network successfully predicted the behavior of the neuron’s input-output function with at least five — but no more than eight — artificial layers. In most of the networks, that equated to about 1,000 artificial neurons for just one biological neuron.
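To make the quoted setup concrete, here's a minimal sketch, assuming PyTorch purely for illustration (this is not the authors' actual code), of fitting a small fully connected surrogate network to a simulated neuron's millisecond-binned input-output traces. The layer count and width come from the figures quoted above; the number of synaptic input channels, the loss, and the training loop are placeholder assumptions.

```python
# Hedged sketch: approximate a simulated neuron's input-output function
# with a small deep network. Not the paper's architecture or code.
import torch
import torch.nn as nn

n_synapses = 1000        # placeholder: number of synaptic input channels
hidden, depth = 256, 7   # "up to 256 artificial neurons", "five to eight layers"

layers = [nn.Linear(n_synapses, hidden), nn.ReLU()]
for _ in range(depth - 1):
    layers += [nn.Linear(hidden, hidden), nn.ReLU()]
layers += [nn.Linear(hidden, 1)]  # predicted output (e.g. somatic response) per ms bin
surrogate = nn.Sequential(*layers)

opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(x, y):
    """x: (batch, n_synapses) synaptic drive for one millisecond bin,
       y: (batch, 1) the biophysical simulation's output for that bin."""
    opt.zero_grad()
    loss = loss_fn(surrogate(x), y)
    loss.backward()
    opt.step()
    return loss.item()
```

The point isn't the specific architecture; it's that "one biological neuron ≈ a thousand artificial ones" is the kind of thing you can write down and train in a few dozen lines.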
In principle, there's nothing preventing us from scaling up to a whole rat brain or even a human brain, all while using artificial neural nets. I am of course eliding the enormous engineering challenges involved, but it can clearly be done in principle and that's what counts.
(I'm aware that the architectures of an LLM and any biological brain are very different)
My basic theory(really a constraint) of conscious behavior:
Any sentient system must have persistent internal state across time.
This implies non-Markovian dynamics with respect to perception and action.
LLMs are finite-context, externally stateful, inference-time Markovian systems. Therefore, LLMs lack a necessary condition for consciousness.
We have examples of sentient systems with no persistent state, and humans to boot. There are lesions that can cause complete anterograde amnesia. Such people can maintain a continuous but limited-capacity short-term memory, but the standard process of encoding and storage to long-term memory fails.
They can remember the last ~10 minutes (context window) and details of their life so far (latent knowledge) but do not consolidate new memories and thus are no longer capable of "online" learning. I do not think it's controversial that such people are conscious, and I certainly think they are.
That demonstrates, at least to my satisfaction, an existence proof that online learning is not a strict necessity for consciousness.
Further, I do not think that using an external repository to maintain state is in any way disqualifying. Humans use external memory aids all the time, and we'll probably develop BCIs that can export and import arbitrary data. There is nothing privileged about storage inside the space of the skull, it's just highly convenient.
I have strong confidence that I'm conscious, and so are you and the typical human (because of biological and structural similarities). I am also very confident that rocks and atoms aren't. I am far more agnostic about LLMs. We simply do not know if they are or aren't conscious.
My objection is to your expression of strong confidence that they aren't conscious. As far as I can tell, the sensible thing to do is wait and watch for more conclusive evidence, assuming we ever get it.
Maybe I need to reread your opinion, but my understanding is that you are in the "LLMs are conscious/have minds" camp of thought.
I do not believe that my thoughts on the topic came up, at least in this thread. As above, I do not make strong claims that LLMs are conscious. I maintain uncertainty. I don't particularly think the question even matters, since I wouldn't treat them any differently even if they were. "Mind" is a very poorly defined term (and we're already talking about consciousness, which doesn't do so hot itself). I think that conceptualizing each instance of an LLM as being a mind is somewhat defensible, even if that's not a hill I particularly care to die on.
I'll eat my hat if they were anywhere near a majority. I'm far more inclined to believe that polling would show something very close to Lizardman's Constant.