@self_made_human's banner p

self_made_human

amaratvaṃ prāpnuhi, athavā yatamāno mṛtyum āpnuhi (attain immortality, or die striving for it)

15 followers   follows 0 users  
joined 2022 September 05 05:31:00 UTC

I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor"; it would be taken as a given for someone in the profession of saving lives.

At any rate, I intend to live forever or die trying. See you at Heat Death!

Friends:

A friend to everyone is a friend to no one.

User ID: 454


Post-training quantization is often enough to get 8-bit models close to floating-point accuracy.

Sorry, I wrote that while rather sleep-deprived, though I'm not sure what doesn't make sense about it?

What I was trying to say is that it's standard practice to quantize models down significantly, switching from FP32 to INT8 without significant degradation in quality. You can go even harder: 4-bit quantization is common these days, and I'm pretty sure I've read claims of quantizing all the way down to a single bit.
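
For what that looks like in practice, here's a minimal PyTorch sketch. The toy model and its sizes are made up for illustration, but quantize_dynamic is the real API:

```python
# A minimal sketch of post-training dynamic quantization in PyTorch.
import torch
import torch.nn as nn

# Toy stand-in for a real network.
model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)

# Convert the Linear layers' weights from FP32 to INT8; activations are
# quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(model(x))      # FP32 output
print(quantized(x))  # INT8-weight output, typically very close to the above
```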

I don't think any amount of wordsmithing can get around this disagreement or make people change their minds about the level of epsilon that seems reasonable to them. In principle, though, I can imagine hypothetical experiments where we actually copy people at different levels of epsilon and observe the resulting behavior; that might actually convince people that a certain epsilon is appropriate.

I don't think so. My point is this: as a biological human, you are already some arbitrary X% different from who you were and who you will be. So if you have a reasonable metric for the delta between the biological you (or your last recorded form, after destructive scanning) and the copy, and that delta falls within the same range, there's little basis to claim the copy isn't the same "person". And once we're comparing digital copies, there are plenty of already established metrics; I'd wager that KL divergence or something similar would come in handy when assessing behavior or cognitive output for fixed stimuli. Or something close to a perceptual hash function.
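
To make that concrete, here's a sketch of the kind of metric I have in mind. original and copy are hypothetical callables that each return a probability distribution over responses for a given stimulus; everything else is standard numpy/scipy:

```python
# Score behavioral divergence between an "original" and a "copy" by
# averaging the KL divergence of their output distributions over a
# battery of fixed stimuli.
import numpy as np
from scipy.special import rel_entr  # elementwise p * log(p / q)

def kl(p, q, eps=1e-12):
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0)
    q = np.clip(np.asarray(q, dtype=float), eps, 1.0)
    return rel_entr(p, q).sum()

def behavioral_delta(original, copy, stimuli):
    # 0 means behaviorally identical on this battery; larger values mean
    # the copy has drifted further from the original.
    return float(np.mean([kl(original(s), copy(s)) for s in stimuli]))
```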

I am closer, right now, to the person I was a second ago than to the person I was a week ago, or the person I'll be next month. This is fine. This is entirely unremarkable, and taken for granted by just about everybody who wasn't hit by a bus in the interim. But the point is that I consider this grounds to accept (bounded) deviations from ground truth in a subsequent digital copy as not a particularly big deal. If someone demands something even closer? Well, that's their prerogative. They just have to justify (at least to themselves) why they don't mind dying and becoming a new person every few days, weeks or years. If a version of me from 20 years ago or 20 years in the future showed up, we'd get along and we'd look after each other. I'm happy with that, even if I can't pinpoint a specific boundary where I wouldn't identify with divergent forks.

Lines of code.

  1. I do not need perfect accuracy (or operation on real numbers). Why would I? We run simulations all the time, and while accuracy is desirable, the brain itself is an intrinsically noisy and stochastic entity. It isn't perfectly self-similar from moment to moment, and when you consider measurement error, the gains from additional 9s of accuracy drop off precipitously. A night's sleep does not change who I consider myself to be as a person to any meaningful degree.
  2. I don't need that formal proof that the copy is perfect. Close enough for government work; close enough also works for me, if probably at a tighter tolerance.
  3. In other words, you're conflating exact representation with sufficient representation. The latter is what I care about, and it's significantly more tractable (see the sketch after this list).
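
A toy numpy illustration of the distinction, checking agreement within a tolerance rather than demanding bit-exact equality:

```python
# "Sufficient" rather than "exact" representation: run the same
# computation at full and reduced precision, then check agreement
# within a tolerance (the epsilon) instead of bit-exact equality.
import numpy as np

x = np.random.randn(1000)
w = np.random.randn(1000)

exact = np.dot(x, w)  # float64 "ground truth"
approx = float(np.dot(x.astype(np.float32), w.astype(np.float32)))  # lossy copy

print(np.isclose(exact, approx, rtol=1e-5))  # almost always True: close enough
```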

https://www.quantamagazine.org/how-computationally-complex-is-a-single-neuron-20210902/

They started by creating a massive simulation of the input-output function of a type of neuron with distinct trees of dendritic branches at its top and bottom, known as a pyramidal neuron, from a rat’s cortex. Then they fed the simulation into a deep neural network that had up to 256 artificial neurons in each layer. They continued increasing the number of layers until they achieved 99% accuracy at the millisecond level between the input and output of the simulated neuron. The deep neural network successfully predicted the behavior of the neuron’s input-output function with at least five — but no more than eight — artificial layers. In most of the networks, that equated to about 1,000 artificial neurons for just one biological neuron.
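
To give a rough sense of that setup, a toy sketch follows. Everything here is a stand-in: the actual work fit millisecond-level predictions to a detailed biophysical simulation of a pyramidal neuron, not my placeholder function, and the layer/width numbers below only loosely mirror the ones reported:

```python
# Fit a small deep network to the input-output function of a (fake)
# simulated neuron, loosely mirroring the ~5-8 layers of <=256 units
# described in the article.
import torch
import torch.nn as nn

def simulated_neuron(x):
    # Placeholder for the detailed compartmental simulation.
    return torch.tanh(x.sum(dim=-1, keepdim=True))

layers, dim_in = [], 128
for _ in range(6):  # six hidden layers of 256 units
    layers += [nn.Linear(dim_in, 256), nn.ReLU()]
    dim_in = 256
layers.append(nn.Linear(256, 1))
surrogate = nn.Sequential(*layers)

opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(1000):
    x = torch.randn(64, 128)  # random batch of "synaptic input"
    loss = nn.functional.mse_loss(surrogate(x), simulated_neuron(x))
    opt.zero_grad()
    loss.backward()
    opt.step()
```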

You're also overstating the scaling objection. It is true that in many domains a better approximation can cost much more compute. But that does not show that the relevant personal-level properties require astronomically fine precision. In modern ML, quantization is the routine counterexample: post-training quantization is often enough to get 8-bit models close to floating-point accuracy. You do lose performance and fidelity if you push things too far, but the tradeoff can be handled sensibly and save a lot of compute or memory.

Yes, you probably cannot get a formal proof that an earring is an epsilon-close continuation of you. But we do not demand formal proofs for identity anywhere else. We do not prove that the person waking up after sleep, anaesthesia, intoxication, or an episode of delirium is “really” the same person in a theorem-checking sense.

I am okay with a black-box/behavioral approach if mechanistic understanding or similar metrics aren't an option. Does the new copy behave in a manner consistent with me, for the same set of stimuli? How consistent? True perfection simply doesn't matter; I am not a perfect copy of myself from moment to moment anyway, even as a biological human. That makes these objections moot as far as I can tell.
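
In sketch form, with respond() as a purely hypothetical stand-in for however you'd query either system:

```python
# Black-box behavioral test: present the same stimulus battery to the
# original and the copy, and report how often their responses agree.
# Perfect agreement isn't required, just a score above whatever
# threshold you've decided counts as "me enough".
def behavioral_consistency(original, copy, stimuli):
    matches = sum(original.respond(s) == copy.respond(s) for s in stimuli)
    return matches / len(stimuli)
```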

Is someone the shape or the filling? The intangible and ineffable insides that might fill many shapes. Or the thing that is outside and visible to the world. Sounds like maybe you'd say the shape, the story suggests the filling.

I, quite literally, have little idea what that analogy means here. Seriously, it isn't obvious to me at all what it would mean for someone's consciousness or identity to be a shape or a filling. If you have another way of framing the question, I can attempt a more useful answer.

At least the way I see it, active consciousness is a dynamic process that exists only while there's active computation (and maybe requires temporality, though I don't see why you can't run a mind upload backwards or asynchronously). Capturing the information content of the original mind is necessary, but not sufficient, for consciousness; in the way that someone in cryo (when it's known to work) is neither truly dead nor actively alive. If nothing is happening, there's nobody there to experience anything. Owning a copy of a movie is not the same as playing it.

More poetically, I consider myself the wave, and not the water. The dance, not the dancer. If someone pisses in the pool, it won't bother me much, if at all. The performance can switch out extras on the fly without issue, as long as the production and choreography remain the same.

I do agree that the "practically indistinguishable" version of a p-zombie is a more serious concern. I am agnostic with regards to the qualia of LLMs.

I suspect, but can't prove beyond reasonable doubt, that sufficiently strong optimization towards the task of mimicking human speech and reasoning will, most of the time, produce cognitive circuitry that is surprisingly close to the real deal. I think I've mentioned that a good place to read up on that is Anthropic's posts and papers on their mechanistic interpretability (MechInt) work. Is that far enough to produce qualia, let alone humanlike qualia? Hell if I know!

From a pragmatic perspective, I would be okay with defaulting to believing that extremely humanlike agents might have qualia. I wouldn't like to make that assumption, and I don't for anything other than actual biological humans today, but I can see why it might just be the only way to handle things sensibly.

I agree with most of your points, at least when it comes to the principle that some things are far from settled facts.

But:

  1. I see no good reason to believe that manually running a neural network by hand would feel different from the "inside". That includes even an upload of a human mind. For me, at least, substrate independence implies more than just vibes and papers.
  2. It is very obvious to me that parts of a larger system can be unconscious or lack qualia while the larger ensemble has them. I think the Chinese Room is a ridiculous thing to take seriously, because I see no reason to think that a single neuron in my brain knows English, even if my whole brain clearly does. Are the pen and paper not conscious? Sure. But the atoms in me aren't conscious either, and that doesn't stop anything. You could still hook the output of that hand-calculated process up to a robot, and it could control the robot like a normal human might (in theory, if you're calculating fast enough; a forward pass is just arithmetic, as the sketch after this list shows).
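
A pen-and-paper neural network, to make the point literal. The weights and inputs are made up; the entire forward pass is arithmetic you could do by hand:

```python
# Every step here is plain multiply-and-add: nothing requires a GPU, or meat.
def relu(v):
    return [max(0.0, x) for x in v]

def layer(inputs, weights, biases):
    # One row of weights per output unit.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

x = [1.0, 0.5]                                    # "stimulus"
h = relu(layer(x, [[0.2, -0.4], [0.7, 0.1]], [0.0, 0.1]))
out = layer(h, [[1.0, -1.0]], [0.0])              # could drive a robot's motor
print(out)
```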

The idea that there's something essential with respect to consciousness about the human brain, or meat in general? Unfalsifiable at present, perhaps unfalsifiable forever. If a mind upload of a human claimed to have qualia, would you immediately believe them? I know many wouldn't.

But the usual invisible and intangible dragon in my garage is just as unfalsifiable. Nobody really believes in that one, so I'll give myself some credit for taking the more parsimonious/agnostic position, for reasons I see as justified.

In the interest of saving us time, let me just say that we approach the problem with very different premises. I share pretty much zero of your moral intuitions, and if I knew of an argument that could change your mind in that regard, I'd be smarter or more charismatic than the earring (and vice-versa with reference to you). I'm not.