
self_made_human

C'est la vie

16 followers   follows 0 users  
joined 2022 September 05 05:31:00 UTC

I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken as granted for someone in the profession of saving lives.

At any rate, I intend to live forever or die trying. See you at Heat Death!

Friends:

I tried stuffing my friends into this textbox and it really didn't work out.



User ID: 454


I've been shaking my head at that particular debacle; it seems the UK is just about the only country on the planet that takes utterly toothless "international law" seriously. They could have told the Mauritian government to shove it. What would they have done, cancel discount holiday vouchers and row over in a canoe?

I don't really have a horse in this race, but I still find it all too tiresome.

Out of boredom, I'm using Gemini to make a mortar calculator app that takes in screenshots/grid coordinates and outputs firing solutions. Should work in theory!
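The core math such an app needs is just flat-ground projectile motion; the LLM's job is mostly the screenshot parsing around it. A minimal sketch, assuming a known muzzle velocity per charge and ignoring drag and height differences (real in-game range tables account for both):

```python
import math

def firing_solution(gun, target, muzzle_velocity, g=9.81):
    """Flat-ground, drag-free solution between two map positions in metres.

    Returns (range_m, bearing_deg, elevation_deg) using the high arc
    mortars actually fire, or None if the target is out of range.
    """
    dx, dy = target[0] - gun[0], target[1] - gun[1]
    rng = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) % 360  # 0 = north, clockwise
    x = rng * g / muzzle_velocity ** 2  # equals sin(2 * elevation)
    if x > 1:
        return None  # beyond maximum range for this charge
    low = 0.5 * math.degrees(math.asin(x))
    return rng, bearing, 90 - low  # high-arc twin of the low solution
```

The projectile equation gives two elevations per range; mortars conventionally use the high one, hence the `90 - low`.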

I expect that when people usually say that, they're implicitly stating a strong belief that the problems are both solvable and being solved. Not that this necessarily means such claims are true.

If you're a Windows user seeking a more power-user experience, I strongly endorse Microsoft PowerToys, an official open-source set of utilities that started as something of a passion project by Microsoft devs. Current features:

  • Advanced Paste
  • Always on Top
  • PowerToys Awake
  • Color Picker
  • Command Not Found
  • Command Palette
  • Crop And Lock
  • Environment Variables
  • FancyZones
  • File Explorer Add-ons
  • File Locksmith
  • Hosts File Editor
  • Image Resizer
  • Keyboard Manager
  • Mouse utilities
  • Mouse Without Borders
  • New+
  • Paste as Plain Text
  • Peek
  • PowerRename
  • PowerToys Run
  • Quick Accent
  • Registry Preview
  • Screen Ruler
  • Shortcut Guide
  • Text Extractor
  • Workspaces
  • ZoomIt

I personally get some mileage out of the FancyZones feature, as it's a big upgrade over the default window tiling. With a 4K screen, it's a shame not to use the real estate to its fullest potential. I can also see the Screen Ruler being useful in Arma Reforger, where you need to measure distances on a map: a cheeky mortar calculator right there.
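To make the map-measuring trick concrete, the conversion is a single ratio once you calibrate the ruler against a grid square. A sketch with assumed numbers (the 100 m grid square is a guess at the game's map scale; calibrate against whatever your map actually uses):

```python
import math

def screen_distance(p1, p2):
    """Straight-line distance in pixels between two screen points."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def pixels_to_meters(px, grid_px, grid_m=100.0):
    """Convert a measured pixel distance into map metres, calibrated
    against how many pixels one grid square spans at the current zoom."""
    return px * grid_m / grid_px
```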

You know, the UK gets plenty of flak for its groveling attitude towards anyone with a slightly different shade of skin and the most threadbare justification behind seeking reparations for past injustice, but have you seen the other Commonwealth states? Australia and NZ are so cucked it beggars belief.

They all seem to cling to a form of DEI that's about a decade out of date, at least compared to the US, where it was never as strong or all-encompassing even at its peak.

What even drives people to such abject and performative self-flagellation?

Well, they've gotten better and better over time. I've been using LLMs since before they were cool, and we've probably seen a 1 to 2 OOM reduction in hallucination rates. The bigger they get, the lower the rate. It's not like humans are immune to mistakes, misremembering, or even plain making shit up.

In fact, some recent studies (on now outdated models like Claude 3.6) found zero hallucinations at all in tasks like medical transcription and summarization.

It's a solvable problem, be it through human oversight or the use of other parallel models to check results.

Apple using homomorphic encryption for image classification on the cloud:

https://boehs.org/node/homomorphic-encryption

Homomorphically Encrypting CRDTs:

https://jakelazaroff.com/words/homomorphically-encrypted-crdts/

That's for homomorphic encryption in particular, which, AFAIK, is the absolute peak of security. Then you've got more standard setups like VMs on the cloud, and prevention of data leakage between unrelated customers on the same hardware, in the manner that AWS/Azure handle things.
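For intuition on what "homomorphic" buys you, here's a toy Paillier sketch. It's only additively homomorphic, not the fully homomorphic schemes the links above discuss, and it uses laughably small primes purely for illustration:

```python
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=101, q=113):
    # Toy primes for illustration only; real Paillier uses ~2048-bit primes.
    n = p * q
    lam = lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)  # valid because we pick g = n + 1
    return (n, n + 1), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(2, n)
    while gcd(r, n) != 1:  # r must be invertible mod n
        r = random.randrange(2, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    # L(x) = (x - 1) // n, then multiply by mu mod n.
    return (pow(c, lam, n * n) - 1) // n * mu % n
```

The party trick: multiplying two ciphertexts mod n² yields a ciphertext of the *sum* of the plaintexts, so the server can add numbers it cannot read.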

I appreciate the thorough response, but I think you're painting an unnecessarily bleak picture that doesn't account for several key factors.

You're right that my argument depends on relatively stable economic institutions, but this isn't as unrealistic as you suggest. We already have financial instruments that span centuries - perpetual bonds, endowments, trusts. The Vatican has maintained financial continuity for over 500 years.

Improving technology makes it at least theoretically possible to have such systems become even more robust, spanning into the indefinite, if not infinite future.

So this argument seems to depend on an eternally-stable investment market where you can put in value today and withdraw value in, say, five thousand years. No expropriation by government, no debasement of currency, no economic collapse, no massive fraud or theft, no pillage by hostile armies, every one of which we have numerous examples of throughout human history.

The precise details of how a post-Singularity society might function are beyond me. Yet I expect it would have far more robust solutions to such problems. What exactly is the currency to debase, when we might trade entirely in units of energy or in cryptocurrency?

The estimate I've heard recently is that the UK grooming gangs may have raped as many as a million girls. The cops looked the other way. The government looked the other way. My understanding is that the large majority of the perpetrators got away with it, and the few that got caught received minimal sentences for the amount of harm they caused.

Where on Earth did you come across this claim???

Does it not strike you as prima facie absurd? The population of the UK is about 68 million; if around 1.5% of the entire population, or 3% of the women, had been raped by organized "rape gangs", I think we'd have noticed. I live here, for Christ's sake. That's the kind of figure you'd expect in a country under occupation, or literally in the midst of a civil war.

The confirmed numbers, which are definitely an understatement, are about 5k girls total. I don't see how you can stretch that by a further two-plus orders of magnitude no matter how hard you try.

Putting aside those absurd figures:

The grooming gangs are indeed horrific, but they're not representative of how most vulnerable populations are treated in developed societies. For every Rotherham, there are thousands of care homes, hospitals, and institutions that function reasonably well. The vast majority of elderly people in care facilities, despite being physically vulnerable and economically dependent, aren't systematically abused.

Your examples of state collapse and genocide are real risks, but they're risks that already exist for biological humans. The question isn't whether bad things can happen, but whether the additional risks of uploading outweigh the benefits. A world capable of supporting uploaded minds is likely one with sophisticated technology and institutions - probably more stable than historical examples, not less.

To sum up: you are counting on money to protect you, on the understanding that you will be economically useless, and the assumption that you will have meaningful investments and that nothing bad will ever happen to them. You are counting on people who own you to be trustworthy, and to only transfer possession of you to trustworthy people. And you are counting on the government to protect you, and never turn hostile toward you, nor be defeated by any other hostile government, forever.

You're describing the experience of a retiree.

The "ownable commodity" framing assumes a particular legal framework that need not exist. We already have legal protections against slavery, even of non-standard persons (corporations have rights, as do some animals in certain jurisdictions). There's no reason uploaded minds couldn't have robust legal protections - potentially stronger than biological humans, since their substrate makes certain forms of evidence and monitoring easier.

You mention trust extending through infinite chains, but this misunderstands how modern systems work. I don't need to trust every person my bank trusts, or every person my government trusts. Institutional structures, legal frameworks, and distributed systems can provide security without requiring universal interpersonal trust.

As Einstein, perhaps apocryphally, said: "Compound interest is the most powerful force in the universe." A post-Singularity economy has hordes of Von Neumann swarms turning all the matter within grasp into something useful, with a rate of growth hard-capped only by the speed of light. It's not a big deal to expect even a small investment to compound; that's how retirement funds work today.
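The arithmetic behind that claim is ordinary compound growth. The rate and horizons below are pure placeholders, not predictions:

```python
def compound(principal, annual_rate, years):
    """Future value under annual compounding: P * (1 + r)^t."""
    return principal * (1 + annual_rate) ** years

# Placeholder numbers: $10k at an assumed 5% real return.
over_a_career = compound(10_000, 0.05, 40)    # about 7x the principal
over_deep_time = compound(10_000, 0.05, 200)  # about 17,000x the principal
```

The point is the exponent: stretch the horizon from a working life to deep time and the multiplier goes from single digits to five figures, before even assuming post-Singularity growth rates.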

Further, you assume that I'll be entirely helpless throughout the whole process. Far from it. I want to be a posthuman intelligence that can function as a peer to any ASI, and plain biology won't cut it. Uploading my mind allows for enhancements that mere flesh and blood don't allow.

I could also strive to self-host on my own hardware, or form a trusted community. There are other technological solutions to the issue of trust:

  1. Substrates running on homomorphic encryption, where the provider can run your consciousness without ever being able to "read" it.

  2. Decentralized hosting, where no single entity controls your file, but a distributed network does, governed by a smart contract you agreed to.

  3. I could send trillions of copies of myself into interstellar space.

They really can't get all of me.

At the end of the day, you're arguing that because a totalitarian government could create digital hells, I should choose the certainty of annihilation. That's like refusing to board an airplane because of the risk of a crash, and instead choosing to walk off a cliff. Yes, the crash is horrific, but the cliff is a 100% guarantee of the same outcome: death.

Your argument is that because a system can fail, it will fail in the worst way imaginable, and therefore I should choose oblivion. My argument is that the choice is between certain death and a future with manageable risks. The economic incentives will be for security, not slavery. The technology will co-evolve with its own safeguards. And the societal risks, while real, are ones we already face and must mitigate regardless. If the rule of law collapses, we all lose.

The ultimate omnipotent god in this scenario is Death, and I'll take my chances with human fallibility over its perfect, inescapable certainty any day.

A human. More or less, there are caveats involved. A brain-dead or severely cognitively impaired (without hope of improvement) human loses all/most of their moral worth as far as I'm concerned. Not all humans are made alike.

This doesn't mean that entities that are more "sophisticated", biologically or otherwise, but aren't human in terms of genetic or cognitive origin enter my circle of concern. An intelligent alien? I don't particularly care about its welfare? A superintelligent AI? What's it to me? A transhuman or posthuman descendant of Homo sapiens? I care about such a being's welfare. Call it selfish if you want, since I expect to become one or have my descendants become them.

This is simply a fact about my preferences, and I'm not open to debate on the criteria I use personally. I'm open to discussing it, but please don't go to the trouble if you expect to change my mind.

And if that were true about us, then your opinion or mine concerning the ethics of mind emulation would be utterly irrelevant. Not to mention that it wouldn't be the world of Lena, exactly. The entire point of Lena is that the simulation is very different from reality, in the worse direction.

If we didn't know for a fact that we are/aren't in a simulation, it remains entirely applicable. Besides, my entire point is that Lena isn't an accurate prediction of what the world will look like given its current trajectory.

If continuity of consciousness isn't a big deal then we can forget the assumption that consciousness is tied to specific mind patterns at all. Maybe one second you're self_made_human, and another second you're Katy Perry, and the next second yet is spent in a nascent Boltzmann brain halfway across the observed universe.

That doesn't follow. When I temporarily lose continuity of consciousness, I wake up more or less the same person. I don't even perceive the gap; sleeping is pretty much an IRL time skip. That's because the underlying pattern of embodied cognition is minimally affected in the process.

In what meaningful way can the "same" person be me and then Katy Perry? The word "same" becomes entirely meaningless.

A butterfly can't actually dream of being human.

I'd probably go with number 2 and a bit of 3. I would likely think slightly worse of someone who acts that way, but not to the point I'd say or do much about it.

I think that the majority of our intuitions about the distasteful nature of torturing animals arises from the fact that, in the modern day, the majority of people who do such a thing are socio/psychopaths and hence dangerous to their fellow man.

This is not a universal unchanging truth! You don't have to go very far back in time to find societies and cultures where randomly kicking dogs and torturing cats was no big deal, and great fun for the whole gang. Even today, many small kids will tear wings off flies without being sociopaths or psychopaths. They get trained out of expressing such behavior.

If a person got their kicks out of torturing animals, but didn't demonstrate other reasons for me to be concerned about them, I don't really care.

On a slight tangent, I don't care about animal rights or welfare. The fact that a cutesy little cow had to die to make a steak means nothing to me. I'm still only human, so I feel bad if I see someone mistreat a dog, and might occasionally intervene if my emotions get too strong. That's an emotional response, not an intellectual one, because I think the crime they're committing is equivalent to property damage, and they have the right to treat their own property as they will. This doesn't stop me from loving my own two dogs, or being willing to use severe violence on anyone who'd hurt them. But it's the fact that they're my dogs that makes it so, and I wouldn't donate money to the RSPCA.

That is a far more reasonable take, but once again, I'd say that the most likely alternative is death. I really don't want to be dead!

There are also ways to mitigate the risk. You can self-host your uploads, which I'd certainly do if that were an option. You could have multiple copies running; if there are 10^9 happy, flourishing self_made_humans out there, it would suck to be among the couple dozen being tortured by people who really hate me because of moderation decisions made on an underwater basket weaving community before the Singularity, but that's acceptable to me. I expect that we would have legal and technical safeguards too, such as some form of tamper-protection and fail-deadly mechanisms in place.

Can I guarantee someone won't make a copy of me that gets vile shit done to it? Not at all, I just think there are no better options even given Deep Time. It beats being information-theoretically dead, at which point I guess you just have to pray for a Boltzmann Brain that looks like you to show up.

wouldn’t you care if someone were purposely buying bees only to kill them?

Not in the least. I've heard of worse hobbies.

I have a dim opinion of the Rawlsian veil of ignorance, but even so, there are a million issues with such claims.

If you experience living in reality now (as opposed to remembering it), by induction you can be sure that you will never experience living as an em.

This claim implicitly but load-bearingly assumes that a post-Singularity civilization won't have the ability to create simulations indistinguishable from reality.

Even today, we have no actual rebuttal for the Simulation Hypothesis. You and I could be simulations inside a simulation, but it's a possibility we can't prove or exclude at the moment, so the sensible thing to do is to ignore it and move on with our lives.

Even if you did start out as a Real Human, then I think that with the kind of mind editing in Lena, it would be trivial to make you forget or ignore that fact.

Further, I don't think continuity of consciousness is a big deal, which is why I don't have nightmares about going to take a nap. As far as I'm concerned, my "mind" is a pattern that can be instantiated in just about any form of compute, but at the moment runs on a biological computer. There is no qualitative change in the process of mind uploading, at least a high-fidelity one, be it a destructive scan or one that preserves the original brain.

Sure, but it's unwise to dismiss them.

Sure. And yet I invite you to show me how I'm "dismissing" them. All I've done is point out the competing incentives (regulatory, legal, and ethical) that I expect to solve the problem.

Because the patients have power to just not go to the ones that would. Not to mention take revenge.

Are you familiar with the literature on the principal-agent problem? It's not remotely as simple as "just not go to the ones that would".

I will leave aside the fact that there's no physical law demanding that prospective mind uploads must use a single compute provider, and don't have the option to self-host either, and that there will likely be persons or organizations that can take "revenge" on their behalf.

PETA exists as an organization that takes "revenge" on the behalf of random animals, to set the floor rather low but not zero.

I feel like this makes the case more against you than for you.

I feel like it doesn't, or I wouldn't have used that analogy. Please explain.

You don't find it odd that the singularity has to be accepted as an article of faith for the discussion to continue?

God. Leaving aside such loaded phrases as "article of faith", I think that it's very likely that we have some form of technological Singularity within our nominal life expectancy.

Even @FCfromSSC acknowledges the possibility of mind uploading, and presumably believes that the kind of rapid technological improvement we colloquially call a Singularity is a prerequisite for us living to see it. He even identifies with the potential mind upload. He does, however, believe that this is against his best interests.

My interest is in attempting to demonstrate why I think this is a mistake, or at the least, throwing the baby out with the bathwater. Consider that, from my perspective, an altruistic act.

If you don't think that mind uploads are a possibility, or that we won't live to see them, my interest in debating with you is minimal. What would the point even be? Alas, I'm here, because I suppose I have a sadomasochistic streak and will argue just about anything.

Right, so when emulation's labour will be like horse labour relative to chatGPT, and it will actively cost resources to keep them running, what does that analogy imply about the likely fate of mind-emulations?

Naively? Bad things. Less naively? Everything I've argued for so far.

But consider that it's not just the emulations that will be in the place of horses. If emulations are horses, then good old-fashioned meat-and-bone humans would be closer to a horse with a broken leg.

Being an advocate for outcomes that don't literally kill all humans, I believe in attempting to steer the course of our technologies and laws in a direction that doesn't lead to this.

If the minds can't support themselves economically, the obvious incentive is to pull the plug on them, so you don't have to pay them UBI anymore.

"Incentives" are not the be-all and end-all of matters in life.

The police are incentivized to have high levels of crime to justify their salaries. You don't see them running coaching sessions on bank robbery.

Oncologists have "incentives" to keep you alive and cancer-ridden indefinitely to get that sweet insurance money. I know plenty, and I'm afraid that's not an accurate description of any of them.

Then the incentive becomes: manipulate the emulations to sign away the rights to their investments, and then pull the plug.

The number of cemeteries that dig up their clients and sell them for parts is, to the best of my knowledge, small.

The number of investment firms and banks that snatch the fees of the recently departed to spend on their whims, is, as far as I'm aware, rather limited.

Cloud service providers don't, as a general rule, steal all your data and sell them to your competitors.

The kind of organization that would run mind uploads would likely be a cross between all of the above.

Do you know why millions of people were kept in chattel slavery throughout history? Because there was a good business argument for it. Even the most abusive sheikh in Qatar doesn't bus in dozens of kaffirs for the sole purpose of beating them up for the joy of it. The majority of people who hate you are more than content to end the matter with a bullet in your brain, and not to keep you around to torture indefinitely.

Besides, I'd like you to consider the possibility, however controversial it might sound, that people and systems sometimes do the right thing even when the first-order effects aren't to their "best interests". And perhaps we might have cops and politicians in some form to help even the scales.

I do not grant any claims of "the singularity" happening a single shred of legitimacy, unless it comes with solid supporting evidence.

In that case, I don't see the point of having this discussion at all.

Or are the horses we do keep there for our amusement?

Yes? The population of horses crashed during the Industrial Revolution, and has only recently recovered, driven almost entirely by recreational demand.

Note that I think a technological Singularity has a decent risk of causing me, and everyone else, to end up dead.

There's not much anyone can do if that happens, so my arguments are limited to the scenarios where that's not the case, presumably with some degree of rule of law, personal property rights and so on.

By your lights, it does not seem that there is any particular reason to think that "profit" plays a part here either way; but in any case, there is no direct cost to industrial-scale digital atrocities either. Distributing hell.exe does not take significantly longer or cost significantly more for ten billion instances than it does for one.

You're the one who used Lena to illustrate your point. That story specifically centers around the conceit that there's profit to be made through mass reproduction and enslavement of mind uploads.

In a more general case? Bad things can always happen. It's a question of risks and benefits.

Distributing a million copies of hell.exe might be a negligible expense. Running them? Not at all. I can run a seed box and host a torrent of a video game to thousands of people for a few dollars a month. Running a thousand instances? Much more expensive.

Even most people who hate your guts are content with having you simply dead, instead of tortured indefinitely.

Imagine, if you will, if some people in this future decide other people, maybe a whole class of other people, are bad and should be punished; an unprecedented idea, perhaps, but humor me here. What happens then? Do you believe that humans have an innate aversion to abusing those weaker than themselves? What was the "profit motive" for the Rotherham rape gangs? What was the "profit motive" for the police and government officials who looked the other way?

There is such a thing as over-updating on a given amount of evidence.

You don't live in an environment where you're constantly being tortured and harried. Neither do I. Even the Rotherham cases eventually came to light, and arrests were made. Justice arrived late, but better late than never.

It is that once you are uploaded, you are fundamentally at the mercy of whoever possesses your file, to a degree that no human has ever before experienced. You cannot hide from them, even within your own mind. You cannot escape them, even in death. And the risk of that fate will never, ever go away.

Well, maybe law-enforcement now has the ability to enforce a quadrillion life sentences as punishment for such crimes. Seriously. We do have law enforcement, and I expect that in most future timelines, we'll have some equivalent. Don't upload your mind to parties you don't trust.

I wouldn't call the history of every invention "very little reason".

I guess that's why, after the invention of the hamster wheel, we've got indentured slaves running in them to power our facilities. Enslaving human mind uploads is in a similar ballpark of eminently sensible economic decisions.

How do these emulations get the resources to pay the companies for the service of protection? Presumably they work, no?

Not necessarily. I think you're well aware of my concerns about automation-induced unemployment, with most if not all humans becoming economically unproductive. Mind uploads are unlikely to change that.

What humans might have instead is UBI, or pre-existing investments on which they can survive. Even small sums held before a Singularity could end up worth a fortune, given how red-hot the demand for capital would be. They could spend this on backup copies of themselves, if that weren't a service governments provided out of popular demand.

By getting more clients? If yes, why compete for the limited amount of clients, when you can just copy-paste them? We're already seeing a similar dynamic with meatsack humans and immigration, it strikes me as extremely naive to think it would happen less if we make it easier and cheaper.

So do you see an enormous trade in illegal horses, smuggled in to replace honest local tractors in the fields? I suppose that would be one form of "mule" hopping the borders. No. Because, in both scenarios, they're obsolete, and nothing you can do to make mind uploads cheaper won't also apply to normal AI, which already starts at an advantage.

Slavery ensures profit, torture ensures compliance.

Well, it's an awful shame that we have pretty handy "slaves" already, in the form of ChatGPT and its descendants. Once again, if you have tractors, the market for horse-rustling falls through the bottom.

While a very nice scifi story, there's very little reason to think that reality will pan out that way.

It suffers from the same failure of imagination as Hanson's Age of Em. We don't live in a universe where it looks like it makes economic sense to have mind uploads doing cognitive or physical labor. We've got LLMs, and will likely have other kinds of nonhuman AI. They can be far more finely tuned and optimized than any human upload could be while remaining recognizably human, and cost far less in resources to run. While compute estimates for human brain emulation are all over the place, varying by multiple OOMs, almost all such guesses are far, far larger than a single instance of even the most unwieldy LLM around.

I sincerely doubt that even a stripped down human emulation can run on the same hardware as a SOTA LLM.
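To put rough numbers on that doubt: every figure below is an order-of-magnitude guess, not a measurement. Whole-brain-emulation estimates in the literature span roughly 1e15 to 1e18+ FLOP/s depending on the fidelity assumed, and the LLM serving figure is likewise an assumption:

```python
import math

# All placeholders: emulation estimates vary enormously by fidelity level.
BRAIN_EMULATION_FLOPS = {"low-end estimate": 1e15, "high-end estimate": 1e18}
LLM_SERVING_FLOPS = 5e13  # e.g. ~1e12 FLOP/token at ~50 tokens/s

for label, flops in BRAIN_EMULATION_FLOPS.items():
    gap = math.log10(flops / LLM_SERVING_FLOPS)
    print(f"{label}: ~{gap:.1f} OOM above one LLM inference stream")
```

Even the charitable low end leaves an emulation costing more than an LLM stream; the high end leaves it four-plus OOMs more expensive, which is the economic point.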

If there's no industrial or economic demand for Em slaves, who is the customer for mind-uploading technology?

The answer is obvious: the person being uploaded. You and me. People who don't want to die. This completely flips the market dynamic. We are not the product; we are the clients. The service being sold goes from "cognitive labor" to "secure digital immortality." In this market, companies would compete not on how efficiently they can exploit Ems, but on how robustly they can protect them.

There is no profit motive behind enslaving and torturing them. Without profit, you go from industrial-scale atrocities to bespoke custom nightmares. Which aren't really worth worrying about. You might as well refuse to have children or other descendants, because someone can hypothetically torture them to get back at you. If nobody is making money off enslaving human uploads, then just about nobody but psychopaths will seek to go through the expense of torturing them.

Sounds peachy to me, but maybe I'm just annoyed by the seagulls screeching outside my window at 3 am.

If, after the universe has been mostly converted into computronium, there exist people who want to hug trees, let them. If they were sensible, they'd do it in full-immersion VR, but it doesn't cost much to have solar-system-scale nature preserves for the hippies.

Would you rather be "fully legible" or fully dead? Easy choice as far as I'm concerned.

While I agree with the second paragraph, the first one has me scratching my head. Why would suffering have anything to do with the "unlearning gradient of an ML model" and, if so, how does an atom have anything to do with ML?

The post went too far, even for LessWrong's open-minded standards. The comments there are 90% people tearing into it.

Thank you!

Ketamine and LSD/psilocybin are very different in terms of pharmacology, even if the net effect on depression is the same. The former acts primarily by modulating NMDA receptors, the latter 5-HT2A receptors.

Subjectively, a k-hole is light years apart from psychedelics.

Both then tend to increase neuronal plasticity, via different mechanisms of action.

It's weird that we found three recreational drugs from different families doing this, and no non-recreational ones.

Well, ECT and transcranial magnetic stimulation use no drugs at all (barring incidental anesthesia and muscle relaxants in the former). They also, after a few sessions, relieve depression for months or years. Once again, the terminal effect is believed to be increased synaptogenesis/plasticity. ECT has been around for over 80 years.

There's nothing particularly weird about it. The regulatory environment just became somewhat more friendly towards exploring less conventional therapies when the anecdotal evidence became strong enough.

Personally, I couldn't care less how "weird" this seems, as long as the treatments work. The human body is weird and unintuitive anyway.