self_made_human
C'est la vie
I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken for granted for someone in the profession of saving lives.
At any rate, I intend to live forever or die trying. See you at Heat Death!
Friends:
I tried stuffing my friends into this textbox and it really didn't work out.
You know, the UK gets plenty of flak for its groveling attitude towards anyone with a slightly different shade of skin and the most threadbare justification for seeking reparations for past injustice, but have you seen the other Commonwealth states? Australia and NZ are so cucked it beggars belief.
They all seem to cling to a form of DEI that's about a decade out of date, at least compared to the US, and even there it was never as strong or all-encompassing.
What even drives people to such abject and performative self-flagellation?
I could be wrong, but my understanding is that the majority of white-passing "aboriginals" simply have negligible, sub-10% Aboriginal ancestry. At that point, what's surprising about the fact that they look white?
I've been shaking my head at that particular debacle, it seems that the UK is just about the only country on the planet that takes utterly toothless "international law" seriously. They could have told the Mauritian government to shove it, what would they have done, cancel discount holiday vouchers and row over in a canoe?
I don't really have a horse in this race, but I still find it all too tiresome.
If you're a Windows user seeking a more power-user experience, I strongly endorse Microsoft PowerToys: a free, open-source set of utilities developed by Microsoft. Current features:
- Advanced Paste
- Always on Top
- PowerToys Awake
- Color Picker
- Command Not Found
- Command Palette
- Crop And Lock
- Environment Variables
- FancyZones
- File Explorer Add-ons
- File Locksmith
- Hosts File Editor
- Image Resizer
- Keyboard Manager
- Mouse utilities
- Mouse Without Borders
- New+
- Paste as Plain Text
- Peek
- PowerRename
- PowerToys Run
- Quick Accent
- Registry Preview
- Screen Ruler
- Shortcut Guide
- Text Extractor
- Workspaces
- ZoomIt
I personally get some mileage out of FancyZones, as it's a big upgrade over Windows' default window snapping. With a 4K screen, it's a shame not to use the real estate to its fullest potential. I can also see Screen Ruler being useful in Arma Reforger, where you need to measure distances on a map: a cheeky mortar calculator right there.
Well, they've gotten better and better over time. I've been using LLMs since before they were cool, and we've probably seen a 1-2 OOM reduction in hallucination rates. The bigger they get, the lower the rate. It's not like humans are immune to mistakes, misremembering, or even plain making shit up.
In fact, some recent studies (on now-outdated models like Claude 3.6) found zero hallucinations in tasks like medical transcription and summarization.
It's a solvable problem, be it through human oversight or the use of other parallel models to check results.
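A minimal sketch of what that parallel-check setup could look like; `ask_model`, the model names, and the verdict format are all hypothetical placeholders, not any real API:

```python
# Hypothetical sketch: cross-check one model's output with a second model.
# `ask_model` is a placeholder, not a real library call; wire in your own client.
def ask_model(model: str, prompt: str) -> str:
    raise NotImplementedError("plug in your provider's client here")

def checked_answer(prompt: str) -> tuple[str, bool]:
    draft = ask_model("generator", prompt)
    verdict = ask_model(
        "verifier",
        f"Task: {prompt}\nDraft answer: {draft}\n"
        "Reply SUPPORTED or UNSUPPORTED, judging factual accuracy only.",
    )
    # Flag anything the verifier won't vouch for, for human review
    return draft, verdict.strip().upper().startswith("SUPPORTED")
```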
A human, more or less; there are caveats involved. A brain-dead or severely cognitively impaired (without hope of improvement) human loses all or most of their moral worth as far as I'm concerned. Not all humans are made alike.
This doesn't mean that entities that are more "sophisticated", biologically or otherwise, but aren't human in terms of genetic or cognitive origin enter my circle of concern. An intelligent alien? I don't particularly care about its welfare. A superintelligent AI? What's it to me? A transhuman or posthuman descendant of Homo sapiens? I care about such a being's welfare. Call it selfish if you want, since I expect to become one or have my descendants become them.
This is simply a fact about my preferences, and I'm not open to debate on the criteria I use personally. I'm open to discussing it, but please don't go to the trouble if you expect to change my mind.
The post went too far, even for LessWrong's open-minded standards. The comments there are 90% people tearing into it.
Substack has a feature where the creator can view the original sources of any traffic to a particular blog.
So far, it had mostly been a curiosity. I was used to seeing Reddit or "Email", those being places where I'd personally shared my writing. I'm not sure what links posted on The Motte show up as, but presumably "Direct", since I strip the usual tracking IDs out of courtesy.
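(Stripping trackers is trivial, for what it's worth. A minimal sketch, with the tracker list being my own guess at the usual offenders:)

```python
# Drop the usual utm_* and similar tracking parameters before sharing a link.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKERS = {"utm_source", "utm_medium", "utm_campaign", "utm_term",
            "utm_content", "fbclid", "gclid", "ref"}

def strip_tracking(url: str) -> str:
    parts = urlsplit(url)
    clean = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKERS]
    return urlunsplit(parts._replace(query=urlencode(clean)))

print(strip_tracking("https://example.substack.com/p/post?utm_source=x&ref=y"))
# -> https://example.substack.com/p/post
```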
My latest post did numbers, at least by baby Substack standards. I'd seen a decent amount of traffic from X, and the site's abysmal search did find someone of decent repute shouting it out. A few large Substack authors reblogged it to boot. Someone big even wants to interview me, though my desire for online pseudonymity might make that a no-go.
But then, when I checked back later today, I was immensely surprised to see Gwern in the list. I mean, he's a big name, but surely he doesn't have an independent listing? I dug in, and to my immense pride, I saw that my post had been deemed worthy of ending up as a link he'd rounded up on his personal site.
I'm very chuffed, but I find myself chagrined that the number of people I know IRL whom I can boast about this to rounds up to zero. The only way I could be more pleased is if it caught Scott's eye, but I've managed to achieve that once before, so it's off the bucket list.
Motherfucking Gwern-senpai noticed me. I knew that obsessively tracking down links and literature reviews as well as digging into neurophysiology would pay off. Now I feel awful about not adding a dozen footnotes and citations :(
Anyone else ever catch the eye of their heroes?
Despite being an interesting and well-written essay, I have absolutely no sympathy for the author or her views.
All in all, the average woman is psychologically abused in the dating market.
Right. As if the average man is doing so hot.
Dating apps suck for the majority of people. I'd say they'd suck less for the average woman, if they were capable of setting up boundaries.
Out of boredom, I'm using Gemini to make a mortar calculator app that takes in screenshots/grid coordinates and outputs firing solutions. Should work in theory!
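The core math is simple enough to sketch by hand. A minimal version, assuming a made-up range table and linear interpolation; the numbers are illustrative, not actual Arma Reforger data:

```python
# Sketch of the core firing-solution math: from two map grid coordinates
# (in metres), compute range and bearing, then interpolate barrel elevation
# from a range table. The table values here are invented, not game data.
import math

RANGE_TABLE = [(100, 1455), (400, 1213), (800, 1023), (1200, 871), (1600, 722)]

def firing_solution(mortar, target):
    dx, dy = target[0] - mortar[0], target[1] - mortar[1]
    rng = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) % 360  # 0 deg = grid north
    # Linearly interpolate between the two bracketing table rows
    for (r0, e0), (r1, e1) in zip(RANGE_TABLE, RANGE_TABLE[1:]):
        if r0 <= rng <= r1:
            elev = e0 + (e1 - e0) * (rng - r0) / (r1 - r0)
            return rng, bearing, elev
    raise ValueError("target out of range")

rng, bearing, elev = firing_solution((1000, 1000), (1730, 1880))
print(f"range {rng:.0f} m, bearing {bearing:.1f} deg, elevation {elev:.0f} mils")
```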
I expect that when people say that, they're implicitly stating a strong belief that the problems are both solvable and being solved. Not that this necessarily means such claims are true.
We know this because we can, in fact, point to the gears in CPUs and RAM and do gear things with them, and this is in fact the best, most efficient way to manipulate and interact with them. This is not the case for minds: every workable method we have for manipulating and interacting with human minds operates off the assumption that the human mind is non-deterministic, and every attempt to develop ways to manipulate and interact with minds deterministically has utterly failed. There is no mind-equivalent of a programming language, a compiler, a BIOS, a chip die, etc.
The computer analogy is doing a lot of heavy lifting here, but it's carrying more weight than it can bear. Yes, if you take a soldering iron to your CPU, you'll break it. But the reason we know computers are deterministic isn't because we can point to individual transistors and say "this one controls the mouse cursor." It's because we built them from the ground up with deterministic principles, and we can trace the logical flow from input to output through layers of abstraction.
Compare that to any similarly tangled yet mechanistic naturally occurring phenomenon, and you can see that just knowing the fundamental or even statistical laws governing a complex process doesn't give us the ability to make surgical changes. We can predict the weather several days out with significant accuracy, yet our ability to change it to our benefit is limited.
The brain is not a tool we built. The brain is a three-pound lump of evolved, self-organizing, wet, squishy, recursively layered technology that we woke up inside of. We are not engineers with a schematic, I'd say we're closer to archaeologists who have discovered an alien supercomputer of terrifying complexity, with no instruction manual and no "off" switch.
The universe, biology, natural selection: none of it was under any selection pressure to make the brain legible to itself. Look at our attempts at evolutionary algorithms, and see how often the outputs appear chaotic, yet still work.
Consider even LLMs. The basic units, neurons? Not a big deal; simple linear algebra. Even the attention mechanism isn't too complicated. Yet run the whole ensemble through enormous amounts of data, and we find ourselves consistently befuddled by how the fuck the whole thing works, even though we understand it perfectly well on a micro level. Or consider the inevitable buildup of spaghetti code, turning something as deterministic as code (let's not get into race conditions and all that, but in general) into something headache-inducing at best.
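To make the "simple linear algebra" point concrete, here's a single attention head in plain NumPy, with random weights standing in for learned ones:

```python
# Minimal single-head self-attention: a handful of matrix multiplications.
import numpy as np

def attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # linear projections
    scores = Q @ K.T / np.sqrt(K.shape[-1])     # scaled dot products
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)   # softmax over keys
    return weights @ V                          # weighted mix of values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                     # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(attention(X, Wq, Wk, Wv).shape)           # (5, 8)
```

That's the whole micro level. The befuddlement only starts once you stack billions of these operations and train them on a trillion tokens.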
And LLMs were built by humans. To be legible to humans. Neuroscience has a far more uphill struggle.
And yet we've made considerable progress. We're well past the sheer crudeness of lobotomies or hits on the head.
fMRI studies can predict with reasonable accuracy which of several choices a person will make seconds before they're consciously aware of the decision. We've got functional BCIs. We can interpret dreams, we can take a literal snapshot of your mind's eye. We can use deep brain stimulation or optogenetics to flip individual neurons or neural circuits with reproducible and consistent effects.
As for "determinism of the gaps". What?
Two hundred years ago, the "gap" was the entire brain. The mind was a total mystery. Now, we can point to specific neural circuits involved in decision-making, emotion, and perception. We've moved from "an imbalance of humors causes melancholy" to "stimulating the subgenual cingulate can alleviate depressive symptoms." We've gone from believing seizures were demonic possession to understanding them as uncontrolled electrical storms in the cortex. The gaps where a non-material explanation can hide are shrinking daily. The vector of scientific progress seems to be pointing firmly in one direction. At this point, there's little but wishful thinking behind vain hopes that just maybe, mechanistic interpretation might fail on the next rung of the ladder.
I am frankly flabbergasted that anyone could come away with the opposite takeaway. It's akin to claiming that progress from Newton's laws to the Standard Model has somehow left us in more ontological and epistemic confusion. It has the same chutzpah as a homeopath telling me that modern medicine is a failure because we were once wrong about the aetiology of gastric ulcers.
This is not the case for minds: every workable method we have for manipulating and interacting with human minds operates off the assumption that the human mind is non-deterministic, and every attempt to develop ways to manipulate and interact with minds deterministically has utterly failed.
Citation needed? I mean, what's so non-deterministic about the advances I mentioned? What exactly do you think are the "non-deterministic" techniques that work?
I'd probably go with number 2 and a bit of 3. I would likely think slightly worse of someone who acts that way, but not to the point I'd say or do much about it.
I think that the majority of our intuitions about the distasteful nature of torturing animals arises from the fact that, in the modern day, the majority of people who do such a thing are socio/psychopaths and hence dangerous to their fellow man.
This is not a universal unchanging truth! You don't have to go very far back in time to find societies and cultures where randomly kicking dogs and torturing cats was no big deal, and great fun for the whole gang. Even today, many small kids will tear wings off flies without being sociopaths or psychopaths. They get trained out of expressing such behavior.
If a person gets their kicks out of torturing animals, but doesn't give me other reasons to be concerned about them, I don't really care.
On a slight tangent, I don't care about animal rights or welfare. The fact that a cutesy little cow had to die to make a steak means nothing to me. I'm still only human, so I feel bad if I see someone mistreat a dog, and might occasionally intervene if my emotions get too strong. That's an emotional response, not an intellectual one, because I think the crime they're committing is equivalent to property damage, and they have the right to treat their own property as they will. This doesn't stop me from loving my own two dogs, and being willing to use severe violence on anyone who'd hurt them. But it's the fact that they're my dogs that makes it so, and I wouldn't donate money to the RSPCA.
That is a far more reasonable take, but once again, I'd say that the most likely alternative is death. I really don't want to be dead!
There are also ways to mitigate the risk. You can self-host your uploads, which I'd certainly do if that were an option. You could have multiple copies running: if there are 10^9 happy, flourishing self_made_humans out there, it would suck to be among the couple dozen being tortured by people who really hate me because of moderation decisions made on an underwater basket weaving community before the Singularity, but that's acceptable to me. I expect we'd have legal and technical safeguards too, such as some form of tamper-protection and fail-deadly mechanisms.
Can I guarantee someone won't make a copy of me that gets vile shit done to it? Not at all, I just think there are no better options even given Deep Time. It beats being information-theoretically dead, at which point I guess you just have to pray for a Boltzmann Brain that looks like you to show up.
While a very nice scifi story, there's very little reason to think that reality will pan out that way.
It suffers from the same failure of imagination as Hanson's Age of Em. We don't live in a universe where it looks like it makes economic sense to have mind uploads doing cognitive or physical labor. We've got LLMs, and will likely have other kinds of nonhuman AI. They can be far more finely tuned and optimized than any human upload (at least one that stays recognizably human), while costing far less in resources to run. While compute estimates for human brain emulation are all over the place, varying by multiple OOMs, almost all such guesses are far, far larger than the cost of a single instance of even the most unwieldy LLM around.
I sincerely doubt that even a stripped down human emulation can run on the same hardware as a SOTA LLM.
If there's no industrial or economic demand for Em slaves, who is the customer for mind-uploading technology?
The answer is obvious: the person being uploaded. You and me. People who don't want to die. This completely flips the market dynamic. We are not the product; we are the clients. The service being sold goes from "cognitive labor" to "secure digital immortality." In this market, companies would compete not on how efficiently they can exploit Ems, but on how robustly they can protect them.
There is no profit motive behind enslaving and torturing them. Without profit, you go from industrial-scale atrocities to bespoke custom nightmares. Which aren't really worth worrying about. You might as well refuse to have children or other descendants, because someone can hypothetically torture them to get back at you. If nobody is making money off enslaving human uploads, then just about nobody but psychopaths will seek to go through the expense of torturing them.
This is all hopelessly confounded by the fact that, on the author's own admission, they were doing significant amounts of ketamine at the same time.
wouldn’t you care if someone were purposely buying bees only to kill them?
Not in the least. I've heard of worse hobbies.
Sounds peachy to me, but maybe I'm just annoyed by the seagulls screeching outside my window at 3 am.
If, after the universe has been mostly converted into computronium, there exist people who want to hug trees, let them. If they were sensible, they'd do it in full-immersion VR, but it doesn't cost much to keep solar-system-scale nature preserves for the hippies.
While I agree with the second paragraph, the first one has me scratching my head. Why would suffering have anything to do with the "unlearning gradient of an ML model" and, if so, how does an atom have anything to do with ML?
Note that I think a technological Singularity has a decent risk of causing me, and everyone else, to end up dead.
There's not much anyone can do if that happens, so my arguments are limited to the scenarios where that's not the case, presumably with some degree of rule of law, personal property rights and so on.
By your lights, it does not seem that there is any particular reason to think that "profit" plays a part here either way; but in any case, there is no direct cost to industrial-scale digital atrocities either. Distributing hell.exe does not take significantly longer or cost significantly more for ten billion instances than it does for one.
You're the one who used Lena to illustrate your point. That story specifically centers around the conceit that there's profit to be made through mass reproduction and enslavement of mind uploads.
In a more general case? Bad things can always happen. It's a question of risks and benefits.
Distributing a million copies of hell.exe might be a negligible expense. Running them? Not at all. I can run a seedbox and serve a torrent of a video game to thousands of people for a few dollars a month. Running a thousand instances of that game myself? Much more expensive.
Even most people who hate your guts are content with having you simply dead, instead of tortured indefinitely.
Imagine, if you will, if some people in this future decide other people, maybe a whole class of other people, are bad and should be punished; an unprecedented idea, perhaps, but humor me here. What happens then? Do you believe that humans have an innate aversion to abusing those weaker than themselves? What was the "profit motive" for the Rotherham rape gangs? What was the "profit motive" for the police and government officials who looked the other way?
There is such a thing as over-updating on a given amount of evidence.
You don't live in an environment where you're constantly being tortured and harried. Neither do I. Even the Rotherham cases eventually came to light, and arrests were made. Justice better late than never.
It is that once you are uploaded, you are fundamentally at the mercy of whoever possesses your file, to a degree that no human has ever before experienced. You cannot hide from them, even within your own mind. You cannot escape them, even in death. And the risk of that fate will never, ever go away.
Well, maybe law-enforcement now has the ability to enforce a quadrillion life sentences as punishment for such crimes. Seriously. We do have law enforcement, and I expect that in most future timelines, we'll have some equivalent. Don't upload your mind to parties you don't trust.
Apple using homomorphic encryption for image classification on the cloud:
https://boehs.org/node/homomorphic-encryption
Homomorphically Encrypting CRDTs:
https://jakelazaroff.com/words/homomorphically-encrypted-crdts/
That's for homomorphic encryption in particular, which, AFAIK, is the absolute peak of security. Then you've got more standard setups like VMs on the cloud, and prevention of data leakage between unrelated customers on the same hardware, in the manner that AWS/Azure handle things.
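For a flavor of the underlying trick, here's a toy Paillier cryptosystem, the classic additively homomorphic scheme (not the lattice-based FHE Apple actually uses): whoever holds the ciphertexts can add the plaintexts without ever seeing them. Tiny primes, for demonstration only:

```python
# Toy Paillier cryptosystem -- additively homomorphic encryption.
# Illustration only: real deployments use ~2048-bit moduli, and Apple's
# system uses lattice-based FHE, a different construction entirely.
import math, random

def keygen(p=1009, q=1013):
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)            # valid because we pick g = n + 1
    return n, (lam, mu, n)

def encrypt(n, m):
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:      # r must be coprime to n
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n  # L(x) = (x - 1) / n, then scale by mu

n, priv = keygen()
c1, c2 = encrypt(n, 42), encrypt(n, 58)
c_sum = (c1 * c2) % (n * n)         # multiplying ciphertexts adds plaintexts
assert decrypt(priv, c_sum) == 100  # the ciphertext holder never saw 42 or 58
```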
I wouldn't call the history of every invention to be "very little reason".
I guess that's why, after the invention of the hamster wheel, we've got indentured slaves running in them to power our facilities. Enslaving human mind uploads is in a similar ballpark of eminently sensible economic decisions.
How do these emulations get the resources to pay the companies for the service of protection? Presumably they work, no?
Not necessarily. I think you're well aware of my concerns about automation-induced unemployment, with most if not all humans becoming economically unproductive. Mind uploads are unlikely to change that.
What humans might have instead is UBI, or pre-existing investments on which they can survive. Even small sums held before a Singularity could end up worth a fortune, given how red-hot the demand for capital would be. They could spend this on backup copies of themselves, if that weren't a service governments already provided out of popular demand.
By getting more clients? If yes, why compete for the limited amount of clients, when you can just copy-paste them? We're already seeing a similar dynamic with meatsack humans and immigration, it strikes me as extremely naive to think it would happen less if we make it easier and cheaper.
Do you happen to see an enormous trade in illegal horses, smuggled in to replace honest local tractors in the fields? I suppose that would be one form of "mule" hopping the borders. No, because in both scenarios they're obsolete, and little you can do to make mind uploads cheaper won't also apply to normal AI, which starts out at an advantage.
Slavery ensures profit, torture ensures compliance.
Well, it's an awful shame that we have pretty handy "slaves" already, in the form of ChatGPT and its descendants. Once again, if you have tractors, the bottom falls out of the market for horse-rustling.
I shared my latest post on the Slate Star Codex subreddit, and Scott showed up in the comments to complain about how I'd characterised him in the article. I dutifully apologised and rephrased the offending passage. In the list of things that made me feel ashamed of myself this year, this was in the top five.
My condolences. If that happened to me, I'd be driven to drink.
Kind of like a less overtly titillating Aella, and, in my view, far more physically attractive.
(Presumably) white men really not beating the allegations!
(She is pretty, but I found this particular photo profoundly disturbing. To be fair, now that I've actually opened it, it's intentionally AI-generated.)
I have a dim opinion of the Rawlsian veil of ignorance, but even so, there are a million issues with such claims.
If you experience living in reality now (as opposed to remembering it), by induction you can be sure that you will never experience living as an em.
This claim implicitly but load-bearingly assumes that a post-Singularity civilization won't have the ability to create simulations indistinguishable from reality.
Even today, we have no actual rebuttal for the Simulation Hypothesis. You and I could be simulations inside a simulation, but it's a possibility we can't prove or exclude at the moment, so the sensible thing to do is to ignore it and move on with our lives.
Even if you did start out as a Real Human, then I think that with the kind of mind editing in Lena, it would be trivial to make you forget or ignore that fact.
Further, I don't think continuity of consciousness is a big deal, which is why I don't have nightmares about taking a nap. As far as I'm concerned, my "mind" is a pattern that can be instantiated in just about any form of compute, but at the moment runs on a biological computer. There is no qualitative change in the process of mind uploading, at least a high-fidelity one, whether it's a destructive scan or one that preserves the original brain.
Man, not every movement that is somewhat stupid is a "psy-op". I remember a non-negligible number of bleeding hearts back in India complaining about fireworks because they scared dogs, while the general populace didn't give a shit. Neither did I, both because my dogs could snooze through a nuclear exchange, and because I really didn't care.
Even my ex (a bleeding-heart liberal by any standard) was one of them, because her poorly trained, nippy little anklebiter was scared shitless.
It's obvious to me that a certain fraction of people will have an innate proclivity towards certain stances, might be personally sensitive to loud noises, or might live in places where fireworks get out of hand. And that eventually, they might start grassroots or coordinated complaints about it.
Not every silly worldview is a psy-op. You're diluting the word into uselessness.