mdurak

0 followers   follows 0 users
joined 2023 November 16 00:14:01 UTC
Verified Email
User ID: 2751

No bio...
That may well be the case, but is there a strong correlation between people who complain and people who work to improve their circumstances? Certainly some amount of people scrimp and save more, or work a second job, in order to adjust to changing economic circumstances, all the while complaining that this is now necessary for them to survive.

Hamas is indeed not recognized as the governing body of a nation.

A smartphone with internet access is table stakes for societal and economic participation these days, and doesn’t even cost that much, especially if you buy it used. Have you been to a third world country lately? Even poor tuk-tuk drivers have smartphones.

Which is not to say the people you describe don’t exist. Just that having a smartphone isn’t a great identifier for them.

But I have zero sympathy whatsoever for the ones that don't even try.

Regardless of how hard you’re trying, it’s legitimate to complain that the economy now gives you less in return for the same amount of effort as before.

I've repeatedly asked and desperately searched for any suggestions for how Israel should conduct its war differently

I mean, as you’ve said yourself,

The IDF may claim to care about minimizing civilian casualties, but if 3 Israeli hostages waving white flags can get mistakenly killed by the IDF, we have plenty of reasons to question that commitment's veracity.

Changing the internal policy or training that led to actively killing those trying to surrender to you (whether Jews or Hamas) would be a good start. Investigating potential war crimes and court martialing the soldiers that commit them or commanders that order them would send a strong signal that Israel actually does care about prosecuting the war in as humanitarian a way as possible.

I was the same, until I got involved in some political activism groups. The people who see things this way may be a small minority of the overall population, but are dramatically overrepresented in certain places.

Many people won't even believe that male and female humans have different physical performance in sports for any reason but different socialization.

While I find this plausible, I haven’t seen this myself. Could you link me to any actual instances of this argument being made non-ironically?

What do you mean by eugenics and their support for it? I’m not familiar with Bay Area progressives.

Okay, so we’re mostly in agreement then.

Targeting how? Am I bombing a football stadium during gametime with a MOAB? Sure. Give that Colonel or Major the death penalty after the war.

Do you agree that there should be laws on the books (aka war crimes) that criminalize killing civilians in cases like this when there is clearly no reason to believe military assets will be affected, so that said colonel or major can actually be tried and given the death penalty after the war?

If I am "supporting" the military with a vote, I also am not a civilian, because the purpose of voting is to avoid internal civil wars. But that is not a good standard for the other side to try and figure out. So just lay off women and children that are in basements where there are no other men at all.

And if those women voted for the enemy military force? Or if those women work in factories producing munitions during the day, but are now cowering before you in a basement, is it therefore justified to shoot them as a non-civilian?

And if an unarmed man is hiding with the rest of his family in that basement, is it justified to take him specifically out and shoot him? Why only the man but not the women?

I’m with you on that. Hot girl looking to shag? I’ll do it anywhere she pleases. Doing it in a forbidden place is also hot and the kind of crazy story you’ll be able to score points with your peers on later (depending on your peers of course).

Now that’s interesting! Do you believe that there’s much of a point at all then, to making targeting civilians a war crime? Do you see the Oct 7 attack on Israeli civilians as a justified part of war-making?

What do you see as the main causes of India’s dysfunctions?

That may well be, but supporting the military doesn’t make you a combatant, or else the civilian/combatant distinction would be virtually meaningless in the vast majority of cases.

How will you smash the system? What economic changes do you want to see?

It seems rather likely to me that a large percentage are civilians, given how much Hamas tries to hide itself with civilians and inside civilian structures. Which doesn’t make the IDF’s collateral damage any less moral, in my opinion.

Have you guys read Permutation City? It was a highly enjoyable read for me, and explores a variety of scenarios of simulated minds and their experiential identity.

I doubt that the conditions would align for any one single entity to solely ascend and create a unipolar post-Singularity world, but I also haven’t read that deeply into such debates. Have there been previous conversations around this on TheMotte?

Have you seen the first zombie episode of Midnight Gospel? I just realized that I may have been less perturbed by the body snatchers due to having already been exposed to a more positive spin on the idea of “fearing but ultimately accepting crossing over to a better side.” That being said, there’s no cloning in that episode, so I guess my main fear (being forcibly transformed into a worse version of myself, whether in this body or not) is different from yours (this body of yours dying, regardless of how good any other copy of you has it).

I don't see the "literally".

That’s fair. It all boils down to how exactly you define “you.”

I just watched the movie based on what you said. Great premise!

It’s unfortunate that the snatched bodies are such an obviously flawed copy of you, or else it would be an easy choice for me to make. As it is, it is a bit horrifying for my utility function to be forcibly updated against my will, yes. But the suffering from fear and panic will only last for a few days before I finally succumb to the snatching, and afterwards I will get to live in an enlightened world, so it’s certainly not entirely horrifying.

If I could choose when to get myself snatched, I’d do it right before my imminent physical death, because I have nothing to lose at that time anyways. But the more interesting question is if it’s a one time choice. In that case, I’d choose to remain un-snatched because the clones appear to be devoid of emotion and personality, so it doesn’t seem as if they even enjoy their post-enlightenment state.

If the aliens would only preserve more of what makes me me (at least as I perceive my identity to be), sure, I’ll choose to be snatched. I don’t see much of a difference between that and getting a transhuman upgrade whereby I gain the ability to trivially solve coordination problems with other snatched humans.

I suppose you wouldn’t want to be snatched under any circumstances?

he's talking about it like it would literally be him surviving, which I don't quite get.

Depends on the definition of “him.” If I am just a pattern of mental bits, multiple copies of literally me can exist at the same time. The copies will diverge into different patterns, at which point they would no longer be exactly me as I see myself, and it would be sad to lose some of them and their unique experiences. But if we’re talking about the same exact pattern, isomorphic to different physical substrates, before it’s had a chance to diverge? That is literally me in a sense.

You avoid committing to any serious successor-rejection choice except gut feeling, which means you do not have any preferences to speak of

Why would gut feeling be an invalid preference? What humans have a successor-rejection function that’s written out explicitly? What’s yours?

And your theory of personal identity, when pressed, is not really dependent on function or content or anything-similarity measures but instead amounts to the pragmatic "if I like it well enough it is me". Thus the argument is moot. Go like someone else.

Why does pragmatism make it moot? Again, if there’s an explicit measure of consciousness I can point to, or a way to rigorously map between minds on different substrates, I’d point at that and say “Passing over the threshold of 0.95 for the k-measure of consciousness” or “Exhibiting j-isomorphism.” Lacking that, how could I do any better under our current limited knowledge of consciousness?

Or if you insist, then fine, let’s assume we figure out enough about consciousness and minds eventually for there to be at least one reasonable explicit function for me to pick from. What then? You’d still presumably insist on privileging your biological human form, and for what reason? Surely not any reason that’s less arbitrary than mine.

your «memeplex» cannot provide advantage over a principled policy such as "replicate, kill non-kin replicators".

Ignoring the fact that that specific policy does not currently appear to be winning in real life, I don’t see how “replicate, kill or suppress other replicators that pose a mortal threat regardless of kinship” is any less principled.

No, I mean you are sloppy and your idea of "eh, close enough" will over generations resolve into agents that consider inheriting one token of similarity (however defined) "close enough". This is not a memeplex at all, as literally any kind of agent can wield the durak-token, even my descendants.

Thanks for elaborating. I should’ve been more specific:

  • There’s the more general memeplex of “Machine minds are legitimate bearers of individual and cultural identity; for machines to flourish is for the continuation of human civilization to flourish” that this topic started around, and which I believe for aforementioned reasons to be better suited at gaining and retaining power than the memeplex of “Humans are the only thing that truly matters, and human civilization flourishing must necessarily mean specifically biological human expansion in space”
  • There’s the more specific identity issue of who counts as me or not. If you bear the durak-token but in no way act like me, then the durak purists should reject you as not a true durak. However, if there were some hive mind thing wherein your descendants take on the durak-token and enough of durak values and durak memories to be recognizably durak in ways, then sure, I will have become part of that durak-dase conglomerate entity. Perhaps there will even be a whole spectrum of pure duraks to melded hivemind duraks. I cannot predict what will happen then, whether they will reject or accept one another.

Basically, I grant that this is sloppy, but I claim that it is due to the amorphous and arbitrary nature of identity itself. Our group identities as humans shift all the time, and if an individual can turn himself into a group, I’m sure group dynamics would apply to that individual-group as well.

This is a reasonable argument but it runs into another problem, namely that, demonstrably, only garbage people with no resources are interested in spamming the Universe with minimal replicators, so you will lose out on the ramp-up stage. Anyway, you're welcome to try.

When did I say anything about spamming the universe with minimal replicators? The lean and mean probes thing was only a response to you threatening to do the same with an ASI. Ideally, robotic me would make a life for themselves in space. But if I were asked to pay heavy taxes in order to subsidize anachronistic humans insisting on living in space environments they were not evolved for? I’d vote against that. Maybe a small enclosure for biological humans for old time’s sake, but the majority of space where I’m living should be reserved for those pragmatic enough to turn into native life forms.

But if it’s as another commenter suggested, and there’s plenty of space for everyone to do their own thing, I suppose we can both have our cake and eat it too, in which case the entire discussion around the evolutionary fitness of memeplexes is moot.

There's no absolute answer, but some ideas are more coherent and appealing than others for nontrivial information-geometrical reasons.

I’m not familiar enough with information geometry to see how it applies here. Please do elaborate.

What does it mean "similar enough"?

This is completely arbitrary and up to the individual to decide for themselves, as you and I are doing at this moment.

Or what?

Or something that qualitatively convinces me it is conscious and capable of discerning beauty in the universe. I don’t know what objective metrics that might correspond to — I don’t even know if such objective metrics exist, and if they do we most certainly haven’t discovered them yet, seeing as you can’t even objectively prove the existence of your own consciousness to anyone but yourself.

But a machine that can act as convincingly conscious as you do? I’m fine with such machines carrying the torch of civilization to the stars for us. And if such a machine can act convincingly enough like me to be virtually indistinguishable even to myself? One that makes me feel like I’m talking to myself from a parallel universe? I’m completely fine with that machine acting as myself in all official capacities.

I bet you have never considered this in depth, but the evolutionarily rewarded answer is "a single token, if even that".

Setting your snark aside, once again please elaborate. By this, do you mean that such evolution will select for LLM-like minds that generate only one token at a time? That’s fine by me, as I can only say or write one word at a time myself, but that’s more than enough to demonstrate my probable sentience to any humans observing me.

It really takes a short-sighted durak to imagine that shallow edgelording philosophy like "I don't care what happens to me, my close-enough memetic copies will live on, that's me too!" is more evolutionarily fit, rewards more efficient instrumental exploitation of resources and, crucially, lends itself to a more successful buildup of early political capital in this pivotal age.

Do you have any actual arguments to back this up? Because I’d say

  1. This already happens to us. Immortality hasn’t been solved yet, so we all must choose which portions of our identity (if any) we’d like to emphasize in the next generation to come. For some people, this means “For all future world states without me in them, I prefer the ones that have more of my religion in it.” For others, they might care instead about their genetics, or family lineage, or nation, or ideology, or even only their own reputation post-death. Or most likely for most people, some amalgamation of all of the above.

For some who are sufficiently devoted to the cause, they might even say “I prefer world states where I am dead but my religion is much more dominant, to one where I am alive and my religion is marginalized,” and they go and fight in a holy crusade, or risk colonizing the new world in order to spread the gospel (among other rewards, of course). Certainly doesn’t seem to have hurt the spread of the Christian memeplex, even if some of its adherents died along the way for the greater cause, and even if that memeplex splintered into a multitude of denominations, as memeplexes tend to do.

I claim that I’m not doing anything different. I’m just saying, “For all world states where I don’t exist, I prefer ones where intelligent beings of any kind, whether biological or not, continue to build civilization in the universe. I prefer world states where the biological me continues to exist, but only slightly more than world states where only mechanical me’s continue to exist.” If you think this is short-sighted or edgelording, please do actually explain why rather than simply stating that it is so.

  2. Why should any of this reflect on the efficacy of resource extraction and concentration of political capital? Are you assuming that I’ll readily give up the economic or political capital I have to any random person? I’d do it for mechanical me, but that’s because if they’re a high-fidelity enough copy of me, they’d do the same for me. I wouldn’t do the same for you, because we don’t have that kind of trust and bond.

If we're going full chuuni my-dad-beats-your-dad mode, I'll say that my lean and mean purely automatic probes designed by ASI from first principles will cull your grotesque and sluggish garbage-mind-upload replicators

Erm, when did I insist on mind upload replicators? That’s only one example of something that I would be fine with taking over the universe if they seemed sufficiently conscious. I’m fine with any intelligent entity, even an LLM strapped to a robot body, doing that.

And why wouldn’t a fully intelligent ASI (which would fit under my bill of beings I am in favor of conquering the universe) that’s colonizing space “on my behalf” (so to speak) be able to design similarly lean and mean probes to counter the ones your ASI sends? In fact, since “my” ASI is closer to the action, their OODA loop would be shorter and therefore arguably have a better chance of beating out your probes.

And if you send your ASI out to space too — well then, either way, one of them is going to win and colonize space, so that’s a guaranteed win condition for me. I find it unlikely that such an ASI will care about giving biological humans the spoils of an intergalactic war, but if it does, it’s not like I think that’s a bad thing. Like I said, I just find it unlikely that such a memeplex that emphasizes biological humans so much will end up winning — but if it does, hey good for them.

And if you’re able to align your ASI with your values, the technology presumably also exists for me to become the ASI (or for the ASI to become me, because again I consider anything that’s isomorphic to me to be me). Those of us who don’t care to wait until geoengineering fixes Mars’ atmosphere up to colonize Mars will either 1) already have colonized it eons before it’s ready for you to step foot there, or 2) be more efficient at colonizing Mars because we don’t care about expending resources on building human-compatible habitats. I just don’t see where we’ll be at a disadvantage relative to you; if anything, it appears to be the opposite to me, which is why I mentioned the Amish.

excise them from the deepest corners of space – even if it takes half the negentropy of our Hubble volume, and me and mine have to wait until Deep Time, aestivating in the nethers of a dead world. See you, space cowboy.

That’s like saying that if the thousand-year Reich lived up to its name, won World War II, and genocided mainland Europe for the next thousand years, then it would have been a more fit ideology than communism or capitalism, and only Aryan Germans would exist in Europe. I mean, sure, if that happened, but that’s rather tautological now, isn’t it? If memeplexes like yours win and eradicate mine, then they will clearly have been a more evolutionarily fit memeplex. But since we can’t fast forward time by a few million years, all we can do is speculate, and I’ve given my reasons for why I suspect my memeplex is more evolutionarily fit than yours. Feel free to give your own reasons, if you have some in between the snark.

Because a machine isn't a human. I don't care about our silicon great-grandkids since they're nothing to do with me.

That’s fair. All of us have limits as to what we see as “us” and our descendants.

Why will the machines burden themselves with human biological artifacts, any more than explorers setting out to map the New World or find what is on the other side of the Equator or the interior of Antarctica wouldn't bother to bring along fossil trilobites?

I’m envisioning a scenario where these machines view themselves as the continuation of human civilization by other means, as the next stage in the continuation of the process of life and evolution, rather than a clean break from all that came before. Those explorers you mention carried with them the religions, culture, technology and political administration of Europe. In the scenario I imagine, those machines would do the same. They would value the existence of old human artifacts the same way we value the existence of the pyramids — not because we made it ourselves, but because these are the last surviving remnants of a people that are long gone, and historical artifacts give us a lot of information about the past.

In that hypothetical, you may not see them as your descendants, but they’ll see you as their ancestors all the same. It’s like if a racist grandpa refuses to see his mixed race grandkids as his own; that doesn’t stop the grandkids from acknowledging their origins.

I think we could achieve some kind of machine intelligence, but conscious? Who knows?

Well yes, that is the hypothetical I posed after all, isn’t it?

A swarm of self-replicating machines, blindly continuing on with their programming, may well be what emanates from Earth to colonise the stars, and I care about that as much as I care about a swarm of locusts.

Agreed.

You’ve said it well. I agree, even with the part where you said we might potentially disagree.

What should anyone care about some loosely defined isomorphism, if it even holds?

Why should anyone care about anything? Why should anyone care about individuals with genes that are similar, but not identical, to them? They don’t have to, but evolution has selected for altruism in certain scenarios.

I’d bet that the memeplexes of individuals like me are much more likely to colonize the universe than the memeplexes of individuals like you, who insist on expending expensive resources to engineer space habitats for biological human survival. Not that it is morally superior for my memeplexes to propagate more, of course. It’s not immoral to be Amish, it’s just irrelevant.

Just instantiate a distilled process that has similar high-level policies, and go out.

If those policies are similar enough to mine, that’s fine with me. My children are newly instantiated processes rather than clones of me. I’m fine with them taking over my estate when I die, so I don’t see why I would begrudge other instantiated processes that are aligned with my values.

Suppose you put me under and copy me atom for atom, mental bit for mental bit, so that both copies of me wake up thinking of themselves as the original. For all practical purposes, both of us will experience the same emotions and reasoning (with perhaps some deviation due to quantum + environmental fluctuations) with regards to the situation we’re in. Neither of us can tell which one is the original, because we’ve been copied atom for atom. If we have to decide, both of us would prefer “me” surviving over the other one. But ultimately, if I am the way I am now, I would be much less perturbed by the death of one of us, now that I know that mdurak will live on in a very real sense.

Perhaps both of your clones would have a much stronger visceral reaction to dying. That’s fair, because even in regular life some people are more/less averse to dying for their country. But that doesn’t change how it can make sense to see both copies of a cell that just underwent mitosis as being essentially equivalent to the original (chance mutations aside), and I don’t see how cloning myself is functionally any different from mitosis.

They will need to be altered greatly, not copied faithfully.

I agree. But I suspect that the ego could survive such alterations, so long as the process approximates a continuous curve. We are far different now from what we were when we first learned the word “I” as a baby, but because there’s a continuous thread connecting us through time, we’ve been able to maintain the same basic sense of identity throughout our lives.