Tachikoma

0 followers, follows 0 users
Joined 2022 September 05 01:46:36 UTC
User ID: 352
Verified Email

No bio...

In a future where we can create cheap, tasty vat-grown meat that is a perfect or near-perfect substitute for the natural variety, we should expect the animal rights movement to gain a lot of steam. I would expect industrial animal husbandry to fall, but I think small-scale hunting and farming might hold on by virtue of protecting and practicing long-running cultural practices.

The rates of declining global fertility seem to counter the idea that life has inherent sprawling tendencies. Or at least they suggest that once a species is sufficiently intelligent, capable of long-term planning, and in control of its own fertility, those sprawling tendencies can be intelligently managed.

I like being flesh and blood, and I wouldn't trust the alternative until it's been thoroughly tested. But the point of what I'm aiming for is that the choice isn't either/or: if you want to digitize yourself while others want to go to space in the flesh, there are ample resources to support either mode of being. The AGI and ASI that go out to pave the way should make room to accommodate a future with a wide diversity of human life. We should support exploring the frontiers not just of space but of human existence. If this were just about convenience, you wouldn't need humans at all; you could build AI that are mentally much more suited to space exploration.

That's why you send the AGI probes ahead to build and run artificial habitats, and then pick up any humans from Earth afterwards who are interested in leaving (not necessarily permanently). It's true that taking care of meat in space will require significantly more resources than digitized minds (whether artificial or formerly organic), but then what's the point of this whole project of building AGI and ASI if we can't have our cake and eat it too?

Where's the quadrant for creating an AGI that can fit in a space probe, then launching a fleet with enough equipment to reach the asteroid belt and establish an industrial base from raw materials, building space habitats for humans who want to leave the old politics, animosities, and nationalism behind on Earth? Each habitat or ship can practice whatever politics or ideology it wants, and people can self-select into whichever suits them and pursue a future among the stars. Essentially: which quadrant leads to the Culture? That's the one I'm in.

Space exploration and colonization will happen, but it will be led by machines. Humans are not built for space: not just because of the environment needed to sustain us, but because of its vastness across both distance and time. If it weren't possible or feasible to build intelligent machines, then maybe in a thousand years we would progress far enough with genetic engineering to create a splinter race suited to the demands of space travel. But it is possible, and we are already in the process of making AI that will chart the path to the stars.

This idea reminds me of a sci-fi series where names are much more descriptive, which makes sense when the individuals live in a galaxy-spanning society composed of trillions of citizens.

Names: Culture names act as an address if the person concerned stays where they were brought up. Let's take an example: Balveda, from Consider Phlebas. Her full name is Juboal-Rabaroansa Perosteck Alseyn Balveda dam T'seif. The first part tells you she was born/brought up on Rabaroan Plate, in the Juboal stellar system (where there is only one Orbital in a system, the first part of a name will often be the name of the Orbital rather than the star); Perosteck is her given name (almost invariably the choice of one's mother); Alseyn is her chosen name (people usually choose their names in their teens, and sometimes have a succession through their lives; an alseyn is a graceful but fierce avian raptor common to many Orbitals in the region which includes the Juboal system); Balveda is her family name (usually one's mother's family name) and T'seif is the house/estate she was raised within. The 'sa' affix on the first part of her name would translate into 'er' in English (we might all start our names with 'Sun-Earther', in English, if we were to adopt the same nomenclature), and the 'dam' part is similar to the German 'von'. Of course, not everyone follows this naming-system, but most do, and the Culture tries to ensure that star and Orbital names are unique, to avoid confusion.

For added laughs, the author signs off the blog post he wrote this in with his own name translated into this naming scheme.

Iain M Banks (Sun-Earther Iain El-Bonko Banks of North Queensferry)
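
As an aside, the scheme quoted above is regular enough to parse mechanically. Here's a minimal Python sketch of such a parser, assuming (per the description) that 'dam' always introduces the house/estate and that the first four components are single space-separated tokens; the CultureName fields and the parse_culture_name helper are my own illustrative names, not anything from Banks.

```python
from dataclasses import dataclass

@dataclass
class CultureName:
    origin: str   # system/Orbital part, e.g. "Juboal-Rabaroansa"
    given: str    # given name, almost invariably the mother's choice
    chosen: str   # name chosen in one's teens
    family: str   # usually the mother's family name
    house: str    # house/estate one was raised in, marked by 'dam'

def parse_culture_name(full_name: str) -> CultureName:
    """Split a full Culture name into its five components."""
    parts = full_name.split()
    dam = parts.index("dam")  # 'dam' works like the German 'von'
    origin, given, chosen, family = parts[:dam]
    return CultureName(origin, given, chosen, family, " ".join(parts[dam + 1:]))

balveda = parse_culture_name("Juboal-Rabaroansa Perosteck Alseyn Balveda dam T'seif")
print(balveda.origin)  # Juboal-Rabaroansa -> Rabaroan Plate, Juboal system
print(balveda.given)   # Perosteck
```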

Have you never socialized with other people as a group, for the purpose of socializing, and wondered how you could make it better next time? (Not implying that every party or social event needs to be improved every time, or that there is always some obvious way to improve it.)

Competing to throw the 'best' party doesn't have to be treated as a sociopathic game of Risk, nor does partying mean going to a nightclub and competing to see who can get the most drunk or score the hottest chick. A party can be as little effort as stargazing with friends while getting drunk or high or neither. Replace 'partying' with 'hanging out with friends' if you want to quibble with the choice of words. The point is that in a utopia, what it means to compete can be entirely different and much more pro-social than what it means in other societies.

Also, I would imagine that trying too hard to 'win' at socializing, having fun, and letting loose (the goal of a party) is the opposite of the sort of behaviour necessary for a 'good' party in the first place.

Maybe you haven't been to the right parties yet.

One of the key defining features of being human is that we can form bonds that go beyond simple kinship, which allows us to create complex societies, up to and including our current globe-spanning super-civilization. As the meme goes, we actually are built different.

Of course, there will always be struggles and competitions - the only difference is that in a future utopian society they will be over who throws the best parties, who can create the most creative and lavish gifts to give away, and who can climb the status hierarchy by being the most well-liked, the most admired, or the talk of the town.

I added the disinterested AGI as a possibility, but I don't think it matters, because the people who made it would still have the same drive to try again, since a benevolent but mostly indifferent AGI is not serving those AGI builders' goals (whatever they might be). The only way to lock in a future where a benevolent, indifferent AGI exists is if it is the first AGI created and it then prevents us humans from building any more AGIs. But the only way to do that would be to severely curtail or heavily surveil us, which would contradict its nature as indifferent.

Long story short people can leave the Culture, but it's almost always depicted as a bad idea in universe, and the Minds are there every step of the way to guide folks into leaving.

Except it isn't presented as being bad? Culture citizens are free to travel, and that is one of the more popular things to do, whether within the Culture, to other civilizations, or into the wilderness. Whole factions break off from the Culture due to philosophical differences (the Peace faction; the Zetetic Elench, who believe they should modify themselves to understand aliens better).

but I wish that there was another collective of humanity that was powerful, and independent from the Minds

There isn't, though I think it's mentioned that humans can undergo modifications to become more like Minds. But then, in becoming a Mind, they wouldn't be human anymore... so what's the point? When it comes to playing chess, it wouldn't matter how many chimps were tossed together to face a human player. Likewise, there's no number of human grandmasters you could put together in a room that could outplay the current state-of-the-art chess-playing AIs.

Which type of 'benevolent AGI' would be your preferred outcome?

My preferred benevolent AGI is one that provides all humans with the conditions necessary to live a good life. What is a good life? That is something everyone has to decide for themselves, which is informed by a complex stew of genes, culture, education, age and more.

The only thing I am uncertain about is how to handle communities - I and some group of people might choose to live out in the wilderness like our ancestors did, forsaking modern miracles like medicine in the process. As adults we can accept the hardships that lifestyle entails, but what about our children? Our grandchildren? Ought the benevolent AGI intervene to offer those children basic medical care? Or education? This isn't a new ethical debate; it already exists in the case of Jehovah's Witnesses, who object to blood transfusions and impose that choice on their children who may need them - a refusal that in some countries can be overridden by medical staff and the government.

Given that many people seem to think that AGI is inevitable, I can never understand how a future in which that AGI is benevolent is worse than one in which it isn't.

Is indifference better? An ASI that is akin to a force of nature, bulldozing planets (with humans on them) not out of hatred but just because we happen to be in the wrong place at the wrong time? Or would they rather create an actively malicious ASI? Hey, at least then you get a villain to unite against... even if you will very likely lose, but of course that is the case for every hero in any good story.

How about benevolent indifference? The AGI stays in its lane, humans stay in theirs, everyone is happy, right? Ah, except for those AGI builders who made the thing in order to do X (where X is some problem that's too difficult for them to solve on their own). What's to prevent the builders from just trying again, tweaking the parameters or the architecture until they get what they want? Or until they get something they didn't want but can't put back in the bottle.

If the only answer is benevolent, then the only question is what form that benevolence takes. Is it being a nanny who helps a drowning child? Should the ASI only intervene when we are about to commit an existential blunder? Does climate change count? It won't be an existential crisis, but it will likely result in the deaths of tens of millions and the immiseration of hundreds of millions more. Do you think drowning Bangladeshis (who emit very little carbon in the first place) would consider being saved by a benevolent ASI to be nannying, and refuse it on those grounds?

Of course, if you think that AGI is not inevitable, other futures are possible. But given that even many people close to AI research struggle with how to align said AI, and even they can't coordinate to slow AI development, I don't really see how it doesn't emerge at some point.

Sidenote: You can leave the Culture. You don't need to be babysat if you think you are all grown up. It is, by design, anarchy.