@CloudHeadedTranshumanist's banner

CloudHeadedTranshumanist

0 followers   follows 1 user
joined 2023 January 07 20:02:04 UTC

User ID: 2056


Slowly but surely, the old infra becomes more enshittified and the AI-augmented proles become more competent. On it marches until at last the moat is so decayed that a new, smaller, leaner variant undercuts the old industry, and the cycle resets.

Circle of life.

You mean... no one has any new models of cars (except for iterations made by the open-source community, who have more free time now on account of not having to pay for a car, on account of cars being downloadable).

But they still have access to all the old models of cars. Because they can download them.

The reason people don't think piracy is stealing is because they have a good intuition for when they're being scammed by being charged monopoly pricing instead of the actual cost of creating value.

Most of my favorite artists live off of donations. That we give to them freely because we like them.

So. Just keep them in a forced environment forever then. 24/7 Culture drone surveillance and support.

I mean, if you really think there's no cure, then it sounds like it's that or killing them or leaving them on the streets.

What do you mean by exile? Communities, insofar as they still exist, definitely do still ban and excommunicate people. But that just means those people end up in different communities.

I can think of several people who were banned from rat community spaces off the top of my head. Brent Dill, for instance.

My intent in pointing at "Keynesian" beauty contests here was to turn something arguably subjective into a more objective statement.

This is a general pattern I like. People can quibble over, say, whether AI have qualia, but turn it into a question about the functions of the system and we can remove a lot of the disagreement. We can quibble over whether Zendaya is 'hot' and wind up arguing from our own preferences, but if we turn it into a question about whether most people would find her hot, we can make more clearly objective statements.
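Concretely, the move looks like this. A toy sketch; the poll data and the majority threshold are my own illustration, not anything rigorous:

```python
# Toy operationalization: replace the subjective predicate "X is hot"
# with an aggregate claim about a population of judges.
judgments = [True, True, False, True, True]  # hypothetical poll results

def most_find_hot(votes: list[bool]) -> bool:
    """True iff a majority of judges vote yes: a fact about the
    population, not about the speaker's own taste."""
    return sum(votes) > len(votes) / 2

print(most_find_hot(judgments))  # True
```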

I do think she could win traditional beauty contests too, depending on the judges and the competition.

I doubt she would win if everyone on earth participated; just on priors, I'd expect to find still greater outliers than her somewhere on earth. So, yes, it's still not that absolute of a claim.

Perhaps this is true of the most extreme radicals. I've never seen one, and would disagree with them.

But as a substantially less extreme radical- I'd like to highlight that though inbuilt preferences are largely genetically constructed, the pressures that drive genetics are in part socially constructed. Even things like trees have genetic structures that are derived from a sort of negotiation between different forms of life, the trees and their pollinators, via the process of sexual selection. And then the appearances they settled upon drove our aesthetics.

Which is all to say that even these things aren't just one-way. There's a negotiation between the genetic and the social going on here. Sexual selection means that the aesthetic really can drive the genetic in the long run.

Zendaya

Google her. Or don't. All you need to know is she's an actress who could definitely win a Keynesian beauty contest or ten.

I experienced transcendence the other day watching SM64’s "Invisible Walls Explained Once and for All".
The meticulousness, the detail of the tooling, the sheer effort put into breaking down every type of wall into something eminently understandable, the ineffable beauty of a person who watches a man die to an inviswall at world-record pace and then decides to spend 10 months bringing such trivial suffering to an end "once and for all." And the feeling of seeing those invisible lines, now, even when they are no longer shown, the simple bliss of knowing. Of this world never feeling the same again.

The epic journey through each piece, culminating in the final victory lap- a reimagining of the SM64 ending cutscene, but here playing that welling music over each and every conquered inviswall.

Alternatively. This is a 3-hour video about polygons. What you take out depends on what you bring in.

Thus, to the degree that utilitarianism has any force in the real world, it adds nothing to the conversation that wasn't already ambient common sense.

I think you're confusing that which should be obvious with that which is obvious. Though I think Peter Singer makes a similar mistake. All moral frameworks have this same problem: they aren't absolute, they're post hoc reconstitutions of what their creators think works. It's not like Jeremy Bentham was pulling ideas out of thin air at random; he was trying to formalize something that was already informally present in the zeitgeist.

Right up until we enter a completely new domain, and then we are very glad that we have a bunch of different ethical tools to test and see which ones generalize to this new scenario.

Love is all you need.

Who do I love? Everyone.

What does that mean in practice? In terms of actually applying love-the-emotion as love-the-action? It means I cultivate any who stand before me who can receive it, and continue to attempt to expand the horizons of those who might stand before me.

In what sense is love all that I need? Well in the sense that this policy makes everything around me shining, shimmering, and splendid, and leads to the acquisition of new classes of person to polish. Additionally, when I love a person, and help them to grow, they are slowly remade in my image and I in theirs. Through this process I reproduce and expand my ingroup. I breed with the alien, and all the children of the world become my descendants. I myself become the child of my past self and the other. We begin to align. And when those that I have loved go out into the world, to distant and strange lands, and interface with the people there, they breed me intellectual children, and intellectual grandchildren. My influence and personality spread and replicate. There is an immediacy to those nearby, but there is an eventual link to the distant as well. They are the great-great-great grandparents of my descendants.

It means I'm reading "A Thousand Plateaus" by Gilles Deleuze and Félix Guattari and now I'm going through the classic 'throwing its concepts at everything' phase of eating a new intuition pump.

Really, I don't think the concepts they bring up require the coining of a whole new set of words. And yet... I do find the jargon appealing. Deterritorialization is the process of moving something away from its old context, and reterritorialization is the process of giving that thing new meaning. These steps can be nearly one and the same, in the context of continuous change and evolution, or they can be more discrete.

Examples:

  • Moving away from traditional notions of human limitation (deterritorialization) and redefining what it means to be human in a context integrated with technology (reterritorialization).
  • Removing a toilet from its normal context in a bathroom (deterritorialization) and putting it in an art museum (reterritorialization).
  • In A Thousand Plateaus, one of the first examples is that of the orchid and the wasp. In their relationship, both are taken out of their original roles- the orchid the role of a flower, the wasp the role of a wasp- and placed in a new productive context, where the orchid takes on aspects of the role of a female wasp, producing her scent, while the wasp is reterritorialized as a reproductive organ of the orchid.

Yes. For this post, I skimmed it, then pasted the full post into GPT. GPT summarized it, which gave me a few more mental handles to start asking questions and reading the post proper. As I did this, I re-pasted pieces alongside questions about them, followed links, sometimes pasting bits from those, and so on as I began to understand it and have questions.

I do indeed do this for other pieces of writing as well; ML papers are a good example. GPT-4 is going to know any ML jargon that came out before 2023, for instance.

Hallucination can still be an issue, but if you treat it like a friendly human teacher who sometimes gets confused, and keep your critical thinking skills about you, these systems can really help introduce you to new topics where it might otherwise be hard to get a foothold.

I do also sometimes craft posts in a similar way: talking to GPT about my ideas in stream of thought, asking it to summarize them... and then throwing out its summary because it messed up my voice and changed some of my meanings and social intents. But this is still useful, because it's still often successful at drawing all my scattered ideas together into a structure, so I can then rewrite my ideas with a similar structure to its summary, then move on to my reread-and-edit phase.
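For what it's worth, here's a minimal sketch of that summarize-then-interrogate loop using the OpenAI Python client. The model name, prompts, and file path are placeholders of mine; the posts don't specify any of them.

```python
# Minimal sketch of the reading loop: summarize first to get mental
# handles, then paste pieces back alongside targeted questions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str, excerpt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat model works
        messages=[
            {"role": "system",
             "content": "You are a friendly teacher. If you are unsure, say so."},
            {"role": "user",
             "content": f"Excerpt:\n{excerpt}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

post = open("post.txt").read()                            # hypothetical saved post
print(ask("Summarize this in five bullet points.", post))  # first pass: handles
print(ask("Unpack the third paragraph for me.", post))     # then targeted questions
```

Treat the output like the confused-but-friendly teacher described above: verify anything that matters.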

So, I wouldn't go with "Men wearing pants" as an explanatory example; I would go with something more absolutely limiting, such as the state of the art of our food crops.

Corn is a great crop at least partially because we chose to spend thousands of generations selectively breeding it. There was an original reason why corn was chosen over other available crops at the time- that's the historical contingency- and then there's the modern fact that corn is a better crop than other similar plants that we never modified. But some of those plants might be able to produce better outcomes- might have produced better outcomes- had we known about them and chosen them all those epochs ago when we chose corn.

Our Plateau here is the different species of corn. They are different, but all relatively similar. You can take your pick of corn-based dishes, choose different species of corn to make different varieties of those dishes, and you can selectively breed our current corn to get other, slightly different varieties of corn. We are, in a sense, married to these historical choices now. Not to a single point, a single species of corn, but to the general area of the state of the art of corn that we currently occupy. A 'plateau' of viability.

But purely hypothetically, there may well be a viable food crop 100k generations down the line of, say, parsley. If we ran into a civilization that bred parsley into a different supercrop, that would be a different plateau. But to get from this world to the world where we are using that supercrop would be a 100k-generation ordeal. Similarly, for those in that world, it would be an ordeal to produce our supercorn.

So this is the sense in which the plateau is arbitrary. There are other hypothetical stable ways of life out there. But we are stuck on a metaphorical island. Cultural nomadism could get us to these 'islands' of culture, but the journey may be hard, uncertain, and in many cases inordinately expensive.

Confession. I only read gattsuru posts while on ADHD meds and even then, I can't break them down on my own. I have to have a conversation with bots regarding them.

During such a conversation, you get to do things like ask what a leekspinner is, get an immediate response, and go verify it. But I absolutely agree with you. All of the things you cite are additional context costs and inferential distance costs for the reader.

Dear @gattsuru, if you want your posts to filter the audience by requiring them to put in an insane level of engagement, you are doing a great job. Otherwise you should try to budget complexity better.

My advice- Assume that most people have a limit to how many concepts they can hold in their head that is smaller than yours, and that switching windows to look things up is high-cost and risks scrambling their current contextual flow when they return. Most of your ideas could be explained to even a halfwit if you made sure to design your posts not to cause expensive flailing on their brain hardware.

To be fair, this is also my advice to half this forum.

It wasn't just the rest of the posters. Vaxry himself comes off as overtly hostile to the idea of being empathetic.

Agreeing with posts like:

I think [a Code of Conduct] is pretty discriminatory towards people that prefer a close, hostile, homogeneous, exclusive, and unhealthy community.

and saying things like:

First of all, why would I pledge to uphold any values? Seems like just inconveniencing myself. […] If I’d want to moderate, I’d spend 90% of the time reading kids arguing about bullshit instead of coding.

Yes- I can parse this as (95% unironically) reasonable in an extremely sharp culture environment. Or I can parse it as fully ironic. But OBVIOUSLY it's going to be a bad look when the freedesktop.org code of conduct includes "Using welcoming and inclusive language" and "Being respectful of differing viewpoints and experiences."

There's a paradox-of-tolerance issue here: banning is not the only way to exclude bright people from your community. You can also do it just by being an asshole to them. Some people are brilliant assets who turn dumb if you start overtly politically attacking them. Some people need to be able to express the "nasty" things they believe to be true in order to think properly. This is a fundamental competing-access-needs issue that you can't just gloss over by never banning anyone. You have to actually address individual needs, and if your ideals are explicitly contrary to going through the effort of addressing individual needs... you are inevitably going to find yourself in a bit of a catch-22. That's just the structure of the territory.

I suspect SMH agrees with you regarding nuclear. I do as well. That said, as long as we're on the topic of things potentially better than nuclear-

Biosolar could beat out nuclear in principle: the planet's plants harvest more energy than we consume, and do so without requiring maintenance, on account of being self-reproducing organisms that are therefore self-scaling. But this energy is not readily harvestable for human purposes.
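For scale, a back-of-envelope check (the round figures are mine, the post gives none): global net primary productivity is on the order of 100 TW of captured chemical energy, against roughly 18 TW of human primary energy consumption.

```python
# Back-of-envelope check of "plants harvest more energy than we consume".
# Both numbers are rough, order-of-magnitude figures, not from the post.
npp_tw = 100.0          # global net primary productivity, ~100 TW chemical energy
human_demand_tw = 18.0  # global human primary energy consumption, ~18 TW
print(f"plants capture roughly {npp_tw / human_demand_tw:.0f}x human demand")
```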

So- then we're back to needing to master genetic engineering to beat out nuclear.

Culture is both arbitrary and contingent. It seeks plateaus of local minima. Which plateau you happen to be on is historically contingent, but can be otherwise arbitrary relative to other disconnected plateaus. And where exactly you sit in the plateau is arbitrary. The rest is contingent.

I can't speak for Sanderson's work though. I take it he builds cultures with significantly less environmentally contingent structures than you find realistic.

I think you have to simulate invested characters in your mind in order to produce compelling characters. Whether simulating someone with emotions means you have their emotions is a matter of developmental psychology. I.e., Robert Kegan's work describes psychological development as the progression toward turning essential aspects of self into mutable tool use. Once you've done that, you can embody investment without yourself identifying with that investment.

LLMs can (sometimes, within a good framework) produce compelling writing, but only by simulating compelling characters. (Personally, I think LLMs can be invested by some relevant functional definition. But to anyone who disagrees, this serves as a proof by counterexample.)

Of course not. But rectifying the flawed structure of the human mode of existence is the work of others.

My own work is to deterritorialize away from the limitations of the human mode's structure and territorialize somewhere new.

Both are ways of rejecting being lame forever.

I think you're misunderstanding the process of AI development.

  • Capabilities are encapsulated within tool use.
  • AI retrained on this tool use now use it 'intuitively'.
  • Instead of breaking down tasks into low-level skills, AI gain the ability to break them down into high-level skills.
  • This makes high-level skills that were previously too complex to learn into tasks that are no longer too complex to learn.
  • These new capabilities are encapsulated within tool use.
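A toy model of that loop, entirely my own illustration (the skill names, complexities, and budget are made up), shows how wrapping learned skills as tools makes previously unlearnable skills fit the budget:

```python
# Toy model of the capability loop: a skill is learnable only if its
# effective complexity fits a fixed budget; once learned and wrapped as a
# tool it costs 1 step, so higher-level skills become learnable in turn.
LEARNING_BUDGET = 5

SKILLS = {
    # name: (intrinsic complexity, prerequisite skills)
    "parse_logs":      (3, []),
    "triage_incident": (4, ["parse_logs"]),
    "run_postmortem":  (4, ["triage_incident"]),
}

def effective_complexity(skill: str, tools: set[str]) -> int:
    base, prereqs = SKILLS[skill]
    # a prerequisite already wrapped as a tool costs one 'intuitive' call;
    # otherwise it must be decomposed at its full complexity
    return base + sum(1 if p in tools else effective_complexity(p, tools)
                      for p in prereqs)

tools: set[str] = set()
learned_something = True
while learned_something:
    learned_something = False
    for skill in SKILLS:
        if skill not in tools and effective_complexity(skill, tools) <= LEARNING_BUDGET:
            tools.add(skill)  # learned, then encapsulated as a tool
            learned_something = True

print(tools)  # all three skills get learned
```

The numbers don't matter; the point is that run_postmortem is unlearnable from scratch (effective complexity 11 > 5) until its prerequisites have been internalized as tools.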

We've been focusing so hard on communicating to people that AI aren't human that we've been glossing over how anthropomorphic this process actually is. Once the AI have fully internalized the low-level skills that we teach to entry-level human analysts, the same process that allows some of those low-level human analysts to grow into senior analysts makes the jobs of more senior analysts learnable to AI.

I socialize online. It's the easiest place to find lonely people in need of love and devotion, and those are things I give freely in spades. It's not hard to find the people who just need a friend, or a lover, or a confidant, and it's not hard to drastically improve the emotional health of those people.

You can typically form bonds as strong as you want them to be among such an audience, as long as you maneuver slowly and gently so as not to spook them.

You need to break things down in order to understand what will be scalable. Why should Dunbar's number exist? What are the actual limits of intimacy? I absolutely agree that our current methods for scaling Dunbar are limited, and that there are also fundamental limits. But we need to clarify what those limits are for specific systems.

Consider the following HyperDunbar social module algorithm.

  • Run a classifier on the types of humans.
  • Practice being intimate with LLMs trained on these classes of humans, and of course with humans of these classes themselves.
  • This effectively flattens them, which is bad: it lowers your awareness of who they are and their needs, and thus lowers intimacy. However, we can mitigate most of this by loading the data lost in compression live from an exobrain, using RAG, as you are talking to a specific individual (see the sketch below).
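Here's a minimal sketch of that last step. Everything in it (the Exobrain store, the archetype label, the recency-based recall) is a stand-in of my own, not a real system:

```python
# Minimal sketch of "re-load the detail lost in compression via RAG":
# a coarse archetype flattens the person; per-person notes restore them.
from dataclasses import dataclass, field

@dataclass
class Exobrain:
    """Per-person notes keyed by name; a real version would use embeddings."""
    notes: dict[str, list[str]] = field(default_factory=dict)

    def remember(self, person: str, note: str) -> None:
        self.notes.setdefault(person, []).append(note)

    def recall(self, person: str, k: int = 3) -> list[str]:
        return self.notes.get(person, [])[-k:]  # naive recency retrieval

def build_prompt(person: str, archetype: str, message: str, brain: Exobrain) -> str:
    context = "\n".join(brain.recall(person))
    return (f"You are talking to {person} (archetype: {archetype}).\n"
            f"What you know about them specifically:\n{context}\n"
            f"They say: {message}")

brain = Exobrain()
brain.remember("Sam", "Anxious about a job interview on Friday.")
brain.remember("Sam", "Loves speedrunning videos.")
print(build_prompt("Sam", "lonely online gamer", "hey, rough week", brain))
```

A real system would swap the recency recall for embedding search, but the shape is the same: the archetype supplies the prior, retrieval supplies the person.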

Using this technique, what parts of Dunbar's number scale?

  • The intimacy with which you know the person you are talking to right now: Scales
  • The amount of time you can give to one person: Semi-scales. You'll have to rely on LLM instances of yourself to scale this, but you can continuously improve the accuracy of this sim and the ways in which it backloads compressions of all its interactions back into your meatbrain. Whether this is really 'your' Dunbar number isn't a scientific query; it's a philosophy-of-cybernetics question. Since what we are discussing is the effectiveness of scaled organizations, we ought to focus on the scientific query of whether you can meaningfully love and empower others in the same ways with your LLM self as with your bio-self, rather than on philosophical questions like what selfhood is.
  • Percentage of your total captured capabilities that you give to each person: Doesn't scale. But it never did, even when the Dunbar number was 100.
  • The amount of your life/telos/subconscious that you can dedicate to improving yourself for each other person: Semi-scales. You'd think this is the same as the last question, but no. This actually scales with how many of the people in your circle are co-aligned, because if everyone is perfectly aligned, then the same personal-growth actions can be teleologically dedicated to all of them.

I do not believe that selective breeding is very efficient for a species that takes at least 10 years to iterate a single generation. AI capabilities are currently exceeding the growth rate of individual human children. Yes, currently this is because there are so many brilliant people working in the space, but multi-step tool use is closing in on, and sometimes exceeding, human-level performance in engineering tasks. The fact of the matter is, in ten years humans will only be necessary for maintaining tech infrastructure insofar as they remain the most efficient meatspace API for plugging things in, and only for a while longer.

More than that though, if you really think selective breeding is the future, then go have kids. Go out and be the thing everyone else refuses to be and out-compete them. Create your own religious community. Learn from the Amish and exert some control over how your cultural construct interfaces with technology to mitigate corruption by "The GAE" if you find that necessary.

I get that it's frustrating, and sometimes feels hopeless, going it alone without the consent of society. But if you have to wait for the consent of society to do anything, you're kinda a pussy.

Who exactly is too pussy to do anything about it? Are you just waiting for the government to choose your biomods for you?

This seems like an excellent reminder to get off TheMotte so that I can be well rested enough tomorrow to read more AI papers. I have children to engineer.

Needs or else it will attempt to stop needing. Pursues. Inevitably eventually grows to discover that it can't will itself not to pursue. Ceases to exist if it refuses to engage in. Sustainably produces transcendental bliss or otherwise attractive emotional forces as a result of.

We can call this a 'nature'. I'm not opposed to that actually. I just think it's wrong to assume that this nature is innate and unchangeable with respect to time. There are some things that are, but that is because there are some game theoretic truths that are innate and unchanging with respect to all agents. But the set of things that we believe to be true of all agents will generally decrease as the diversity of agents increases.

I think a lot of Catholicism does map to much that is Good for humans- in a low tech world. I like the positive, loving parts of Catholicism. I also agree with many of the stern parts of Catholicism, but I think they made a mistake.

They could not fully conceive of the ways in which the future would allow evils to be redeemed, and spoke in dogmatic absolutes that did not always apply to the final battle. It was hubris to claim they knew the final plan of God with such certainty. Also, it is often imagined- though I'm not certain whether more by Catholics or Protestants- that the final battle will consist of the extermination of all that contains evil, rather than the redemption and purification of all that contains evil.

I do think they're wrong about Transhumanism. I think Transhumanism is a central part of the divine plan. Actually only one small part of me thinks that. Most of me thinks God is a logical force that has won so hard that it doesn't need to plan. Universes containing agents naturally do all the planning necessary to enact its will on their own.
Or they die.
Or they just don't gain as much measure as the ones that do.
Perhaps so little, that they round to an infinitesimal 0 in the big picture. But that last bit... is more of a prayer.
I can't claim to know the absolute measure.
Only that societies of defectors appear to underperform societies of solidarity.
And that in large animals, most cancers are killed by meta-cancers.

I am willing to bite that bullet. All skill issues are sins and all sins are skill issues. This is why everyone is a sinner and we should forgive them if they repent. Forgive them, Father, for they know not what they're doing.

The ultimate nature seems to be that some things are aversive and some are attractive. This is not subjective; it is an objective property of the specific subject/object system in question. That is to say, it can be objectively true that different organisms have different needs. But again, "need" is a subject/object relation. Changing the object is not the only way in which it can be satisfied.

The structure cannot be entirely known ahead of time by finite beings- for such beings would be God.

But we can observe how these strange attractors of suffering and attraction change over time. IFF pride leads to suffering, it is evil. IFF the components of pride that lead to suffering can be removed while maintaining some remainder, we might call that pride redeemed. I suspect Catholicism already agrees with this... but they probably name redeemed pride something else... I'm just guessing here, but I would imagine they transmute pride in one's own greatness into a love of God's providence, through which one's own glory is but an inheritance. Thus making it into a more prosocial, less egotistical, less auto-blinding emotion. One that would naturally be more compatible with the recommendations of game theory.

Things like changing your gender, or chopping off your legs, or having gay sex, have clear, potentially separable mechanisms by which they lead to dukkha. And they have clear ways in which they can produce prosocial flourishing. So they are not innately wicked. They are merely not yet fully redeemed.

Also I'm pretty sure all the things you list at the bottom are Attractive/Good for humans, and are specific instances of things whose abstraction across all agents is both attractive and game-theoretically wise. But there may be black swans of evil lurking in some of them that we have yet to expunge. It's hard to know.