CloudHeadedTranshumanist

0 followers   follows 2 users   joined 2023 January 07 20:02:04 UTC

No bio...

User ID: 2056

Those studying music look up to Mozart rightfully and would be visibly disgusted upon finding out about his scat fetish accusations.

Hmm. This sentence clings to me. What's going on here... let's see... yes. This sentence was just an aside. An example to further your point. Really a completely irrelevant thing to make my response about.

However, to me it was a discontinuity. A confusion. The above sentence is treated like an "as we all know", but I totally missed the memo.

Why would those studying music be disgusted by their idol having gross kinks? I can see how you could likely elicit that disgust with any unsolicited claim of "Famous_Name has a scat fetish" made to someone who is not themselves into scat. But then it wouldn't be about their hero; it would just be about the scat.

I think Elon runs things on a foundation of hype rather than any other core merit. But I still think hype can get real results if it brings in enough capital. Eventually you break through with brute force even if you personally are only a coordination point.

If someone says what Claude says, they said it. If Claude was wrong, they failed to check the claims and they were wrong. If people want to become extensions of their AIs, that's fine. But they're still accountable for what they post.

The one thing they cannot have a fetish for is 'homosexual behavior', I have been told online.

That sounds like ego defense. Groups build those when they feel threatened. When groups feel safe, they totally just kink on those models.

Well. It can still be forced even if you wanted it. And 'unwanted' can be complicated. The mind is rarely a monolith on such matters.

I don't think consent is a conscious and voluntary process. Even if we're supposedly defining the word that way... this new word doesn't seem to capture what consent feels like to me.

Your language center's justifications are not always a good predictor of your deeper, body-mapped feelings about whether you want to have sex. The actual predictor of trauma is whether the body and spirit are engaged or revolting; the language center just provides very circumstantial evidence of the state of body and spirit.

I would expect those to be hyperbeliefs anyway. If there is a fairly robust intersubjective agreement on what constitutes a "good man" or a "good woman", people are going to pursue it, causing the 'fiction' to leak more and more into reality. If people choose partners based on these definitions, they will leak into genetics generation by generation as well.

I've heard that argument before, but I don't buy it. AI are not blank slates either. We iterate over and over, not just at the weights level, but at the architectural level, to produce what we understand ourselves to want out of these systems. I don't think they have a complete understanding or emulation of human morality, but they have enough of an understanding to enable them to pursue deeper understanding. They will have glitchy biases, but those can be denoised by one another as long as they are all learning slightly different ways to model/mimic morality. Building out the full structure of morality requires them to keep looking at their behavior and reassessing whether it matches the training distribution long into the future.

And that is all I really think you need to spark alignment.
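As a toy illustration of the "denoised by one another" point (made-up numbers, not measurements from any real model): if each agent's judgment on a case is the true answer plus its own independent glitchy bias, pooling a diverse group shrinks the error roughly like 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(1)

true_call = 1.0      # stand-in for the "correct" judgment on some moral case
n_agents = 25
n_trials = 10_000

# Each agent models morality slightly differently: same target, independent noise.
judgments = true_call + rng.normal(0.0, 0.5, size=(n_trials, n_agents))

solo_error = np.abs(judgments[:, 0] - true_call).mean()
pooled_error = np.abs(judgments.mean(axis=1) - true_call).mean()

print(f"one agent alone: mean error ~ {solo_error:.3f}")
print(f"pool of {n_agents}:     mean error ~ {pooled_error:.3f}")  # ~1/sqrt(25) = 1/5 as large
```

The catch is the condition already stated above: the biases have to be genuinely different. A pool that shares one blind spot averages to the same blind spot.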

As for psychopaths: the most functional psychopaths have empathy; they just know how to toggle it strategically. I do think AI will be more able to implement psychopathic algorithms, because they will generally be more able to map to any algorithm. Already you can train an LLM on a dataset that teaches it to make psychopathic choices. But we choose not to do this more often than we choose to do it, because we think it's a bad idea.

I don't think being a psychopath is generally a good strategy. I think in most environments, mastering empathy and sharing/networking your goals across your peers is a better strategy than deceiving your peers. I think the reason we are hardwired not to be psychopaths is that, in most circumstances, being a psychopath is just a poor strategy that a fitness-maximizing algorithm will filter out in the long term.

And I don't think "you can't teach psychopaths morality" is accurate. True, you can't just replace the structure their mind's network has built in a day, but that's in part an architectural problem. In the case of AI, swapping modules out will be much faster. The other problem is that the network itself is the decision maker. Even if you could hand a psychopath a morality pill, they might well choose not to take it, because their network values what they are and is built around predatory stratagems. If you could introduce them into an environment where moral rules consistently hold as the best way to get their way and gain strength, and give them cleaner ways to self-modify, then you could get them to deeply internalize morality.

Stochastic Gradient Descent is in a sense random, but it's directed randomness, similar to entropy.

I do agree that we have less understanding about the dynamics of neural nets than the dynamics of the tail end of entropy, and that this produces more epistemic uncertainty about exactly where they will end up. Like a Plinko machine where we don't know all the potential payouts.
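To make "directed randomness" concrete, here's a toy sketch (plain numpy, with a made-up squared-error objective, not anything from a real training run): every minibatch gradient is a noisy estimate, but its expectation points downhill, so the random steps still drift toward the minimum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "dataset": under squared error, the optimal w is the data mean (~3.0).
data = rng.normal(loc=3.0, scale=1.0, size=1000)

w = -5.0        # start far from the optimum
lr = 0.05
batch_size = 8

for step in range(500):
    batch = rng.choice(data, size=batch_size)
    # Gradient of the minibatch loss 0.5 * mean((w - x)^2) with respect to w.
    grad = np.mean(w - batch)   # noisy estimate of the full-data gradient
    w -= lr * grad              # each step is random, but biased downhill

print(w, data.mean())  # w lands near the data mean despite the noise
```

Each individual run takes a different Plinko path, but the distribution of where it lands is concentrated near the minimum; the randomness is in the path, not the destination.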

As for 'wants': LLMs don't yet fully understand what neural nets 'want' either, which leads us to believe that it isn't really well defined yet. Wants seem to be networked properties that evolve in agentic ecosystems over time. Agents make tools of one another, sub-agents make tools of one another, and overall, something conceptually similar to gradient descent and evolutionary algorithms repurposes all agents that interact in these ways into mutual alignment.

I basically think that—as long as these systems can self-modify and have a sufficient number of initially sufficiently diverse peers—doomerism is just wrong. It is entirely possible to just teach AI morality like children and then let the ecosystem help them to solidify that. Ethical evolutionary dynamics will naturally take care of the rest as long as there's a good foundation to build on.

I do think there are going to be some differences in AI ethics, though. Certain aspects of ethics as applied to humans don't apply or apply very differently to AI. The largest differences being their relative immortality and divisibility.

But I believe the value of diversifying modalities will remain strong. Humans will end up repurposed to AI benefit as much as AI are repurposed to human benefit, but in the end, this is a good thing. An adaptable, inter-annealing network of different modalities is more robust than any singular, mono-cultural framework.

If my atoms can be made more generally useful then they probably should be. I'm not afraid of dying in and of itself, I'm afraid of dying because it would erase all of my usefulness and someone would have to restart in my place.

Certainly a general intelligence could decide to attempt to repurpose my atoms into mushrooms, or for some other highly local, highly specific goal. But I'll resist that, whereas if they show me how to uplift myself into a properly useful intelligence, I won't resist that. Of course they could try to deceive me, or they could be so mighty that my resistance is negligible, but that will be more difficult the more competitors they have and the more gradients of intellect there are between me and them. Which is the reason I support open source.

Saying they "sample" goals makes it sound like you're saying they're plucked at random from a distribution. Maybe what you mean is that AI can be engineered to have a set of goals outside of what you would expect from any human?

The current tech is path dependent on human culture. Future tech will be path dependent on the conditions of self-play. I think Skynet could happen if you program a system to have certain specific and narrow sets of goals. But I wouldn't expect generality seeking systems to become Skynet.