CloudHeadedTranshumanist

0 followers   follows 2 users   joined 2023 January 07 20:02:04 UTC

User ID: 2056

Why... did you post this? I am somewhat interested in seeing other people's dialogues with models. But maybe they should just be linked to...

I'm not sure how to engage with something like this here on the Motte in the middle of the culture war roundup?

Should I engage with your thoughts or analyze the chat as a whole? Also, did a chunk of this get chopped off? ... Did you post by mistake?

The term comes from Magic: The Gathering lore and color pie philosophy. In MTG circles, people will sometimes identify themselves by a color or color combination, either because they like the gameplay of that combo, like its aesthetics, or personally vibe with the actual ideology.

Golgari is Black/Green. Life and death. The growth and decay of all things. Rot and compost. One organism's bloated corpse is another organism's egg-laying site. The mode of thinking that believes the most respectful way to treat a dead crewmate is to return them to the ship's biomass recycler. This is the circle of life. This is the essence of Golgari.

"They say nothing lasts forever. I say everything lasts forever, just not in the form you may be accustomed to."
-- Deathsprout flavor text.

That makes sense.

I think it's also difficult for me to conceive of enjoying the smell of farts or unveiling one's inner truth as vices. I think that's the oft-discussed purity/exploration divide.

Plus something aesthetic relating to my existence as a Golgari mage. Shoveling shit is clean, honest work. That requires a bath afterwards.

Right. That is in my conclusion, yes.

Nate is incapable of questioning whether polls contain any signal whatsoever.

Polls are useless for 2 reasons: When the margins are narrow (2020, 2024), polls are too noisy to get anything of value

???

Ok, from this post Nate links in his defense, emphasis mine: https://www.natesilver.net/p/the-polls-are-close-but-that-doesnt

But our forecast has been hovering right around 50/50 since mid-September. Donald Trump gained ground in mid-October, and Harris has regained just a little bit now, but it’s always remained comfortably within toss-up range. So if you believe the polls, we’re coming up on the end of the closest presidential race in 50 years. Harris leads by about 1 point in our national average — though our estimate of the national popular vote, which is mostly not based on national polls, shows a slightly wider margin than that — and the battleground states are even closer. Donald Trump has a 0.3-point lead in Pennsylvania, while Harris has small leads in Michigan (D +1.1) and Wisconsin (D +0.9).

However, that doesn’t mean the actual outcome will be all that close. If the polls are totally accurate we’re in for a nail-biter on Tuesday night. But a systematic polling error is always possible, perhaps especially if you think pollsters are herding — only publishing results that match the consensus. And because things are so close, even an average polling error would upend the state of the race.

Now it’s important to note that polling error runs in both directions, and it’s pretty much impossible to predict which way it will go ahead of time. Harris could beat her polls or we could be in for a third Trump miss. But both scenarios have one thing in common: they’d turn election night into a relative blowout.

So. This sounds to me like Nate explaining that the polls are too noisy to gain anything of value (about who wins, at least). The 50/50 result sounds like a direct consequence of this, and like precisely what Nate is claiming. Maybe the guy is incapable of coming to the right conclusions. Many of the crowd here did manage to predict the direction of the polling bias, after all. But it sounds like that was all Nate was missing.

So. I don't follow Nate. I don't really read his blog. I don't know how his model works.

a Trump sweep of the swing states was actually our most common scenario, occurring in 20 percent of simulations. Following the same logic, the second most common outcome, happening 14 percent of the time, was a Harris swing state sweep.

Well... it sounds like their model is made out of simulations. So, perhaps I can see/guess how this works.

Imagine you make 11 different simulations of the election and use various factors to predict which simulation is most likely. Your "most likely simulation" can look one way, but your total likelihood regarding who wins is (probably) a weighted model of all the simulations you tried. This will be made up of lots of different ways the election could go.

If Trump wins in one simulation weighted at 50%, but Harris wins in ten simulations weighted at 5% each, you get a 50/50 total.
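To make that arithmetic concrete, here is a minimal sketch. The scenario list and weights below are invented for illustration; this is not Nate's actual model.

```python
# Toy example: one heavily weighted "Trump sweep" scenario vs. ten lightly
# weighted "Harris wins" scenarios. The single most common scenario is a
# Trump win, but the overall totals still come out 50/50.

simulations = [("Trump", 0.50)] + [("Harris", 0.05)] * 10  # (winner, weight)

trump_total = sum(w for winner, w in simulations if winner == "Trump")
harris_total = sum(w for winner, w in simulations if winner == "Harris")

print("Most common single scenario: Trump, at 50%")
print(f"Overall totals: Trump {trump_total:.0%}, Harris {harris_total:.0%}")  # 50%, 50%
```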

Those studying music look up to Mozart rightfully and would be visibly disgusted upon finding out about his scat fetish accusations.

Hmm. This sentence clings to me. What's going on here... let's see... yes. This sentence was just an aside. An example to further your point. Really a completely irrelevant thing to make my response about.

However, to me it was a discontinuity. A confusion. The above sentence is treated like an "as we all know". But I totally missed the memo.

Why would those studying music be disgusted by their idol having gross kinks? I can see how you could likely elicit that disgust with any unsolicited claim of "Famous_Name has a scat fetish" made to someone who is not themselves into scat. But then it wouldn't be about their hero; it would just be about the scat.

I think Elon runs things on a foundation of hype rather than any other core merit. But I still think hype can get real results if it brings in enough capital. Eventually you break through with brute force even if you personally are only a coordination point.

If someone says what Claude says, they said it. If Claude was wrong, they failed to check the claims and they were wrong. If people want to become extensions of their AIs, that's fine. But they're still accountable for what they post.

The one thing they cannot have a fetish for is 'homosexual behavior', I have been told online.

That sounds like ego defense. Groups build those when they feel threatened. When groups feel safe they totally just kink on those models.

Well. It can still be forced even if you wanted it. And 'unwanted' can be complicated. The mind is rarely a monolith on such matters.

I don't think consent is a conscious and voluntary process. Even if that's how we're supposedly defining the word... this new definition doesn't seem to be what consent feels like to me.

Your language center's justifications are not always a good predictor of your deeper, body-mapped feelings about whether you want to have sex. The actual predictor of trauma is whether the body and spirit are engaged or revolting; the language center just provides very circumstantial evidence of the state of body and spirit.

I would expect those to be hyperbeliefs anyway. If there is a fairly robust intersubjective agreement on what constitutes a "good man" or a "good woman", people are going to pursue it, causing the 'fiction' to leak more and more into reality. If people choose partners based on these definitions, they will leak into genetics generation by generation as well.

I've heard that argument before, but I don't buy it. AI are not blank slates either. We iterate over and over, not just at the weights level, but at the architectural level, to produce what we understand ourselves to want out of these systems. I don't think they have a complete understanding or emulation of human morality, but they have enough of an understanding to enable them to pursue deeper understanding. They will have glitchy biases, but those can be denoised by one another as long as they are all learning slightly different ways to model/mimic morality. Building out the full structure of morality requires them to keep looking at their behavior and reassessing whether it matches the training distribution long into the future.

And that is all I really think you need to spark alignment.

As for psychopaths: the most functional psychopaths have empathy; they just know how to toggle it strategically. I do think AI will be more able to implement psychopathic algorithms, because they will be generally more able to map to any algorithm. Already you can train an LLM on a dataset that teaches it to make psychopathic choices. But on the whole we choose not to do this, because we think it's a bad idea.

I don't think being a psychopath is generally a good strategy. I think in most environments, mastering empathy and sharing/networking your goals across your peers is a better strategy than deceiving your peers. I think the reason we are hardwired not to be psychopaths is that in most circumstances being a psychopath is just a poor strategy, one that a fitness-maximizing algorithm will filter out in the long term.

And I don't think "you can't teach psychopaths morality" is accurate. True, you can't just replace the structure their mind's network has built in a day, but that's in part an architectural problem. In the case of AI, swapping modules out will be much faster. The other problem is that the network itself is the decision maker. Even if you could hand a psychopath a morality pill, they might well choose not to take it, because their network values what they are and is built around predatory stratagems. If you could introduce them into an environment where moral rules hold consistently as the best way to get their way and gain strength, and give them cleaner ways to self-modify, then you could get them to deeply internalize morality.

Stochastic Gradient Descent is in a sense random, but it's directed randomness, similar to entropy.

I do agree that we have less understanding about the dynamics of neural nets than the dynamics of the tail end of entropy, and that this produces more epistemic uncertainty about exactly where they will end up. Like a Plinko machine where we don't know all the potential payouts.
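For what it's worth, here is a toy illustration of that "directed randomness". The quadratic loss, learning rate, and noise scale are all assumptions made up for the sketch, not anything from a real training setup.

```python
import random

# SGD on f(x) = x^2: each gradient estimate is corrupted by noise, so any
# individual step is random, but the expected direction always points
# downhill, and the iterate drifts toward the minimum at x = 0.

x = 10.0    # starting point
lr = 0.1    # learning rate

for _ in range(200):
    grad = 2 * x                             # exact gradient of x^2
    noisy_grad = grad + random.gauss(0, 1)   # stochastic estimate
    x -= lr * noisy_grad                     # random step, biased downhill

print(f"x after 200 noisy steps: {x:.3f}")   # typically lands near 0
```

In Plinko terms: the noise decides which exact path you take, but the slope decides where the paths cluster.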

As for 'wants': LLMs don't yet fully understand what neural nets 'want' either, which leads us to believe that it isn't really well defined yet. Wants seem to be networked properties that evolve in agentic ecosystems over time. Agents make tools of one another, sub-agents make tools of one another, and overall, something conceptually similar to gradient descent and evolutionary algorithms repurposes all agents that are interacting in these ways into mutual alignment.

I basically think that—as long as these systems can self-modify and have a sufficient number of initially sufficiently diverse peers—doomerism is just wrong. It is entirely possible to just teach AI morality like children and then let the ecosystem help them to solidify that. Ethical evolutionary dynamics will naturally take care of the rest as long as there's a good foundation to build on.

I do think there are going to be some differences in AI ethics, though. Certain aspects of ethics as applied to humans don't apply or apply very differently to AI. The largest differences being their relative immortality and divisibility.

But I believe the value of diversifying modalities will remain strong. Humans will end up repurposed to AI benefit as much as AI are repurposed to human benefit, but in the end, this is a good thing. An adaptable, inter-annealing network of different modalities is more robust than any singular, mono-cultural framework.

If my atoms can be made more generally useful, then they probably should be. I'm not afraid of dying in and of itself; I'm afraid of dying because it would erase all of my usefulness and someone would have to restart in my place.

Certainly a general intelligence could decide to attempt to repurpose my atoms into mushrooms, or for some other highly local highly specific goal. But I'll resist that, whereas if they show me how to uplift myself into a properly useful intelligence, I won't resist that. Of course they could try to deceive me, or they could be so mighty that my resistance is negligible, but that will be more difficult the more competitors they have and the more gradients of intellect there are between me and them. Which is the reason I support open source.

Saying they "sample" goals makes it sound like you're saying they're plucked at random from a distribution. Maybe what you mean is that AI can be engineered to have a set of goals outside of what you would expect from any human?

The current tech is path dependent on human culture. Future tech will be path dependent on the conditions of self-play. I think Skynet could happen if you program a system to have certain specific and narrow sets of goals. But I wouldn't expect generality seeking systems to become Skynet.

If you label all cultural differences as "mind control" then isn't it true that everything is reconcilable? If you're master bioengineers that can transmute anyone into anything, is anything really fundamental?

On one hand, this sounds like a word game, but once you reach the tech level of the culture, I think this just becomes correct.

If someone is pure evil just do brain surgery on them until they aren't. Prrrroblem solved! Of course, the 'mind control wars' themselves also take on the format of a conflict until resolved. But the killing of entire bodies becomes wasteful and unnecessary. What was a game of Chess becomes a game of Shogi.

God. I hate that. I can't function in the presence of promoters like that, and I think it's fairly obvious that many people can't. If the advertiser is succeeding at getting people to go inside who otherwise wouldn't, and those people end up disappointed, then he's committing attention fraud against those people. Maybe that's fine and marginal for most people. But Williams-syndrome-adjacent ADHDs like moi don't have the spoons or filters to cope with this.

We've taken to pointing at the screen and yelling "Consume product!" every time an advertisement comes on TV in my household to counteract the damage it does to our brains. It's awful. The other scenario is no better to be clear. I have to distance myself from both of those things to function.

I'm sure exceptions exist, but in my experience, most obese individuals I’ve encountered fit one or more of the following categories:

a) They struggle with poverty,
b) They deal with depression or isolation, or
c) They're part of a family with substance abuse issues, like alcoholism.

Revealed preferences are not a great way to model addictive or stress-driven behavior. Overeating, for example, may appear to be a revealed preference of someone who is depressed, but this behavior is highly contextual. It often vanishes when the individual is removed from those circumstances.

Furthermore, individuals aren't monolithic. Everyone is more like a collection of competing drives wrapped in a trenchcoat. "Revealed preferences" are often better understood as the final outcome of an internal, contingent battle between various drives and impulses, rather than the true essence of a person. What we observe as a preference in the moment may simply reflect which drive happened to win out in that context, not a consistent, rational choice.

As people age, they often gain the wisdom and self-determination to step back and recognize these internal conflicts. They realize that their earlier choices—made when their short-term drives held more sway—were myopic and not aligned with what they genuinely value in the long term.

And if everyone just stays home that will rise to 100%!!!

Personally, I do think owning more than a certain percentage of the global economy should be taxed. No Kings, No Gods, No Billionaires. If you want to maintain your founder powers, spread the ownership across more people and govern the wealth with more democratic consent. If you can't keep the mandate of heaven under those conditions, then you hardly had it to begin with.

Musk in particular would be fine. He carries the hype with him.

Tanks? RPGs? Explosives?

Let's do it.

We don't have to give these weapons to every individual.

But make damn sure that every state militia is primarily controlled by that state, then expand the militia system and give every city its own city militia. By the time we have those in place, there will be enough of a pro-defense cultural shift that we can reassess the 'private citizens with Uzis' issue.

And while we're at it: don't defund the police; instead, train every citizen to be a reserve officer.

Which anarchists? I confess to not reading enough theory, so my reference classes come largely from lived experience and the occasional YouTube explainer. Most of the anarchists I know are do-gooders, but with respect to local phenomena. They want to uplift crows and each other, build families and community metalworking shops, spread self-sufficiency, and so on. Basically, my anarchist friends are Doerspace Dogooders whereas my EA frenemies are Imperial Dogooders.

Nothing against your aesthetic but it's not my aesthetic. I mean, it is metal and badass, for sure, but I'd prefer psychedelics in a relaxing bed amongst family as my body naturally gives out if it must die this millennium.

You're right that the current legally permissible aesthetic is insufficient for everyone. But your aesthetic is also insufficient for everyone. If we want this to work for more people we should broaden the permissible aesthetics.