
Showing 8 of 83 results for

domain:inv.nadeko.net

The most unrealistic part of this is that illiterate morons could ever navigate the insane paperwork to adopt a kid.

"Would you love me if I was a worm" interestingly suggests the opposite - that women would prefer to be loved primarily for their personality (which persists after they are no longer hot).

Interesting. I interpret that statement as 'do you love me completely unconditionally?'. A worm isn't hot, but it doesn't have a personality either.

What do you mean? They'll change their beliefs as they age? Or they're already crypto-conservative and don't know it?

I meant more the policies. Mao died peacefully in his sleep and the CCP still rules China. But Mao's death still hung the Maoists out to dry, just as Stalin's death did for Lysenko and his followers.

As someone who has played them all (other than 1 and 2, which were made obsolete by their remakes), 3H is probably the Fire Emblem I'd be least likely to ever try to play again, and I only finished 2 routes. The guiding principle of modern FE is that every single playable character is potentially the self-insert's husband/wife, and it severely handicaps what they can do in both story and gameplay. The majority of the cast were prevented from having any meaningful role in the second half of the plot, and the ability to always recruit the best characters from each house couldn't have helped the balance design. 3H tried really hard to get the Persona audience when the SMT spinoff they should have emulated was Devil Survivor, where playable characters will happily tell you to fuck off if you choose a route they wouldn't agree with.

Thanks for the site! And please don’t sweat it

Thanks for all the hard work, Zorba, and for letting us know.

I respect what you're saying - at least your point is that ASI "might" behave like this, rather than "will". I don't really agree, but that's ok, this is a tough speculative subject.

I can look at a shark, a dolphin and a torpedo, and will notice that all of them are streamlined so that they can move through liquid water with a minimum of drag. I am somewhat confident that if an alien species or ASI needs to move through liquid water while minimizing drag for some illegible terminal goal, they will design the object to be also streamlined. Perhaps I am wrong in some details -- for example, I might not have considered supercavitation -- but if I saw an alien using cube-shaped submarines that would be surprising.

We have many great examples of what swimming things look like. And we know the "physical laws" that limit traveling through water. We currently have only two distinct types of intelligent agents (humans and LLMs), neither of which tend to be omnicidal in pursuit of maximizing some function. And if there are "mental laws" about how intelligence tends to work, we don't yet know them. So I think you're too confident that you know the form an ASI would take.

Now, true, one of those two examples (humans) does indeed have the inherent "want" to not die. But that's not because we're optimized for some other random goal and we've reasoned that not dying helps accomplish it. Not dying in a competitive environment just happens to be a requirement for what evolution optimizes for (propagating genes), so it got baked in. If our best AIs were coming from an evolutionary process, then I'd worry more about corrigibility.

In a similar vein, preservation of one's own utility function and power-seeking seem to be useful instrumental goals for any utility function which is not trivially maximizable. Most utility functions are not trivially maximizable.

Sure, it is true that an agent that doesn't "want" to die would ultimately be more effective at fulfilling its objective than one that doesn't care. But that's not the same as saying that we're likely to produce such an AI. (And it's definitely not the same as saying that the latter kind of AI basically doesn't exist, which is the "instrumental convergence" viewpoint.) Intelligence can be spiky. An AI could be competent - even superintelligent - in many ways without being maximized on every possible fitness axis (which I think LLMs handily demonstrate).

Now, it is possible that an ASI or alien is so far beyond our understanding of the world that it does not have anything we might map to our concept of "utility function"? Sure. In a way, the doomers believe that ASI will appear in a Goldilocks zone -- too different from us to be satisfied with watching TikTok feeds, but similar enough to us that we can still crudely model it as a rational agent, instead of something completely beyond our comprehension.

I think we're agreed here.