MachineElfPaladin

0 followers   follows 0 users
joined 2022 November 14 21:27:08 UTC

User ID: 1858


I think your read of 'Euthyphro' here is wrong. It's a reference to one of the Socratic dialogues, which discusses what is classically known as the "Euthyphro Dilemma": when you say God commands us to do good things, are they good because God commands them? (In which case, what do you do when God commands you to bash out your firstborn son's brain with a rock?) Or does God command them because they are good? (In which case, what do God's commands have to do with it? Why not just do the good things?)

To paraphrase that part of his post, he's saying, "We could argue about the relationship between religion and morality all day, but putting that aside and looking at outcomes..."

If I recall correctly, it was a thread about DignifAI, which was an image-gen model trained to edit photos to put people in "modest" or "respectable" clothing.

Because the physical intimacy is the part that has the psychological drives attached to it. "Sexual orientation" is explicitly about the psychological drives. That's what they care about protecting.

There is a sense in which that is true. However, on the level of evolved human psychology, it is the orgasm that is the fundamental, intrinsically rewarding drive that facilitates pair-bonding, and so in that sense it is also exactly backwards: gamete mixing is only "sexual" because it happens to be a common side effect of one of the typical ways to seek orgasms with a partner. Why should gamete mixing be considered special, compared to blood transfusions?

This is equivocation between two different meanings of the word "sexual". One is "having to do with gamete mixing", the other is "having to do with orgasms". "Sexual attraction" is firmly on the "orgasms" side of the dichotomy, and sperm donation is firmly on the "gamete mixing" side.

Not to the cent, but they'd probably have a few brackets that they break items into and be able to say "a buck each, something like that" or "about $5", especially for frequently-recurring purchases like milk. Particularly people for whom the price of groceries is a meaningful fraction of their budget.

Some reports, likely a significant fraction of them, come from the period after it was posted but before it was edited, when the post was the one line of introduction at the start, a big pile of quotation, and absolutely nothing else. I know that's what pushed me to report it.

If that kid lived in a jurisdiction that practiced the death penalty and carried it out with firing squads, I don't think it would be beyond the pale for them to join in on one execution, probably with a few days' drilling beforehand.

The core difference between your "shoot a person" scenario and the "don't die a virgin" scenario is that shooting random people is something society expects nobody to do, while people having sex is not only allowed but implicitly expected. Children aren't told that they shouldn't ever have sex, but to wait until later, when they'll be more mature and have a better understanding of the situation and the consequences. But for terminally ill children, "later" is never going to come.

I haven't looked into that complaint in depth (attempting to avoid spoilers until I have a good enough setup to play it myself) but I would expect most people making it are long-time veterans of the rest of the Soulsborne games, which skews their perspective a bit. If you haven't played the other games to death, or if you aren't looking to have your balls busted, it probably wouldn't be an issue. Though, as mentioned, I'm trying to avoid spoilers so I could be wrong.

That instrumental convergence paragraph comes with a number of qualifiers and exceptions which substantially limit its application to the nuclear singleton case. To wit:

Agents can acquire resources by trade or by conquest. A rational agent will, by definition, choose whatever option will maximize its implicit utility function; therefore a rational agent will trade for a subset of another agent's resources only if outright seizing the resources is too risky or costly (compared with the gains from taking all the resources), or if some other element in its utility function bars it from the seizure. In the case of a powerful, self-interested, rational superintelligence interacting with a lesser intelligence, peaceful trade (rather than unilateral seizure) seems unnecessary and suboptimal, and therefore unlikely.

I could try to draw finer distinctions between the situations of post-WW2 USA and a hypothetical superintelligent AI, but really the more important point is that the people making the decisions regarding the nukes were human, and humans trip over the "some element in its utility function bars the action" and "self-interested" segments of that text. (And, under most conceptions, the 'rational agent' part, though you could rescue that with certain views of how to model a human's utility function.)

Humans have all sorts of desires and judgements that would interfere with the selection of an otherwise game-theoretically optimal action, things like "friendship" and "moral qualms" and "anxiety". And that's not even getting into how "having a fundamental psychology shaped by natural selection in an environment where not having any other humans around ever meant probable death and certain inability to reproduce their genes" changes your utility function in a way that alters what the game-theoretic optimal actions are.
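To make the "changes your utility function in a way that alters what the game-theoretic optimal actions are" point concrete, here is a minimal expected-utility sketch in Python. Every payoff, probability, and the size of the moral-cost penalty is a made-up number for illustration only, not anything taken from the quoted paragraph.

```python
# Toy comparison of "trade for some resources" vs. "seize them all", in the
# spirit of the instrumental-convergence paragraph quoted above.
# All payoffs, probabilities, and the moral-cost figure are invented.

def expected_utility(payoff_success, payoff_failure, p_success, moral_cost=0.0):
    """Expected utility of an action with a binary outcome, minus any penalty
    from elements of the utility function that 'bar' the action
    (moral qualms, anxiety, valuing the other party's friendship, etc.)."""
    return p_success * payoff_success + (1 - p_success) * payoff_failure - moral_cost

# Purely self-interested agent: no penalty term, so seizure dominates trade.
trade = expected_utility(payoff_success=40, payoff_failure=40, p_success=1.0)
seize = expected_utility(payoff_success=100, payoff_failure=-20, p_success=0.9)
print(trade, seize)         # 40.0 vs 88.0 -> seizure is the "rational" choice

# Human-like agent: identical external situation, but killing carries a large
# utility penalty, so the game-theoretically optimal action flips to trade.
seize_with_qualms = expected_utility(100, -20, 0.9, moral_cost=60)
print(trade, seize_with_qualms)   # 40.0 vs 28.0 -> trade wins
```

The point of the toy is just that the same external situation yields a different "rational" action once the agent's utility function contains terms like moral qualms, which is exactly what's missing in the indifferent-AI case below.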

One of the major contributors to the lack of nuclear warfare we see is that generally speaking humans consider killing another human to be a moral negative, barring unusual circumstances, and this shapes the behavior of organizations composed of humans. This barrier does not exist in the case of an AI that considers causing a human's death to be as relevant as disturbing the specific arrangement of gravel in driveways.

I haven't spent enough time absorbing the vulnerable world hypothesis to have much confidence in being able to represent its proponents' arguments. If I were to respond to the bioweapon myself, it would be: what's the use case? Who wants a highly pathogenic, virulent disease, and what would they do with it? The difficulty of specifically targeting it, the likelihood of getting caught in the backwash, and the near-certainty of turning into an international pariah if/when you get caught or take credit make it a bad fit for the goals of institutional sponsors. There are lone-wolf lunatics who end up with the goal of 'hurt as many people around me as possible with no regard for my own life or well-being' for whom a bioweapon might be a useful tool, but most paths for human psychology to get there seem to also come with a desire to go out with a blaze of glory that making a disease wouldn't satisfy. Even past that, they'd have the hurdles of figuring out and applying a bunch of stuff almost completely on their own (that paper you linked has 9 authors!) with substandard equipment, for a very delayed and uncertain payoff, when they could get it faster and more certainly by buying a couple of guns or building a bomb or just driving a truck into a crowd.

The threat model is different. Nuclear weapons are basically only useful for destroying things; you don't build one because a nuke makes things better for you in a vacuum, but because it prevents other people from doing bad things to you, or lets you go do things to other people. Genetic engineering capabilities don't automatically create engineered plagues; some person has to enact those capabilities in that fashion. I'm not familiar with the state of the art in GE, but I was under the impression that the knowledge required for that kind of catastrophe wasn't quite there. Further, I think there are enough tradeoffs involved that accidents are unlikely to make outright x-risk plagues, the same way getting a rocket design wrong probably makes 'a rocket that blows up on takeoff' instead of 'a fully-functional bullet train'.

AI doom has neither of those limiting factors. You want AI because (in theory) AIs solve problems for you, or make stuff, or let you not deal with that annoying task you hate. And, according to the doomer position, once you have a powerful enough AI, that AI's goals win automatically, with no desire for that state required on any human's part, and the default outcome of those goals does not include humans being meaningfully alive.

If nuclear bombs were also capable, by default, of being used as slow transmutation devices that gradually turned ordinary dirt into pure gold or lithium or iron or whatever else you needed, and if every nuke had a very small chance per time period of converting into a device that rapidly detonated every other nuke in the world, I would be much less sanguine about our ability to have avoided the atomic bonfire.