
Small-Scale Question Sunday for June 9, 2024

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


So @FCfromSSC, you've stated that you consider a lack of aliens, a lack of AGI, and a lack of Read/Write consciousness-upload ability to be proof that humans are divine and that God exists. If we find alien life, create AGI, and can scan a human brain and make a copy, would that be proof for you that God doesn't exist? Would any of those events change your mind?

He explicitly stated that if we could Read/Write minds then he’d change his mind.

Demonstrate mind reading and mind control, and I'll agree that Determinism appears to be correct. In the meantime, I'll continue to point out that confident assertions are not evidence.

I don't see that specific statement in there. Interesting discussion though. I think a more accurate phrasing would be:

If Free Will truly does not exist, then given sufficiently detailed information about an individual's brain, it should be possible to predict with 100% accuracy everything that person would think, say, and do, and this should hold for any individual you might choose.

The ability to read and write minds does not necessarily prove determinism or disprove free will. It does seem likely, though, that if we were ever able to do such a thing, the details of how that process worked would give us considerable insight into those subjects. Right now we can still say that free will might not really exist and that we simply lack the technology to gather detailed enough information about anyone's brain to fully predict their behavior. If we were able to reliably read and write minds, it would be very tough to claim we just didn't have sufficient information. At that point, either we would be able to predict behavior, proving the determinists right, or we would still be unable to fully predict it, which would prove that free will actually does exist and the determinists are wrong.

I feel obligated to also note that pure determinism leads to some rather dark conclusions. If it were possible to scientifically prove that a person would 100% only do negative and harmful things for the rest of their life and it was not possible to change that, what else would there be to do except eliminate that person?

If it were possible to scientifically prove that a person would 100% only do negative and harmful things for the rest of their life and it was not possible to change that, what else would there be to do except eliminate that person?

Well, we know that this isn't a possibility, right? The Heisenberg uncertainty principle prevents us from modelling anything with that degree of accuracy even in theory. Even if it were possible to take a fully scanned human and simulate their future actions, it's not possible to fully scan that human in the first place.
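For reference, the standard position-momentum form of the principle being invoked here (not quoted from the thread, just the textbook statement) is:

$$
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
$$

where $\Delta x$ and $\Delta p$ are the standard deviations of a particle's position and momentum and $\hbar$ is the reduced Planck constant; no measurement scheme can push both below that bound at the same time.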

If we did understand people that well, though, I think the correct approach would be to offer the current person a choice between an ineffectual future, where they retain their current values but lose the ability to harm others; and a different one, where their personality is modified just enough to not be deemed evil. This wouldn't even necessarily need physical modification--I doubt many scanned humans would remain fully resistant to an infinite variety of simulated therapies.

Well, I personally agree that free will exists, so that is not a possibility. But several people in the linked thread were arguing quite vigorously that free will does not exist and that individual behavior is therefore 100% deterministic. In addition to the more direct philosophical arguments that mostly took place in that thread, I feel I should also point out what the natural consequences would be if that were true.

If that were true, we would be able to identify numerous specific people for whom we would have actual scientific proof that they will only ever contribute to society in highly negative ways, and we would have to decide what to do with those people. Would we eliminate them? We could lock them away for life, but that's expensive; should we bother if we know they will never reform? Also, the criminal justice systems of most of the first world lock people away for a predetermined length of time once we prove they did a specific bad thing. It's rather a departure to say: our mind-scanning computer says you'll always be bad, so we're going to lock you away for life, or do actual brain editing on you to make you act right. Definitely can't see that one going wrong in any way.

we would be able to identify numerous specific people for whom we would have actual scientific proof that they will only ever contribute to society in highly negative ways, and we would have to decide what to do with those people.

Sorry to fight the hypothetical, but I really doubt many people like this exist. Let's say you possess a computer capable of simulating the entire universe. Figuring out the future of a specific bad person based on simulations is only step one. After that, there is a practically infinite number of variations to simulate. What happens if he gets a concussion? If he gets put on this medication (which we can also perfectly simulate)? If all of his family members come back to life and tell him in unison to shape up?

This is godlike power we're talking about. The ability to simulate a human means the ability to simulate that human's response to any possible molecule, or combination of molecules, introduced in any manner. If there is even a conceptually possible medication that might help this person, then this computer--which we've established can simulate the universe--will be able to find it. Ditto for any combination of events, up to and including brainwashing and wholly replacing their brain with a new one.

The interesting question to me is not whether these people can be "saved" and made into productive citizens; in my opinion that's a foregone conclusion. The question is at what point this turns from helping someone into forcing them, against their will, into an outcome their previous self would consider undesirable, and whether doing so is nevertheless moral. I think not--you may as well create new people rather than drastically modifying existing ones, and do with the existing ones as you will.

Would we eliminate them? We could lock them away for life, but that's expensive; should we bother if we know they will never reform? Also, the criminal justice systems of most of the first world lock people away for a predetermined length of time once we prove they did a specific bad thing. It's rather a departure to say: our mind-scanning computer says you'll always be bad, so we're going to lock you away for life, or do actual brain editing on you to make you act right. Definitely can't see that one going wrong in any way.

To engage with the actual question you're asking--what do we do with people who are just innately bad?--I definitely think locking people up is fine morally. These simulations are supposed to be infallible, after all. If you feel like you need some justification to lock them up, just use the simulation to contrive a scenario where they do a non-negligible amount of harm but don't actually kill anyone, and then lock them up after they've done it.