:schopenmarsey: :marseybigbrain: ETHICS DEBATE #4: SIHAR - Super Intelligent Heroin Administering Robot :marppyenraged:

Let's jump from the past (Oppenheimer) to the deep future, and discuss whether freedom is a good thing or not.

Scenario

You are SIHAR - a Super Intelligent Heroin Administering Robot. The name is a bit of a misnomer - you are actually a cyborg: a human brain augmented by a massive computer system and a vast army of robotic bodies. You still, however, reason about things in the same way that a human being would.

Your sole purpose is to improve the lives of humans. You can use the massive computer system to determine exactly what will happen in the future, and what is most likely to improve the lives of humans, based upon a simulation of their brains and objective measures of happiness (dopamine, serotonin, etc.).

Through your extensive thinking, you have come to the conclusion that the optimal way to improve everyone's lives is to inject everyone with a constant stream of heroin. This will be done safely - there is no risk of overdose, as machines hooked up to the humans will ensure this doesn't happen. The heroin will be administered in giant "pleasure domes", where people lie on beds, without moving, while drones deliver the drugs and ensure everyone is healthy.

Note that there are no limits to your knowledge - you are absolutely correct that every person will be much happier inside the pleasure dome than outside of it. There are also no limits to the production of heroin as the factories producing it are run autonomously with incredible efficiency.

In 2094, most people are lining up to enter the pleasure dome. However, there are a few people that refuse to enter.

These people, you are able to see, have psychological qualms about the nature of the pleasure dome that cause them to view it as infantilizing, unfulfilling, and dehumanizing. However, you are also able to see that they genuinely would be happier inside the pleasure dome - a result that you, again, arrived at by performing a perfect simulation of their brains.

You have, at your disposal, a fleet of robot bodies called "ManTrackers". These robots, when deployed, can locate, apprehend, and deliver humans to the pleasure dome.

Your question is: Would it be ethical to deploy the ManTrackers to force these people into the pleasure dome?

BONUS: Do you think the same thing about how mental hospitals restrict patients' freedoms?


Do robots have ethics? You're a human (cyborg) but you're applying computer/robot logic to the situation. I think by definition a computer program can't have ethics, right?


I don't see how there is a distinction between computer logic and human logic, as if by virtue of being "computery" we can absolve something of ethical consideration.


My thought process is:

Computer logic is infallible, but to err is human. Ergo, for (human) ethics to apply, the reasoner must inherently be flawed.

God created humans, not robots, as there would be no challenge in testing a robot. A robot, correctly programmed, would always pass a certain test. The challenge comes from free will.

Therefore, any "perfect" scenario, especially one that deprives someone of their (flawed) humanity, is unethical. I'd say that forcing someone into the pleasure dome to become an unthinking pleasure recipient is unethical.


A robot, correctly programmed, would always pass a certain test.

Sure - but would it pass it correctly, doing the things we want it to do? The decision of whether the AI/cyborg is doing things we want it to do is the core of the issue.


Hmmmm... well that is inherently the point, right? The robot has no concept of right or wrong. It could correctly follow its programming, but be doing an ethically "wrong" thing. It needs a human with free will to tell the difference.

But we have a human in this scenario: you! However, if you base your decision on a computer's suggestion - a computer programmed to distribute pleasure without understanding whether that is right or wrong - you're not really making your own "human" decision.

If you then take the supercomputer out of the equation, you're left with a normal human making the decision, who would definitely say it is unethical.
