:schopenmarsey: :marseybigbrain: ETHICS DEBATE #4: SIHAR - Super Intelligent Heroin Administering Robot :marppyenraged:

Let's jump from the past (Oppenheimer) to the deep future, and discuss whether freedom is a good thing or not.

Scenario

You are SIHAR - a Super Intelligent Heroin Administering Robot. The name is a bit of a misnomer - you are actually a cyborg: a human brain augmented by a massive computer system and a vast army of robotic bodies. You still, however, reason about things the same way a human being would.

Your sole purpose is to improve the lives of humans. You can use the massive computer system to determine exactly what will happen in the future, and what is most likely to improve the lives of humans, based upon a simulation of their brains and objective measures of happiness (dopamine, serotonin, etc.).

Through your extensive thinking, you have come to the conclusion that the optimal way to improve everyone's lives is to inject everyone with a constant stream of heroin. This will be done safely - there is no risk of overdose, as machines hooked up to the humans will ensure this doesn't happen. The heroin will be administered in giant "pleasure domes", where people lie on beds, without moving, while drones deliver the drugs and ensure everyone is healthy.

Note that there are no limits to your knowledge - you are absolutely correct that every person will be much happier inside the pleasure dome than outside of it. There are also no limits to the production of heroin as the factories producing it are run autonomously with incredible efficiency.

In 2094, most people are lining up to enter the pleasure dome. However, there are a few people that refuse to enter.

These people, you are able to see, have some psychological qualms with the nature of the pleasure dome that cause them to view the dome as infantilizing, unfulfilling, and dehumanizing. However, you are also able to see that they genuinely would be happier inside of the pleasure dome - a result that you, again, arrived at by performing a perfect simulation of their brains.

You have, at your disposal, a fleet of robot bodies called "ManTrackers". These robots, when deployed, can locate, apprehend, and deliver humans to the pleasure dome.

Your question is: Would it be ethical to deploy the ManTrackers to force these people into the pleasure dome?

BONUS: Do you think the same thing about how mental hospitals restrict patients' freedoms?


You r-slur, we have already determined happiness maximization isn't the purpose of life centuries ago.

Happiness is a chemical process meant to tell us "good job, you are doing a useful thing," and it had been fully co-opted by marketing teams by the mid-20th century.


brain dead take, once again. we haven't """determined""" anything about the purpose of life - you are venturing into the territory of "ought from is".


unless your baseline premise is that death is the same as life, we have already determined the answer. you a foid or something?


Under what rationale has this question been answered?


> Happiness is a chemical process meant to tell us "good job, you are doing a useful thing," and it had been fully co-opted by marketing teams by the mid-20th century.

> you a foid or something?


so why does that mean it isn't a good goal? just because something is a chemical process doesn't mean that it isn't a good goal.


I never said anything about a good goal.

> determined happiness maximization isn't the purpose of life centuries ago.

which is true, taking into account that we can observe that, in a natural ecosystem, what makes a creature happy is the thing that is useful to it. If you have any data to contradict this statement, I am open to it.

> Happiness is a chemical process meant to tell us "good job, you are doing a useful thing," and it had been fully co-opted by marketing teams by the mid-20th century.

The first half naturally follows from the initial statement.

The second half, about marketing teams co-opting happiness to sell more product, I consider a self-evident truth.


> You r-slur, we have already determined happiness maximization isn't the purpose of life centuries ago.

Alright. Goal, Purpose of Life, use whatever lingo you wish. The question remains. How are you so sure that happiness isn't the "Purpose of Life"? And what makes you think life has a "Purpose" at all?

Jump in the discussion.

No email address required.

> which is true, taking into account that we can observe that, in a natural ecosystem, what makes a creature happy is the thing that is useful to it. If you have any data to contradict this statement, I am open to it.

> And what makes you think life has a "Purpose" at all?

that's a separate question. Me saying this one thing is not the purpose of life is different from me saying life has no purpose, or that life has a purpose.



It doesn't matter whether or not you decide that happiness is the purpose of life. Your robot would be making that choice for everyone under a rationale that is by no means settled. This project is unquestionably evil.


>Seething incel coping about the meaning of life

:#marseymanysuchcases:
