Science

People Become More Utilitarian When They Face Moral Dilemmas In Virtual Reality 146

First time accepted submitter vrml writes "Critical situations in which participants' actions lead to the death of (virtual) humans have been employed in a study of moral dilemmas that has just appeared in the journal Social Neuroscience. The experiment shows that participants' behavior becomes more utilitarian (that is, they tend to minimize the number of people killed) when they have to make a decision in Virtual Reality rather than in the more traditional setting used in Moral Psychology, in which participants read text descriptions of the critical situations. A video with some of the VR moral dilemmas is available, as is the paper."

This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Thursday January 09, 2014 @04:31PM (#45910469)

    Until you're faced with the choice of saving your sister versus five anonymous others.

    Utilitarianism is false, because no human being can know how to globally maximize the good. They just believe they do, and then use "the end justifies the means" to commit atrocities.

    Our quirky affective behavior is arguably an optimal heuristic in a world where you only have a peep-hole view of the global state of things. For example, in those trolley dilemmas you're _told_ that the trolley is random. But we're hard-wired to believe that nothing is random, which means you have to fight a belief that the trolley was purposefully sent to kill those five individuals. Maybe the lone individual would save the world. In any event, maintaining the status quo (letting the five get killed) is, again, arguably optimal behavior when there is insufficient information to justify doing something else.

  • by Calydor ( 739835 ) on Thursday January 09, 2014 @04:32PM (#45910491)

    He is not merely a witness if he KNOWS that he had the power to prevent the five deaths at the cost of one other. Inaction is itself an action.

  • by elfprince13 ( 1521333 ) on Thursday January 09, 2014 @04:33PM (#45910499) Homepage
    Harm and benefit according to whose definition? Utilitarianism is incredibly subjective.
  • Poorly-designed VR (Score:3, Insightful)

    by Impy the Impiuos Imp ( 442658 ) on Thursday January 09, 2014 @04:34PM (#45910517) Journal

    "Become more utilitarian", i.e. they choose to save more lives — a choice already made by 88% of respondents in a non-VR, simple textual scenario like the trolley-switch problem.

    This is odd, because in most VR scenarios, people seem to want to throw a switch to deliberately divert a trolley away from one person to kill 5 instead, as long as they have a chat line where they can type "lolf49z!"

  • by TrumpetPower! ( 190615 ) <ben@trumpetpower.com> on Thursday January 09, 2014 @04:54PM (#45910745) Homepage

    I can't believe that people still think that these trolley car "thought experiments" are telling them anything novel about human moral instincts.

    All they are is less-visceral variations on Milgram's famous work. An authority figure tells you that you must kill either the hot chick on the left or the ugly fatty on the right, and that you mustn't sound the alarm or call 9-1-1 or anything else. And, just as Milgram found out, virtually everybody goes ahead and does horrific things in such circumstances.

    Just look at the videos in question. The laws and safety regulations violated, and the bad designs of the evil-mad-scientist variety, in each scenario are innumerable. They take it beyond Milgram's use of a white lab coat to establish authority and into psychotic Nazi commander territory. In the real world, the victims wouldn't be anywhere near where they are. If they were, there wouldn't be any operations in progress at the site. If there were, there would be competent operators at the controls, not the amateur being manipulated by the experimenter; and those operators would be well drilled in both standard and emergency procedures that would prevent the disaster or mitigate it if unavoidable -- for example, airline pilots trained to the point of instinct to avoid crashing a doomed plane into a crowded area.

    The proper role of the experimenter's victims ("subjects") is to yell for help, to not fucking touch critical safety infrastructure in the event of a crisis unless instructed to by a competent professional, to render first aid to the best of their abilities once help is on the way, and to assist investigators however possible once the dust has settled.

    Yet, of course, the experimenter is too wrapped up in the evil genius role to permit their victims to even consider anything like that, and instead convinces the victims that they're bad people who'll kill innocents when ordered to. Just as we already knew from Milgram.

    How any of this bullshit makes it past ethics review boards is utterly beyond me.

    Cheers,

    b&

  • by Anonymous Coward on Thursday January 09, 2014 @05:03PM (#45910853)

    The healthy person isn't part of a potentially doomed set unless you harvest his organs.
    You cannot ethically *start* the process of saving lives by unnecessarily killing someone.

    In the train scenario, either 5 people die, or 1 person dies. There is no other option, because there's no way to stop the train in time. Your choice is simply whether to:
    a) minimize the deaths by action, or
    b) maximize them by inaction.

    In the organ harvest scenario, you have a potentially doomed set, and a non-doomed set. You also have numerous options beyond:
    a) kill the healthy guy for his organs, or
    b) don't kill healthy guy for his organs.

    For example, you also have:
    c) convince the healthy guy to donate a subset of his organs that can be spared, in order to save some of the terminal patients.
    d) continue looking for compatible harvested organs.
    e) harvest organs from the first terminal patient to pass on in order to save some of the other terminal patients.

    There's more, but I think you can see the difference between the two scenarios.

  • by Rockoon ( 1252108 ) on Thursday January 09, 2014 @05:08PM (#45910905)

    Harm and benefit according to whose definition? Utilitarianism is incredibly subjective.

    Exactly. I recognize full well that killing 1 will save 5, and in general I do not have a moral problem with choosing to alter the outcome in favor of the 5, but I do not view any of the participants in the video cases as being faultless.

    You and others are walking down the train tracks, a train is coming, and none of you move. Why aren't you moving? Maybe that lone guy on the side track knows that the train isn't going to run down his track, which makes me a murderer outright if I divert the train onto his track. The larger group has to take responsibility for their own damn actions.

    That, my friend, is utilitarian in my eyes.
