Science

People Become More Utilitarian When They Face Moral Dilemmas In Virtual Reality

First time accepted submitter vrml writes "Critical situations in which participants' actions lead to the death of (virtual) humans have been employed in a study of moral dilemmas that just appeared in the journal Social Neuroscience. The experiment shows that participants' behavior becomes more utilitarian (that is, they tend to minimize the number of persons killed) when they have to make a decision in Virtual Reality rather than in the more traditional settings used in moral psychology, which ask participants to read text descriptions of the critical situations. A video with some of the VR moral dilemmas is available, as is the paper."

  • it's the new you!
  • by jeffb (2.718) ( 1189693 ) on Thursday January 09, 2014 @04:15PM (#45910293)

    So, we're assuming that all participants considered the death of (virtual) humans to be a bad thing?

    • by Anonymous Coward

      So that makes it moral to kill people?

    • by Anonymous Coward

      It's pretty obvious that the participants thought they were playing a game, and they thought they were losing an equal number of virtual "points" for each virtual human that died.

      (Most people are familiar with the concept of video games, the study took place in something that resembles a video game, and people like to "win" games by getting the highest possible score. What the F*** were you expecting, researchers?)

      • by Koby77 ( 992785 )
        I think that's the point of the experiment: to create more life-like situations in an attempt to find out how people would actually react in real life. A test that actually kills people would obviously be immoral. But would people react differently than with a text description of events? What happens if we someday create a simulation so life-like that human participants believe the virtual victims are real? We can't answer that yet, but we're inching closer and closer. And it appears that we are trending towards "s
    • by icebike ( 68054 )

      So, we're assuming that all participants considered the death of (virtual) humans to be a bad thing?

      Not only that, but we are also assuming that how humans behave in obvious simulations of one sort or another has any bearing on how they behave in the real world.

      As best I can fathom, the report suggests people are less bored and more willing to play along in a VR simulation than they are when reading (and trying to imagine) text-based scenarios.

      That probably explains (yet again) why text-based MUDs are going extinct while everyone and their brother is coming out with another online virtual-reality game.

    • Personally, I'd be overwhelmed with curiosity about how the game physics would respond to situations the developers may not have considered. What happens if you rapidly cycle the track switch, or flip it right as the train is passing over it? Perhaps you could get the train to derail and accordion, thus clearing both sides of the tracks and destroying the train itself.
    • by rtb61 ( 674572 )

      In video game play, losing is the bad thing. You play according to the rules of the game and attempt to win. There is a difference in style, though: some people prefer to play the bad guys and others prefer to play the good guys. So the idea that people become more utilitarian with video rather than text is an incorrect interpretation; their video-game mode kicks in and they simply try to win, as they have been programmed to do by playing video games since youth.

    • There's a series of good studies, done with brain scanning in place, described in Joshua Greene's Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, New York (Penguin), 2013.

  • Every other moral system makes claims it can't provide real justification for. Minimizing harm and maximizing benefit is the best you can manage (and sometimes you don't know enough to do even that).

    • by tiberus ( 258517 ) on Thursday January 09, 2014 @04:29PM (#45910459)
      In the various versions of the train dilemma, you have two options: 1) don't act, and five people will die; or 2) act, and only one person will die. While I see the logic of your argument, and tend to agree that it gives the best overall or numerical result, it does seem a rather chilling choice. It sidesteps the premise that by taking action the actor becomes a murderer, having taken action that directly resulted in the death of another, whereas in the other case the actor is only a witness to a tragic event.
      • by Calydor ( 739835 ) on Thursday January 09, 2014 @04:32PM (#45910491)

        He is not only a witness if he KNOWS that he had the power to prevent the five deaths at the cost of one other. Inaction is also an action by itself.

        • by AthanasiusKircher ( 1333179 ) on Thursday January 09, 2014 @04:43PM (#45910625)

          He is not only a witness if he KNOWS that he had the power to prevent the five deaths at the cost of one other. Inaction is also an action by itself.

          Yes, and that ultimately leads to the "next level" of utilitarian dilemmas. What if you're a doctor with five terminal patients who each need a different organ, and in walks a healthy person who is (miraculously) compatible with all of them?

          Should you kill the healthy person, harvest the organs, and save the five terminal patients? (For the sake of argument, we assume that the procedures involved have a high chance of success, so you'll definitely save a number of people by killing one.)

          Many people who say we should flip the switch in the trolley problem think it's wrong to murder someone to harvest their organs and ensure the same outcome. Why is "inaction" appropriate for the doctor, but not in the case of the trolley?

          (I'm not saying I have the right answers -- but once you start down the philosophical path of utilitarian hypotheticals, there's a whole world of wacko and bizarre situations waiting to challenge just about anyone's moral principles. I can't wait until the "I was kidnapped and forced to keep a famous violinist alive" scenarios come up!)

          • I think the scenarios handle the worth of the sacrificed individual differently. In the clinical case, the individual is not just being killed; they are objectified as a set of resources to be exploited. In the train case, the individual is killed for being in the wrong place at the wrong time. I think most people would make this distinction out of empathy. That is, they may be more ok with dying due to an unfortunate set of circumstances, and they may not be ok with dying to suit other p
          • by Anonymous Coward on Thursday January 09, 2014 @05:03PM (#45910853)

            The healthy person isn't part of a potentially doomed set unless you harvest his organs.
            You cannot ethically *start* the process of saving lives by unnecessarily killing someone.

            In the train scenario, either 5 people die, or 1 person dies. There is no other option, because there's no way to stop the train in time. Your choice is simply whether to:
            a) minimize the deaths by action, or
            b) maximize them by inaction.

            In the organ harvest scenario, you have a potentially doomed set, and a non-doomed set. You also have numerous options beyond:
            a) kill the healthy guy for his organs, or
            b) don't kill healthy guy for his organs.

            For example, you also have:
            c) convince the healthy guy to donate a subset of his organs which can be spared in order to save some of the terminal patients.
            d) continue looking for compatible harvested organs.
            e) harvest organs from the first terminal patient to pass on in order to save some of the other terminal patients.

            There's more, but I think you can see the difference between the two scenarios.
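             If it helps to make the structure concrete, here is a minimal sketch (Python; the scenarios and casualty counts are invented stand-ins for illustration, not anything from the study) of the death-minimizing rule being argued about:

                 # Toy model: a strict utilitarian picks whichever available
                 # action kills the fewest people. All numbers are made up.
                 scenarios = {
                     "trolley": {
                         "do nothing": 5,        # train hits the group
                         "throw the switch": 1,  # train hits the lone man
                     },
                     "organ harvest": {
                         "do nothing": 5,               # the terminal patients die
                         "kill the healthy donor": 1,   # one death, five saved
                         "seek voluntary donation": 2,  # option c) might save some
                         "wait for donor organs": 4,    # option d) might save one
                     },
                 }

                 def utilitarian_choice(options):
                     """Return the action with the minimum body count."""
                     return min(options, key=options.get)

                 for name, options in scenarios.items():
                     print(name, "->", utilitarian_choice(options))

             Note that with these made-up numbers a pure body-count minimizer still harvests the donor; options c) through e) only win once consent and uncertainty are folded into the scores, which is rather the point of the disagreement.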

            • The one person wasn't "potentially doomed" at the start either. He was just crossing a set of empty railroad tracks.

            • There's more, but I think you can see the difference between the two scenarios.

              Most of your argument makes the assumption that the patients are not close to death. What if they are? (Disaster scenario or something.) And what if the healthy guy says, "No!" even to the idea of donating some organs?

              If you find it repugnant to kill him, do you still favor forced "donation" of his organs if it won't kill him, but will save the lives of other people in imminent danger of dying? After all, you seem in favor of killing a guy in one scenario to save five people, so what's wrong with stealin

            • The middle ground between these two is pushing a fat guy off a bridge to stop the train. Again, assuming this is guaranteed to work (and other options, like jumping off yourself, won't), is it still OK to kill 1 to save 5? There are other scenarios too. What if killing one person (for instance, they hold the key to an antidote) could save millions; is it then OK? What if an aircraft full of innocent people is about to crash into a building, which will kill thousands. Is it OK to shoot it down? That i

            • The healthy person isn't part of a potentially doomed set unless you harvest his organs.

              Well, I suppose you anonymous cowards live forever, but I don't assume that I will.

          • by Lazere ( 2809091 )
            I think it's the level of involvement in the death of the one. In the train example, you would be directly responsible for the person's death, but ultimately it's something else that actually does the killing. In the doctor example, not only would you be directly responsible for the person's death, but you'd be the one doing the killing and cutting. It's easier to distance yourself when somebody is being squashed by a train; not so easy if you're cutting them open and harvesting their organs. Additionall
          • by icebike ( 68054 )

            The inaction by the doctor is required by law. (More than a few doctors have taken, or would have taken, matters into their own hands.)

            So the situation is quite different. The simulation presents false choices in unrealistic situations, so you can't attribute much insight to the study; you would probably learn more watching a few rounds of team Capture the Flag.

          • Should you kill the healthy person, harvest the organs, and save the five terminal patients?

            That's not the utilitarian option. Providing an unlimited supply of transplant organs would produce an unsustainable demand for expensive transplant operations and expensive aftercare (transplants do have long-term consequences, you know), not to mention the social injustice caused when rich people start harvesting poor people to replace their alcohol-addled livers. On the positive side, ruling yourself out as an involuntary organ donor is a nice rationalisation for enjoying the unhealthy things in life...

            I

        • He is not only a witness if he KNOWS that he had the power

          By the way, perhaps you were making a reference with your KNOWS. If not, I'd be careful about emphasizing the word knows when talking about the trolley problem, unless you've dug around in the vast philosophical literature on it, where knows in italics has special meaning. If you're not careful, pretty soon you end up piling on philosophical nonsense conundrums and end up with something like this [mindspring.com].

        • by icebike ( 68054 )

          He is not only a witness if he KNOWS that he had the power to prevent the five deaths at the cost of one other. Inaction is also an action by itself.

          And thinking outside the box is an action as well, like maybe shouting, blowing the train horn, or throwing rocks, depending on where said witness stands. Even deaf people react to being hit with a rock.

          The dilemmas shown are false, and it's amazing that the participants would even take the simulation seriously enough to give meaningful results.

        • One of the great takeaways (read: few) from Catholic school for me was learning about omission and commission, omission being inaction in your description. I doubt other teenagers across the country learned about omission.
        • No, it is not. To quote Thoreau: "It is not a man's duty, as a matter of course, to devote himself to the eradication of any, even the most enormous wrong; he may still properly have other concerns to engage him; but it is his duty, at least, to wash his hands of it, and, if he gives it no thought longer, not to give it practically his support. If I devote myself to other pursuits and contemplations, I must first see, at least, that I do not pursue them sitting upon another man's shoulders."

          The first imper

      • by blackraven14250 ( 902843 ) on Thursday January 09, 2014 @04:45PM (#45910633)

        It's a chilling choice, but the train dilemma is flawed when you consider that it would never happen in real life anyway. I'm not saying that the 5-vs.-1 scenario couldn't happen, but I highly doubt anyone would even consider the second option if actually presented with it. If the thought never crosses the person's mind, no choice is being made between the options. And if no choice is being made in reality, the thought experiment is worthless as a way to explain human behavior. The whole concept is undermined once you realize it's not something any person would ever end up doing, because of another variable the thought experiment does not consider.

        • by Livius ( 318358 )

          The flaw is that people make decisions of that nature immediately and emotionally based on heuristics rooted in instinct. For tens or hundreds of thousands of years of natural selection, no-one has ever been presented with a situation that featured the ideal certainty that the train dilemma is based on.

          • by xevioso ( 598654 )

            Well, that is clearly untrue; these sorts of examples happen all the time in war.

            The Nazis were well known for presenting innocent people with these sorts of tortuous dilemmas before committing atrocities.
            I.e., "Choose which one of your 5 children will die, Jewess, or I will shoot all of them."
            Multiple accounts exist of these sorts of evil choices being foisted on folks during WWII.

            So while the Train dilemma does occur on occasion, I'd argue that in reality, there is no correct choice at all. They are

        • the train dilemma is flawed when you consider that it would never happen in real life anyway

          Wrong; the title says that the virtual-reality results are relative to real life. Clearly (the article is paywalled) the researchers conducted a similar real-life experiment, with participants standing idly by as trains plowed into groups of 5 bystanders. I guess ethics committees aren't what they used to be.

    • by Anonymous Coward on Thursday January 09, 2014 @04:31PM (#45910469)

      Until you're faced with the choice of saving your sister versus five anonymous others.

      Utilitarianism is false, because no human being can know how to globally maximize the good. People just believe they can, and then use "the end justifies the means" to commit atrocities.

      Our quirky affective behavior is arguably an optimal heuristic in a world where you only have a peep-hole view of the global state of things. For example, in those trolley dilemmas you're _told_ that the trolley is random. But we're hard-wired to believe that nothing is random, which means you have to fight the belief that the trolley was purposefully sent to kill those five individuals. Maybe the lone individual would save the world. In any event, maintaining the status quo (letting the five get killed) is, again, arguably optimal behavior when there is insufficient information to justify doing something else.

      • This comment should be at +5 Insightful. Mods, do your duty.
      • Being unable to attain moral behavior, due to one's own needs, doesn't make it less moral.

        • by xevioso ( 598654 )

          It's possible that neither choice is morally correct, and that a person is placed into a situation where both choices are equally immoral.

      • by eepok ( 545733 )

        There are many flavors of utilitarianism and, like all forms of ethics, philosophy, and science, the later versions tend to be the best.

        Utilitarianism is a sub-category of consequentialist ethics within which are multiple versions of Utilitarianism. One of the first descriptions sought to maximize pleasure and minimize pain. That's old and busted. Another sought to maximize happiness (slightly different). These versions of Utilitarianism are fairly easily defeated by what I call the World Cup Conundrum: Yo

      • Utilitarianism is false, because no human being can know how to globally maximize the good.

        This is like saying "mathematics is false, because no human being can know if a statement should be an axiom or not". In both cases the subordinate "because" clause is trivially true, but not logically related to the independent clause it pretends to justify. Mathematics is a tool for generating models, some of which are useful for approximating how the real world behaves; utilitarianism is a subtool within mathematics that's appropriate for generating models of the part of reality we call "human morality

    • by elfprince13 ( 1521333 ) on Thursday January 09, 2014 @04:33PM (#45910499) Homepage
      Harm and benefit according to whose definition? Utilitarianism is incredibly subjective.
      • by Rockoon ( 1252108 ) on Thursday January 09, 2014 @05:08PM (#45910905)

        Harm and benefit according to whose definition? Utilitarianism is incredibly subjective.

        Exactly. I recognize full well that killing 1 will save 5, and in general I do not have a moral problem with choosing to alter fate to change the outcome to favor the 5, but I do not view any of the participants in the video cases as being faultless.

        You and others are walking down the train tracks, a train is coming, and none of you move. Why aren't you moving? Maybe that lone guy on the side track knows that the train isn't going to run down his track, which full well makes me a murderer if I divert the train to his track. The larger group has to take responsibility for their own damn actions.

        That, my friend, is utilitarian in my eyes.

        • by xevioso ( 598654 )

          That never exists in the real world, though.

          A real situation would involve war: a Nazi in a ghetto telling a woman to choose which of her five children will die. He will hand the gun to her, and she must shoot the child; otherwise the Nazi will kill all five.

          THAT is a real-world Train dilemma, and utilitarianism has no say here, because there is no good choice. Each choice is equally immoral. Probably the only thing the mother could do to absolve herself of the moral guilt she will surely feel is to tell

      • by Livius ( 318358 )

        Something harming the gods would certainly be vastly worse than something that only harms mere mortals. That's the utilitarian calculus we've had for most of human history.

    • The first time I heard this dilemma it was posed by a bible basher attempting to recruit me into his church. In that version the individual is your own child. The point is that God would flip the switch and sacrifice his son to save everyone else whereas a mere human would normally save their child. Why an omnipotent God could not break the rules and save both the individual and the group was left unexplained.

      Disclaimer: Grandad to three. The instinct to protect your child can overcome the instinct to defend yourself; sacrificing a bunch of strangers to save your own child is a no-brainer for most parents.
      • The first time I heard this dilemma it was posed by a bible basher attempting to recruit me into his church. In that version the individual is your own child. The point is that God would flip the switch and sacrifice his son to save everyone else whereas a mere human would normally save their child. Why an omnipotent God could not break the rules and save both the individual and the group was left unexplained.

        Disclaimer: Grandad to three. The instinct to protect your child can overcome the instinct to defend yourself; sacrificing a bunch of strangers to save your own child is a no-brainer for most parents.

        Which is the justification for all sorts of terrible things that are actually done. We hold it as high praise that one is self-sacrificing for their relatives, but evolutionarily, we also have to understand that as a self-serving position.

        • but evolutionarily, we also have to understand that as a self-serving position.

          Understanding human behaviour won't make it go away.

          • No, I'm not asking anyone to make it go away. Just trying to account for things is better than ignoring them.

      • by xevioso ( 598654 )

        Except that the bullshit of this explanation is shown by the fact that "his son" came back three days later. There was no sacrifice.

        Stephen, who died a martyr, gave more than Christ ever did, because he knew he would not come back. Jesus, being part of an infinite being, surely knew he would come back. He gave up nothing except a few days on earth.

        And this is partly why I am not a Christian.

    • by hey! ( 33014 )

      Right. So you're a billionaire and I work for you. My embezzling a hundred thousand dollars from you to send my kid through school is moral because you won't really miss it, it does a great deal of good for my kid, and it does no discernible harm to you. That's the *pure* utilitarian way of looking at it, although such purity of outlook is at the very least rare, and very probably non-existent.

      There's another way of looking at this problem that seems built into human beings which philosophers call deontologi

      • As a utilitarian, I find that lots of people fail to consider long-term consequences of actions, concentrating on the short-term. Therefore, deontological ethics are useful because they establish expectations, which are useful, and because such things as embezzlement tend to tick people off, reducing happiness. Aretaic ethics are useful because they cover long-term consequences. There are things I simply do not do, and I believe both I and others are happier for that.

        It is my opinion that all ethical

  • In games like Counter-Strike: Global Offensive, I take hostages, set up bombs and am willing to give up my virtual life to keep them from being defused, and kill people.

    While in reality I'm not a suicide-bombing terrorist. Who would have guessed?
  • I get through a million virtual dollars in a single session of online poker. And you should have seen my driving on RollCage.

    What's the point here?

  • by Anonymous Coward on Thursday January 09, 2014 @04:20PM (#45910353)

    ... (they tend to minimize the number of persons killed)

    Anyone who's played Black & White knows that's not true. They don't even minimize the number of persons killed by poop.

  • Poorly-designed VR (Score:3, Insightful)

    by Impy the Impiuos Imp ( 442658 ) on Thursday January 09, 2014 @04:34PM (#45910517) Journal

    "Become more utilitarian", i.e. they choose to save more lives, which is already at 88% in a non-VR, simple textual scenario like the trolly switch issue.

    This is odd, because in most VR scenarios people seem to want to throw a switch to deliberately divert a trolley from one person to kill 5 instead, as long as they have a chat line where they can type "lolf49z!"

    • by Anonymous Coward

      So a more proper test would be one with two levers. Pull none, 5 people get killed. Pull one of them, only one person dies. Pull the other and the trolley goes on a merry bloodbath through a crowded mall (with yakety sax playing in the background).

  • by Okian Warrior ( 537106 ) on Thursday January 09, 2014 @04:50PM (#45910703) Homepage Journal

    One issue that studies never seem to take into account is responsibility.

    If a group of people will be killed but you could decide to kill a single person, there is a third option: you could choose not to decide.

    When you switch the tracks you are taking responsibility for making the decision, and for all consequences thereof. There will be an inquest, you will be brought up on charges of manslaughter, your actions will be made public in the newspaper... all sorts of bad things will happen, and your life will be forever changed.

    For a recent example, consider Asiana Airlines Flight 214 [wikipedia.org], where a woman was run over by a fire truck. The battalion chief responsible for directing operations was put through the wringer by over-zealous bureaucrats looking for someone to blame. His helmet cam [sfgate.com] footage was all that saved him; though blameless, he only narrowly escaped taking the blame.

    If you simply walk away, then it's not your problem. The responsibility lies somewhere else, no one can blame you for not making the decision. You weren't expected to handle it, it's not your fault.

    This makes perfect sense in the current study: there are no consequences for killing virtual people, so it's easy to make the moral choice.

    Real morality takes courage, and the willingness to sacrifice.

    • by xevioso ( 598654 )

      Choosing not to decide is still a choice.
      You could choose to flip a coin and let fate decide, that is to say, allow the choice to be a random one, but you are still choosing not to decide.

      • Choosing not to decide is still a choice.
        You could choose to flip a coin and let fate decide, that is to say, allow the choice to be a random one, but you are still choosing not to decide.

        I will choose Freewill!

      • Sure, but choosing not to decide can be a valid choice.

  • by TrumpetPower! ( 190615 ) <ben@trumpetpower.com> on Thursday January 09, 2014 @04:54PM (#45910745) Homepage

    I can't believe that people still think that these trolley car "thought experiments" are telling them anything novel about human moral instincts.

    They are just less-visceral variations on Milgram's famous work. An authority figure tells you that you must kill either the hot chick on the left or the ugly fatty on the right, and that you mustn't sound the alarm or call 9-1-1 or anything else. And, just as Milgram found, virtually everybody goes ahead and does horrific things in such circumstances.

    Just look at the videos in question. The laws, safety regulations, and evil-mad-scientist design flaws violated in each scenario are innumerable. They take it beyond Milgram's use of a white lab coat to establish authority and into psychotic Nazi commander territory. In the real world, the victims wouldn't be anywhere near where they are. If they were, there wouldn't be any operations in progress at the site. If there were, there would be competent operators at the controls, not the amateur being manipulated by the experimenter; and those operators would be well drilled in both standard and emergency procedures that would prevent the disaster or mitigate it if unavoidable -- for example, airline pilots trained to the point of instinct to avoid crashing a doomed plane into a crowded area.

    The proper role of the experimenter's victims ("subjects") is to yell for help, to not fucking touch critical safety infrastructure in the event of a crisis unless instructed to by a competent professional, to render first aid to the best of their abilities once help is on the way, and to assist investigators however possible once the dust has settled.

    Yet, of course, the experimenter is too wrapped up in the evil genius role to permit their victims to even consider anything like that, and instead convinces the victims that they're bad people who'll kill innocents when ordered to. Just as we already knew from Milgram.

    How any of this bullshit makes it past ethics review boards is utterly beyond me.

    Cheers,

    b&

    • And, just as Milgram found out, virtually everybody goes ahead and does horrific things in such circumstances.

      No. He found out that people would do the horrific thing in the abstract. This proves not one thing about how they would react in reality. Some will, some won't. We have historic examples to prove that. Anyone who thinks his subjects didn't know it was an experiment is a fool.

    • by xevioso ( 598654 ) on Thursday January 09, 2014 @06:19PM (#45911645)

      There ARE real-world versions of this. I pointed this out above, but the real-world versions tend to involve atrocities during wartime, something that the armchair ethicists here don't seem to want to discuss much. A REAL scenario would involve a soldier telling a mother to shoot one of her children or the soldier would shoot all of them himself. These things have happened, and will continue to happen, in real life on occasion.

      What's the proper response here? Attack the soldier with the gun he gives you to shoot your kid? OK, what if he tells you to choose which child will die and he will do it himself while you are tied up? The point is, in the real world, it is the CHOICE ITSELF which is the atrocity, and there is NO correct decision. In the real world. Which is one of the many reasons why war is evil.

  • Like the Kobayashi Maru, tic-tac-toe, and thermonuclear war...

    The only way to win is not to play.
    • by Anonymous Coward

      tic-tac-toe

      The only winning move is to play, perfectly [xkcd.com], waiting for your opponent to make a mistake.

      FTFY
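      If anyone wants to see what "playing perfectly" means mechanically, here is a tiny minimax sketch (Python; the board encoding and function names are my own illustration, not anything from the linked comic):

          # Minimax over the full tic-tac-toe game tree. A perfect player
          # never loses; it can only win if the opponent makes a mistake.
          WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

          def winner(b):
              # Return "X" or "O" if a line is complete, else None.
              for i, j, k in WINS:
                  if b[i] != " " and b[i] == b[j] == b[k]:
                      return b[i]
              return None

          def minimax(b, player):
              """Score a position from X's point of view: +1 win, 0 draw, -1 loss."""
              w = winner(b)
              if w is not None:
                  return 1 if w == "X" else -1
              if " " not in b:
                  return 0  # board full: draw
              nxt = "O" if player == "X" else "X"
              scores = [minimax(b[:i] + player + b[i+1:], nxt)
                        for i, c in enumerate(b) if c == " "]
              return max(scores) if player == "X" else min(scores)

          # From the empty board, perfect play on both sides scores 0:
          print(minimax(" " * 9, "X"))  # -> 0, i.e. a forced draw

      Which is the point: played perfectly, tic-tac-toe is a forced draw, so "winning" reduces to waiting for the other side to blunder.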

  • by Anonymous Coward

    this "vr" is really stupid. The best approximation of reality they can do is to offer a binary choice. This is unnatural and not predictive of people's behavior. In real situations, people think they have more courses of action, even when they don't: they would scream at the people to get out of the way, in a (possibly futile) attempt to save everybody, in addition to playing with the switch and attempting other things as well.
    Very rarely we are presented with dangerous situations in life where the choice i

  • When the lifting magnet dropped the car on the guy's head, I laughed. It was funny because the door flung open, or maybe because it just had a certain cartoony look about it. It could have been an anvil or a safe, then it would have been even funnier. Then of course there's the whole premise of a bunch of guys sort of doing a slow dance in a salvage yard, and they don't even look like yard workers at all. It's just too surreal.

    In real life, the car falls on the guy's head without any moral dilemma. It'

  • So... more realistic simulations yield more realistic results?

    I suspect there's a film at 11.

  • Yelling, "get the fuck off the train tracks, you fucking morons!"

    Then let Darwin take care of the rest.

  • I thought it said "Unitarian".

  • That's because it isn't a moral dilemma, it's virtual reality.
  • Sure, it's VR, but what are we comparing it to? The actual experiment? No. We are comparing it to hypothetical talk. So between VR and talk, I would guess VR gives the more realistic view of what people would do. Talk is cheap. You don't know what you'd do until you are in the situation.

  • Here is the problem: In these virtual scenarios, there is a binary choice and in both choices somebody dies.

    However, in reality people try to find alternative solutions that none of these tests take into account. In these scenarios the viewer cannot attempt to warn or alert the victims. The victims idly resign themselves to their fates and make no attempt to protect or preserve their lives.

    In other words: These tests do little to prove anything. Most of the viewers taking these tests know that it is VR. How do
  • In the paper-based scenario the participant is present as a moral agent in a real-world scenario, with the moral and legal responsibility that implies. In the VR scenario the participant is a god with no moral or legal responsibility.
