Posted on 09/13/2001 6:25:30 PM PDT by Nebullis
Princeton, N.J. -- In a study that combines philosophy and neuroscience, researchers have begun to explain how emotional reactions and logical thinking interact in moral decision-making.
Princeton University researchers reported in the Sept. 14 issue of Science that they used functional magnetic resonance imaging (fMRI) to analyze brain activity in people who were asked to ponder a range of moral dilemmas.
The results suggest that, while people regularly reach the same conclusions when faced with uncomfortable moral choices, their answers often do not grow out of the reasoned application of general moral principles. Instead, they draw on emotional reactions, particularly for certain kinds of moral dilemmas.
The results also show how tools of neuroscience are beginning to reveal the biological underpinnings of the subtlest elements of human behavior, said Joshua Greene, a graduate student in philosophy who conducted the study in collaboration with scientists in the psychology department and the Center for the Study of Brain, Mind and Behavior.
"We think of moral judgments as so ethereal," said Greene. "Now we're in a position to start looking at brain anatomy and understanding how neural mechanisms produce patterns in our behavior."
The study focused on a classic set of problems that have fascinated moral philosophers for years because of the difficulty in identifying moral principles that agree with the way people react.
One dilemma, known as the trolley problem, involves a runaway train that is about to kill five people. The question is whether it is appropriate for a bystander to throw a switch and divert the trolley onto a spur on which it will kill one person and allow the five to survive.
Philosophers compare this problem to a second scenario, sometimes called the footbridge problem, in which a train is again heading toward five people, but there is no spur. Two bystanders are on a bridge above the tracks and the only way to save the five people is for one bystander to push the other in front of the train, killing the fallen bystander.
Both cases involve killing one person to save five, but they evoke very different responses. People tend to agree that it is permissible to flip the switch, but not to push a person off the bridge. People in the study also followed this pattern. This distinction has puzzled philosophers who have not been able to find a hard and fast rule to explain why one is right and the other wrong. For each potential principle, there seems to be another scenario that undermines it.
One reason for the difficulty, said Greene, appears to be that the two problems engage different psychological processes -- some more emotional, some less so -- that rely on different areas of the brain.
"They're very similar problems -- they seem like they are off the same page -- but we appear to approach them in very different ways," said Greene.
Greene emphasized that the researchers were not trying to answer questions about what is right or wrong. Instead, given that people follow a pattern of behavior, the study seeks to describe how that behavior arises. In turn, a better understanding of how moral judgments are made may change our attitudes toward those judgments, Greene said.
The researchers conducted the study with two groups of nine people, who each answered a battery of 60 questions while undergoing fMRI scanning. The researchers divided the questions into personal and non-personal categories based on the general notion that the difference between the trolley and footbridge problems may have to do with the degree of personal involvement, and ultimately the level of emotional response.
Examples of non-personal ethical dilemmas included a case of keeping money from a lost wallet and a case of voting for a policy expected to cause more deaths than its alternatives. The researchers also included non-moral questions, such as the best way to arrange a travel schedule given certain constraints and which of two coupons to use at a store.
The scanning consistently showed a greater level of activation in emotion-related brain areas during the personal moral questions than during the impersonal moral or non-moral questions. At the same time, areas associated with working memory, which has been linked to ordinary manipulation of information, were considerably less active during the personal moral questions than during the others.
The researchers also measured how long it took subjects to respond to the questions. In the few cases in which people said it was appropriate to take action in the personal moral questions -- like pushing a person off the footbridge -- they tended to take longer to make their decisions. These delays suggest that this subgroup of people was working to overcome a primary emotional response, the researchers said.
Taken together, the imaging and response time results strongly suggest that emotional responses influenced moral decision-making and were not just a coincidental effect, the researchers concluded.
Professor of psychology John Darley, a coauthor of the paper, said the result fits into a growing area of moral psychology which contends that moral decision-making is not a strictly reasoned process, as has been believed for many years. "Moral issues do not come to you with a sign saying 'I'm a moral issue; treat me in a special way,'" Darley said. Instead, they engage a range of mental processes.
Other coauthors on the paper are Brian Sommerville, a former research assistant now at Columbia University Medical School; Leigh Nystrom, a research scientist in psychology; and Jonathan Cohen, a professor of psychology at Princeton.
Cohen also is director of the University's newly established Center for the Study of Brain, Mind and Behavior, which houses the fMRI scanner used in the study, and which seeks to combine the methods of cognitive psychology with neuroscience.
"Measuring people's behavior has served psychology well for many years and will continue to do so, but now that approach is augmented by a whole new set of tools," said Cohen.
Brain imaging allows scientists to build a catalog of brain areas and their functions, which can then be cross-referenced with behaviors that employ the same processes, Cohen said. Eventually, this combination of behavioral analysis and biological neuroscience could inform questions in fields from philosophy to economics, he said.
The current study, he said, "is a really nice example of how cognitive neuroscience -- and neuroimaging in particular -- provide an interface between the sciences and the humanities."
We get our hackles up at nameless posters on this forum and the problems they present.
Well, if the problem presents itself such that one set of neurons, in one part of the brain, is preferentially fired over another set, different solutions will result.
I can't figure out how dumping yet another person in front of a train would save any of them. Wouldn't that just kill six people instead of five? What about the folks on the train?
But let's assume the person you, as the other bystander, would push could cause the train to derail and somehow save the others... it isn't right to toss someone else in to do the job when you can do it yourself. So their belief that the situations they propose are essentially the same is a bit of a stretch.
Joe eats shark
Shark eats Joe
Shark Joe eats
Joe shark eats
Eats Joe shark
Eats shark Joe
I'm not sure we can be that precise. I know that we can see the firing of a single neuron in a fly's brain when visually stimulated but I don't think that human thought is circumscribed to that level.
We have a hint here, in this study.
That's why they use these kinds of problems. The numbers work out the same, but one situation feels wrong while the other doesn't.
But you and I know that it is not. That is what my post demonstrates. Even a computer program is not simply a linear sequence of instructions. A program can be rearranged linearly to a certain degree and function precisely the same (excluding minor timing variances), yet you can invert one logical test and produce profound changes in operation. I again point out the "language" aspects of the code.
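To make that concrete, here is a minimal sketch in Python (my own hypothetical example, not code from the study or from any post above): the first two functions differ only in the order of two independent statements and return identical results, while the third differs only in one inverted comparison and returns something quite different.

# Hypothetical sketch: reordering independent statements is harmless,
# but inverting a single logical test changes the program's behavior.

def version_a(x):
    a = x * 2          # these two assignments are independent...
    b = x + 10         # ...so their order does not matter
    return a if a > b else b

def version_b(x):
    b = x + 10         # same statements, opposite order
    a = x * 2
    return a if a > b else b

def version_c(x):
    a = x * 2
    b = x + 10
    return a if a < b else b   # one inverted test, profoundly different result

print(version_a(3), version_b(3), version_c(3))   # prints: 13 13 6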
Rank and organization: Sergeant First Class, U.S. Army, Battery A, 2d Battalion, 320th Field Artillery, 101st Airborne Infantry Division (Airmobile). Place and date: Tam Ky, Republic of Vietnam, 15 October 1967. Entered service at: Winnsboro, S.C. Born: 15 July 1933, Winnsboro, S.C. Citation: Sfc. Anderson (then S/Sgt.), distinguished himself by conspicuous gallantry and intrepidity in action while serving as chief of section in Battery A, against a hostile force. During the early morning hours Battery A's defensive position was attacked by a determined North Vietnamese Army infantry unit supported by heavy mortar, recoilless rifle, rocket propelled grenade and automatic weapon fire. The initial enemy onslaught breached the battery defensive perimeter. Sfc. Anderson, with complete disregard for his personal safety, mounted the exposed parapet of his howitzer position and became the mainstay of the defense of the battery position. Sfc. Anderson directed devastating direct howitzer fire on the assaulting enemy while providing rifle and grenade defensive fire against enemy soldiers attempting to overrun his gun section position. While protecting his crew and directing their fire against the enemy from his exposed position, 2 enemy grenades exploded at his feet knocking him down and severely wounding him in the legs. Despite the excruciating pain and though not able to stand, Sfc. Anderson valorously propped himself on the parapet and continued to direct howitzer fire upon the closing enemy and to encourage his men to fight on. Seeing an enemy grenade land within the gun pit near a wounded member of his gun crew, Sfc. Anderson heedless of his own safety, seized the grenade and attempted to throw it over the parapet to save his men. As the grenade was thrown from the position it exploded and Sfc. Anderson was again grievously wounded. Although only partially conscious and severely wounded, Sfc. Anderson refused medical evacuation and continued to encourage his men in the defense of the position. Sfc. Anderson by his inspirational leadership, professionalism, devotion to duty and complete disregard for his welfare was able to maintain the defense of his section position and to defeat a determined attack. Sfc. Anderson's gallantry and extraordinary heroism at the risk of his life above and beyond the call of duty are in the highest traditions of the military service and reflect great credit upon himself, his unit, and the U.S. Army.
Yes, obviously, but what I find interesting is the passive voice. What 'prefers' the firing of one set of neurons over another -- another set of neurons? How do mere physiological processes account for the notion of morality itself, with its attendant concepts of freedom, agency, obligation, accountability, and dignity? How is it that mere electrochemical reactions in the brain lead logically to any coherent, intelligible concept of 'right' or 'wrong'?
Cordially,
At the same time, there could be good reasons to trust our gut responses, he suggests. "Emotions may well be important adaptations. We don't have to write them off as silly, murky, irrational responses."
I don't think there is anything "mere" about the workings of our brains.
Because of the experimenters' blindness to an alternative in the following problem, one that they themselves posed.
Two bystanders are on a bridge above the tracks and the only way to save the five people is for one bystander to push the other in front of the train, killing the fallen bystander.
If one can push another into the path of a train, one can sacrifice oneself instead.
*AUSTIN, OSCAR P.
Rank and organization: Private First Class, U.S. Marine Corps, Company E, 2d Battalion, 7th Marines, 1st Marine Division, (Rein), FMF. Place and date: West of Da Nang, Republic of Vietnam, 23 February 1969. Entered service at: Phoenix, Ariz. Born: 15 January 1948, Nacogdoches, Tex. Citation: For conspicuous gallantry and intrepidity at the risk of his life above and beyond the call of duty while serving as an assistant machine gunner with Company E, in connection with operations against enemy forces. During the early morning hours Pfc. Austin's observation post was subjected to a fierce ground attack by a large North Vietnamese Army force supported by a heavy volume of hand grenades, satchel charges, and small arms fire. Observing that 1 of his wounded companions had fallen unconscious in a position dangerously exposed to the hostile fire, Pfc. Austin unhesitatingly left the relative security of his fighting hole and, with complete disregard for his safety, raced across the fire-swept terrain to assist the marine to a covered location. As he neared the casualty, he observed an enemy grenade land nearby and, reacting instantly, leaped between the injured marine and the lethal object, absorbing the effects of its detonation. As he ignored his painful injuries and turned to examine the wounded man, he saw a North Vietnamese Army soldier aiming a weapon at his unconscious companion. With full knowledge of the probable consequences and thinking only to protect the marine, Pfc. Austin resolutely threw himself between the casualty and the hostile soldier, and, in doing, was mortally wounded. Pfc. Austin's indomitable courage, inspiring initiative and selfless devotion to duty upheld the highest traditions of the Marine Corps and the U.S. Naval Service. He gallantly gave his life for his country.
That's a bit silly, AndrewC. The problems are given with options. Heroic alternatives, and one can think of many, are not given as a choice. The elegance in an experiment is to get an answer with the simplest design possible. Throwing heroics into the mix, something that not everyone resorts to, is adding a complicating factor. It would be interesting for a future study, but it is certainly not a design flaw for this study.
Speak for yourself! (just kidding).
Put another way though, are moral "right" and "wrong" solely electrochemical reactions in the brain?
Cordially,
How does an idea, a statement such as (let's pick one out of thin air) "Concepts are natural functions of our brains," arise from molecules? I guess what I'm trying to ask is: what is the origin, the meaning, and the significance of the concept "morality," if the concept itself is nothing but the result of brute forces of chemistry or electricity?
What if the pulses of depolarization in my brain traverse different pathways? What if the molecules that produce the idea "morality" happen to go in a different direction? Does morality then change? What if the neurotransmitters produce the output, "Concepts are NOT natural functions of our brains"?
Under a purely naturalistic premise I think that my brain could only be physically obligated, not morally obligated because it would be operating completely and solely by physical forces. When a machine's actions are completely determined by physical forces, the moral intent cannot be known because the entire operation of the machine is based on coercion.
Yet we both know intuitively that I am morally obligated. Why?
Cordially,