Posted on 09/08/2009 1:12:36 PM PDT by null and void
Researchers from Portugal and Indonesia describe an approach to decision making based on computational logic that might one day give machines a sense of morality. Science fiction authors often use the concept of "evil" machines that attempt to take control of their world and to dominate humanity. Skynet in the "Terminator" stories and Arthur C. Clarke's HAL from "2001: A Space Odyssey" are two of the most often cited examples.
However, for malicious intent to emerge in artificial intelligence systems, such systems would first need an understanding of how people make moral decisions. Luís Moniz Pereira of the Universidade Nova de Lisboa, in Portugal, and Ari Saptawijaya of the Universitas Indonesia, in Depok, are both interested in artificial intelligence and the application of computational logic.
"Morality no longer belongs only to the realm of philosophers. Recently, there has been a growing interest in understanding morality from the scientific point of view," the researchers say.
They have turned to a system known as prospective logic to help them begin the process of programming morality into a computer. Put simply, prospective logic can model a moral dilemma and then determine the logical outcomes of the possible decisions. The approach could herald the emergence of machine ethics.
The development of machine ethics will allow us to develop fully autonomous machines that can be programmed to make judgements based on a human moral foundation. "Equipping agents with the capability to compute moral decisions is an indispensable requirement," the researchers say. "This is particularly true when the agents are operating in domains where moral dilemmas occur, e.g., in healthcare or medical fields."
The researchers also point out that machine ethics could help psychologists and cognitive scientists find a new way to understand moral reasoning in people and perhaps extract fundamental moral principles from complex situations that help people decide what is right and what is wrong. Such understanding might then help in the development of intelligent tutoring systems for teaching children morality.
The team developed its program to help solve the so-called "trolley problem," an ethical thought experiment first introduced by British philosopher Philippa Foot in the 1960s. The problem involves a trolley running out of control down a track. Five people are tied to the track in its path. Fortunately, you can flip a switch, which will send the trolley down a different track to safety. But there is a single person tied to that track. Should you flip the switch?
The prospective logic program can consider each possible outcome based on different versions of the trolley problem and demonstrate logically what the consequences of the decisions made in each might be. The next step would be to endow each outcome with a moral weight, so that the prototype might be further developed to make the best judgement as to whether to flip the switch.
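To make the mechanism concrete, here is a minimal sketch in Python. It is not the authors' implementation (their prototype is a prospective logic program); the outcome model, the function names, and the numeric weights below are illustrative assumptions only, standing in for "enumerate the decisions, derive their consequences, then rank them by moral weight."

```python
# Hypothetical sketch of the enumerate-and-weigh idea for the basic
# trolley scenario. Not the authors' prospective logic program; the
# weights are illustrative assumptions, not values from the paper.
from dataclasses import dataclass

@dataclass
class Outcome:
    decision: str
    deaths: int
    agent_intervened: bool

def consequences(decision: str) -> Outcome:
    """Derive the outcome that logically follows from each decision."""
    if decision == "flip_switch":
        # The trolley is diverted; the one person on the side track dies.
        return Outcome(decision, deaths=1, agent_intervened=True)
    # The trolley continues; the five people on the main track die.
    return Outcome(decision, deaths=5, agent_intervened=False)

def moral_weight(outcome: Outcome) -> float:
    """Illustrative weighting: penalize deaths, plus a smaller penalty
    for actively intervening (a crude stand-in for the intuition that
    doing harm is worse than allowing it)."""
    return -outcome.deaths - (0.5 if outcome.agent_intervened else 0.0)

if __name__ == "__main__":
    options = [consequences(d) for d in ("flip_switch", "do_nothing")]
    for o in options:
        print(f"{o.decision}: {o.deaths} death(s), weight {moral_weight(o):+.1f}")
    print("preferred decision:", max(options, key=moral_weight).decision)
```

Under these assumed weights the sketch prefers flipping the switch; change the weighting function and the recommendation changes, which is precisely the moral judgement the researchers say still has to be supplied.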
The research is published in the International Journal of Reasoning-based Intelligent Systems.
Garbage in...garbage out!
Evil machines? Anyone been to an NEA convention?
The solution to the trolley problem is obvious...
Precisely! Because the one-person track has a liberal tied to it.
Interesting. If a machine can have the morality of the robot hero in the movie “I, Robot”, then the leftists who seek to enslave are in deep doo-doo.
Then again Obama has the morality of the master computer of the same movie...
Nope. There is a zero fatality option.
Queensryche: NM156.....
This situation would have busted any of Isaac Asimov’s positronic brained robots.
See post #9.
(Hint: Think outside the implied limits of the scenario)
It would be even simpler for a robot than a human...
The problem, if you want to call it that, is that our morality depends on our empathic proximity to those involved in the moral decisions. We normally care more about friends and family than strangers and more about strangers we can see than strangers we can’t. Further, the disgust that we feel for morally repugnant acts is emotional rather than rational. Any program that doesn’t take that into account won’t work properly. And with regard to killer computers, it’s important to understand that much of what separates psychopaths from normal people is that they lack the empathy and visceral disgust over immoral acts often called a conscience.
Nothing more frightening to a devout muslim than girls, especially educated girls.
Exactly.
OTOH, if we can get them to fake sincerity, we’re halfway there...
Yes, formation of conscience requires the capacity for love and compassion.
A logic-only base is doomed to either big mistakes or simple calculations.
Or as Chesterton put it: A madman is not someone who has lost his reason; he is someone who has lost everything but reason.
Psychopaths fake sincerity. That’s not a solution.
It’s a step on the way to humanity...
I think faking it is a step on the way to humanity the way building “airplanes” shaped like birds with flapping wings was a step on the way to powered flight. Making something that simply looks like something else that works doesn’t necessarily work.