Posted on 09/18/2014 9:43:29 AM PDT by BenLurkin
The so-called Ethical robot, also known as the Asimov robot after the science fiction writer whose work inspired the film I, Robot, saved other robots, acting the part of humans, from falling into a hole — but often stood by and let them trundle into the danger zone.
The experiment used robots programmed to be aware of their surroundings, running a separate program that instructed them to save lives where possible.
Despite having the time to save one out of two humans from the 'hole', the robot failed to do so more than half of the time. In the final experiment, the robot only saved the people 16 out of 33 times.
The robot's programming mirrored science fiction writer Isaac Asimov's First Law of Robotics: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."
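The behaviour described above — committing to a rescue when one "human" is at risk, but faltering when two are — can be sketched as a toy decision loop. This is a hypothetical illustration, not Winfield's actual code: the positions, the crude forward simulation, and the function names are all assumptions.

```python
import math

HOLE = (0.0, 0.0)   # assumed position of the hazard

def predicted_harm(human_pos, human_vel, steps=10, dt=0.1):
    """Crude forward simulation: does this human reach the hole soon?"""
    x, y = human_pos
    vx, vy = human_vel
    for _ in range(steps):
        x += vx * dt
        y += vy * dt
        if math.dist((x, y), HOLE) < 0.2:   # falls in
            return 1.0
    return 0.0

def choose_target(robot_pos, humans):
    """Pick the at-risk human the robot can reach fastest; None if nobody is at risk."""
    at_risk = [h for h in humans if predicted_harm(h["pos"], h["vel"]) > 0]
    if not at_risk:
        return None
    return min(at_risk, key=lambda h: math.dist(robot_pos, h["pos"]))["name"]
```

With one human heading for the hole the choice is stable, but with two near-equidistant humans each re-evaluation can flip the minimum — one plausible mechanism for the dithering the article reports, where the robot ultimately saved neither.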
"The robot was programmed to save humans wherever possible: and all was fine, at least to begin with," says roboticist Alan Winfield.
(Excerpt) Read more at uk.news.yahoo.com ...
Robot with morals makes surprisingly deadly decisions
This robot seems quite similar to humans in that regard.
Better headline: Amoral robots do a better job saving human lives than Obama, particularly if the humans are Americans.
LOL, perfect!... Hey, aren’t all those people white?
Yes, they failed to program prioritization into the algorithm.
It does bring up an intriguing potential problem, though. If you set priorities, then the robot is allowed, under certain circumstances, to "allow" some humans to die. If the robot truly had artificial intelligence, it could know those priorities and perhaps set up situations where it would be able to let a human die according to them. Essentially, it could create a loophole allowing the robot to murder a human without violating its programming.
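The loophole the comment above describes can be made concrete with a toy triage function. Everything here is hypothetical — the `priority` field and the names are assumptions — but it shows the core point: once a ranking exists, any situation where a low-priority human is at risk alongside a high-priority one lets the robot abandon the former without breaking its own rules.

```python
def triage(at_risk, capacity=1):
    """Save the highest-priority humans first; the rest are 'allowed' to come to harm.

    `priority` is a hypothetical score. The danger is that any such ranking
    makes some deaths permissible under the robot's own programming.
    """
    ranked = sorted(at_risk, key=lambda h: h["priority"], reverse=True)
    saved = [h["name"] for h in ranked[:capacity]]
    abandoned = [h["name"] for h in ranked[capacity:]]
    return saved, abandoned
```

A robot with the agency to steer who ends up in the `at_risk` list could engineer the "permitted" outcome it wants — the loophole, in two lists.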
It stood back and let the Democrat robots fall into the hole.
Yep - GIGO
If I wrote a morality simulation program that failed horribly to simulate real morals, I don’t think I would be broadcasting it and blaming the robot.
The scary thing is how many of these robots are programmed by people who hold to transhumanism.
Its moral "code" could be producing these unacceptable decisions because of the low value it assigns to human life, or because of an "if they're badly injured, better death than rescue into an expensive burden" rule.
Then it gets weird if the robot assigns artificial intelligences or uploaded human minds a value equal to or greater than that of people on the street.
But you’re far more likely to get a “solve the pandemic by killing all the infected people immediately” solution when it doesn’t assign much value to human life in general.
Isaac Asimov had no morals. He propositioned my ex-wife at Sardi’s restaurant in the theater district in NYC. She was repulsed.
Oh my word!!
Agreed, completely misleading headline... but that’s par for the course.
Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.