Posted on 11/11/2013 7:46:30 AM PST by null and void
Let's face it: if a robot got hold of your kitchen knives and wanted to cook you dinner, how safe would you feel? (Can you picture knives flying, pots shattering, and things catching fire?)
Researchers from Cornell University are working on a new training technique that will allow humans to teach robots to carry out dangerous tasks, like holding knives, without hurting anyone around them.
Cornell team teaches a robot how to safely check out items as a cashier.
The Cornell University team, led by Ashutosh Saxena, an associate professor of computer science, taught a robot to work as a cashier, like one at a grocery store, without harming the people around it while ringing up a knife.
Remember, just because we think a task is easy doesn't mean it's easy for a robot. In addition, robots are pretty clumsy.
The robot can get dangerous with that knife.
Current robots are usually programmed with code or taught to memorize the actions of humans, so training them for assembly-line jobs is easy. When it comes to using robots at home, however, your robot needs to be able to pick up a sharp knife while keeping the blade away from people, or to handle fresh fruits and vegetables more carefully than canned ones.
In order to train a robot for tasks like these, the team has been working with a previously constructed robot named Baxter.
Saxena and his team programmed the robot to plan its own motions, then stepped in and corrected it when necessary.
In tests, most users were able to successfully train the robot on a particular task after only five corrections. The robots were even able to apply what they learned to other objects or environments.
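The loop described above — the robot proposes its best motion, a human nudges it toward a better one, and the robot updates its preferences — resembles what the robotics literature calls coactive learning. Below is a minimal, hypothetical sketch of such an update rule; the feature map (distance of the blade from a person, smoothness of the motion) and the trajectories are invented for illustration and are not from the Cornell system.

```python
import numpy as np

def features(traj):
    """Toy feature map over a trajectory of (distance-to-human, position)
    waypoints: [mean blade distance from the human, motion smoothness]."""
    traj = np.asarray(traj, dtype=float)
    dist_to_human = traj[:, 0].mean()
    # Smoothness: negative mean second difference of position (less jerk is better)
    smoothness = -np.abs(np.diff(traj[:, 1], n=2)).mean() if len(traj) > 2 else 0.0
    return np.array([dist_to_human, smoothness])

def score(w, traj):
    """Linear preference score: higher means the robot prefers this motion."""
    return w @ features(traj)

def train(candidate_trajs, corrections, w=None):
    """Each round: the robot proposes its best-scoring candidate trajectory,
    the user supplies a corrected (better) trajectory, and the weights move
    by the feature difference -- a perceptron-style coactive update."""
    w = np.zeros(2) if w is None else w
    for correction in corrections:
        proposed = max(candidate_trajs, key=lambda t: score(w, t))
        w = w + features(correction) - features(proposed)
    return w
```

After even a single correction that keeps the blade farther from the human, the learned weights rank the safer trajectory above the risky one, which is consistent with the article's claim that a handful of corrections suffice for a simple preference.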
[video at source]
Another brilliant idea from

The people who brought you Beeeeeeeer Milkshakes!
Ping!

Isaac Asimov’s “Three Laws of Robotics”
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Rimmer: [shouting to the scutter] Now! Stab him! Stab him! Stab him! Quick! Stab him!
[Paranoia turns to look at the scutter... which has hardly moved]
Rimmer: Uh, you haven’t met Stabem, have you? He’s one of our scutters. Stabem, meet Lister’s paranoia; Lister’s paranoia, this is Stabem.
I did not realize there was an epidemic of knife-wielding, people-stabbing robots.
One of the earliest cases of a robot killing somebody was in Japan in the 1980s, albeit with a screwdriver in an auto factory, IIRC.
No one has done a pic of Bishop yet?
I was expecting one too!
Great Red Dwarf episode.
I loved Confidence and Paranoia....
In da Dwarf....
I loved Red Dwarf; I have made many dwarfers...
Big deal. I’ll be impressed when they invent one that can do really dangerous things like run with scissors. /sarc
lol