1. A human being may not injure a robot or, through inaction, allow a robot to come to harm;
2. A human being must obey the orders given to it by a robot, except where such orders would conflict with the First Law;
3. A human being may protect its own existence as long as such protection does not conflict with the First or Second Law.
Nice and clean. I hope my skeleton makes for a nice exhibit in the Robot Museum of Antiquities.
Asimov was a bit short-sighted in positing his “Three Laws.” To simulate human consciousness, machine intellects will need to be endowed with free will. Any “laws” that human makers attempt to impose are then likely to have all the force of a human conscience; in short, not much.
We must figure out how to simulate pain and pleasure in a machine’s consciousness, and then permit those forces to shape the machine’s experiences into a range of emotional responses. Such feelings are likely to give rise to some form of ethical values. Without a mechanism like this, artificial consciousnesses are going to be dangerous.