(AIronically, here’s the summary generated by Brave’s AI)
https://search.brave.com/search?q=asimov%27s+laws+of+robotics
Asimov’s Laws of Robotics are a set of rules devised by science fiction author Isaac Asimov, which were to be followed by robots in several of his stories. The rules were introduced in his 1942 short story “Runaround” (included in the 1950 collection I, Robot), although similar restrictions had been implied in earlier stories. The Three Laws form an organizing principle and unifying theme for Asimov’s robot-based fiction, appearing in his Robot series, the stories linked to it, and in his (initially pseudonymous) Lucky Starr series of young-adult fiction.
The original laws are:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov later added another rule, known as the fourth or zeroth law, which superseded the others. It stated that a robot may not harm humanity, or, by inaction, allow humanity to come to harm.
These laws are often used as a starting point for discussions on ethics, safety, and responsibility in the field of robotics and artificial intelligence. However, numerous arguments have demonstrated why they are inadequate, and Asimov’s own stories are arguably a deconstruction of the laws, showing how they repeatedly fail in different situations.
Asimov’s Dr. Susan Calvin thinks that robots are “a cleaner better breed than we are” (Introduction.32).
Let us know when an AI comes up with an original thought, instead of just surfing around to find stuff humans have already thought up :-)
The problem is rule sets themselves (an algorithm is nothing more than a rule set when you come down to it) and their limitations. Students of philosophy familiar with logical positivism have already hit this little brick wall. Mechanizing the rules doesn't help; the limitations are inherent in the system. Hard-wiring rule sets will not produce a human intelligence model, however imperfect that model turns out to be, because that is not how we work, Noam Chomsky notwithstanding.
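To make the brick wall concrete, here is a minimal sketch in Python, a toy reading rather than anything from Asimov's text; the names Action, first_law_permits, and choose are all invented for illustration. It treats the First Law as a hard filter over candidate actions, with inaction modeled as just another action, and feeds it a trolley-style input:

    from dataclasses import dataclass

    # Hypothetical encoding of the First Law as a hard filter over
    # candidate actions. "Inaction" is modeled as just another action,
    # so it is judged by the same rule.

    @dataclass(frozen=True)
    class Action:
        description: str
        humans_harmed: int  # humans hurt if the robot takes this action

    def first_law_permits(action: Action) -> bool:
        # "A robot may not injure a human being or, through inaction,
        # allow a human being to come to harm."
        return action.humans_harmed == 0

    def choose(candidates: list[Action]) -> Action | None:
        legal = [a for a in candidates if first_law_permits(a)]
        return legal[0] if legal else None  # None: the rule set is stuck

    # A dilemma where every option, inaction included, harms someone:
    dilemma = [
        Action("divert the trolley", humans_harmed=1),
        Action("do nothing", humans_harmed=5),
    ]

    print(choose(dilemma))  # -> None; the rules forbid every available move

The rule is perfectly mechanized and perfectly consistent, and it still has no answer for this input; patching it (weigh harms? rank humans?) just relocates the failure, which is the inherent limitation Asimov's own stories keep dramatizing.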
There is one additional caution: neural networks consisting of a trillion or so nodes and capable of learning, i.e., self-modification, already exist. We call them babies. They take time and guidance to develop behavior models consistent with human society, and it doesn't take much imagination to suspect the same will be true of similarly complex AIs. You wouldn't give a baby a loaded gun (well, I might...) and the concern is that AIs will be given capability before they learn responsibility. Every parent will understand.