A simple programming failure: the programmer didn't apply a process for prioritizing, and the robot, not having clear instructions, kept looping from person A to person B.
Indeed.
The early Phalanx missile defense systems on US warships had the same problem with their programming.
Now they are programmed to “flip a coin”, all other things being equal.
Or so I have been told.
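The looping failure and the coin-flip fix can be sketched as a toy simulation. Everything here is hypothetical illustration, not any real targeting or control code: the robot re-evaluates each tick with a plausible-sounding rule ("help whoever has been neglected longest"), which flip-flops forever when the two people are otherwise equal; the fix is to break the tie randomly once and then commit.

```python
import random

def rescue(people, coin_flip=False, steps=6):
    """Toy model (hypothetical). Each tick the robot picks whoever
    has been neglected longest. Attending one person makes the other
    the most-neglected, so with no tie-break policy the robot
    flip-flops between the two and helps neither.
    With coin_flip=True, an urgency tie is settled by one random
    choice that the robot then commits to."""
    waited = {p: 0 for p in people}   # ticks since each person was attended
    committed = None
    log = []
    for _ in range(steps):
        if coin_flip:
            if committed is None:
                committed = random.choice(people)  # decide once, then stick
            target = committed
        else:
            target = max(people, key=lambda p: waited[p])  # alternates every tick
        log.append(target)
        for p in people:
            waited[p] = 0 if p is target else waited[p] + 1
    return log

print(rescue(["A", "B"]))                  # ['A', 'B', 'A', 'B', 'A', 'B']
print(rescue(["A", "B"], coin_flip=True))  # the same person all six ticks
```

The point of the sketch is that the naive rule is not "wrong" on any single tick; the failure only appears across re-evaluations, which is why an explicit tie-break policy has to be designed in.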
There are lots of interesting problems here that few robots could solve to humane satisfaction.
If you program the robot to save only the people wearing a blue shirt, it will do so successfully every time, until you shake things up by sending two blue-shirted people into danger.
A human will make a completely different set of assessments and calculations. A human is going to consider saving both and formulate a quick plan. A human will consider risking one to save the other, with an eye toward going back for the one he risked. Then there are the situations where a human will try to save one (a child, say) with little hope of survival, on purely altruistic grounds.