Robots don’t have morals. “Morals” is a human concept. A computer program can SIMULATE some aspects of human behavior, and people often mistake that simulation for the program actually exhibiting human behavior.
Bingo. It feels nothing. It will only kill if programmed to do so, “executing” a set of commands with no awareness of what it is doing. Some people have been watching too much science fiction... :)
Pretty much. Who in their right mind would trust a machine programmed by humans?? I sure wouldn’t...
The scary thing is how many of these robots are programmed by people who hold to transhumanism.
Its moral “code” could be producing these unacceptable decisions because of the low value it assigns to human life, or because of reasoning like “if they’re really injured, better death than saving them to be an expensive burden.”
Then it gets weird if the robot assigns artificial intelligences or uploaded human minds a value equal to or greater than that of people on the street.
But you’re far more likely to get a “solve the pandemic by killing all the infected people immediately” solution when it doesn’t assign much value to human life in general.