That is another HUGE concern. I’m not sure there are any good answers.
Examples:
If an accident is unavoidable and there is a choice between hitting one person or hitting two, the “obvious” choice would be to hit the one person, minimizing the damage. But what if the one person is in the car and the two people are in the road (a mother holding a child who has suddenly jumped into the street)? Should the car crash itself into a brick wall and kill the passenger, or hit the two people in the street?
Who gets to decide?
If it were me in the car, I might (*MIGHT* - depending on additional circumstances) be willing to sacrifice myself. But if it were my child in the car, I’d want to save my child instead of the irresponsible person who jumped in front of the car.
Good examples.
What if the choices are:
A) Sideswipe another car off the road/cliff
B) Sacrificially crash your own vehicle
C) Attempt a driving maneuver that would avoid both A and B
- maneuver requires skill level 1
- maneuver requires skill level 2
- maneuver requires skill level 3
- maneuver not possible right now due to weather conditions
- maneuver not possible due to the driver’s mental alertness
D) Other car contains two killers fleeing a murder
E) Other car contains a family of five
And on and on. My point is that even if you could quantify and program for all variables, which is impossible, driving still involves moral decisions and value judgements.
Humans make these; machines cannot. Machines can implement them, but they cannot make them.
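To put a finer point on the implement-vs.-make distinction, here is a minimal Python sketch. Every weight, category, and scenario in it is hypothetical, invented purely for illustration; the point is that whatever the code “decides,” the moral judgement was already made by whoever chose the weights:

```python
# All weights, categories, and scenarios below are hypothetical,
# made up purely to illustrate the point.
from dataclasses import dataclass

@dataclass
class Outcome:
    label: str
    occupants_harmed: int    # people inside our own vehicle
    others_harmed: int       # people outside it
    feasible: bool           # e.g. ruled out by weather or driver alertness

# These two numbers ARE the moral decision. Whoever sets them -- owner,
# manufacturer, regulator, or "Lord Google" -- has already made the
# value judgement before the car ever moves.
WEIGHT_OCCUPANT = 1.0
WEIGHT_OTHERS = 1.0

def cost(o: Outcome) -> float:
    if not o.feasible:
        return float("inf")  # an impossible maneuver can never be chosen
    return WEIGHT_OCCUPANT * o.occupants_harmed + WEIGHT_OTHERS * o.others_harmed

def choose(options: list[Outcome]) -> Outcome:
    # The machine's "choice" is just arithmetic over human-supplied weights.
    return min(options, key=cost)

options = [
    Outcome("A: sideswipe the other car off the cliff", 0, 5, True),
    Outcome("B: crash your own vehicle", 1, 0, True),
    Outcome("C: evasive maneuver", 0, 0, False),  # blocked by weather
]
print(choose(options).label)  # -> "B: crash your own vehicle"
```

Change WEIGHT_OCCUPANT to 10 and the same code “chooses” A instead. Nothing about the machine changed; only the human value judgement encoded in it did.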
I don’t want Lord Google making them for me.