This case stinks, and the engineering staff owe the public an explanation of how the sensors operate, what they saw, and what software was responsible for the avoidance and braking decisions. At this point I would believe anything, but I am afraid what we will learn is a sorry case of failure to test in realistic situations, and maybe even of running with parts of the system disabled.
I could not believe it when they said the lidar would have detected the pedestrian but was not tied into the decision to apply the brakes. What are they doing running a vehicle with that attitude, and what was the human doing while engineers were making “improvements” to the software? This is exactly when the human should have been highly alert and in full command of the vehicle override systems.
All I can say is this does not look good.
It is reckless. No wonder Uber said after the accident that they anticipated criminal charges. Their careless development and disregard for the public give technology a bad name. It is bad enough that the software released for our computers ships riddled with bugs, but at least that is not life threatening. Taking the same quality approach to cars, where people’s lives are at risk, is unconscionable.
If the safe operation of a driverless vehicle depends on the sensors having full coverage (it does), then there should be a routine, required procedure to test them: a built-in test (BIT) when the car is started, or a software feature that disables auto-drive mode if nothing is detected by a given sensor within a defined amount of time or distance covered.
It will be really, really dumb if it turns out this car had a faulty sensor that wasn’t detected by software.
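The watchdog idea above is simple enough to sketch. Here is a minimal, hypothetical version in Python: every sensor must report within a timeout, or autonomous mode is disallowed. All the names here (SensorWatchdog, heartbeat, and so on) are illustrative, not anything from Uber's actual stack.

```python
import time

class SensorWatchdog:
    """Disallow autonomous mode unless every required sensor is reporting."""

    def __init__(self, required_sensors, timeout_s=0.5, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock
        # Until a sensor has reported at least once, treat it as stale.
        self.last_seen = {name: None for name in required_sensors}

    def heartbeat(self, sensor_name):
        """Call whenever a sensor delivers a frame of data."""
        self.last_seen[sensor_name] = self.clock()

    def stale_sensors(self):
        """Sensors that have never reported, or have gone silent too long."""
        now = self.clock()
        return [name for name, t in self.last_seen.items()
                if t is None or now - t > self.timeout_s]

    def autonomy_allowed(self):
        """The built-in test: full sensor coverage, or no auto-drive."""
        return not self.stale_sensors()
```

Something like this would catch both a sensor that died mid-drive and one that never came up at startup, since a sensor with no heartbeat at all is stale by definition.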
That's legit. Their product is a sensor and maybe some software that makes its data manageable. They're the lookout at the front of the ship. Their job is to yell "iceberg ahead starboard!", and it's somebody else's job to know what to do about it. They're not the whole system, Uber is.