That has more to do with unions than technology.
The problem with the author's premise is that he holds "fail-proof" and "flawless" up as some kind of standard. The fact is, people are not fail-proof, flawless drivers. The automation need only exceed fallible humans, not meet some perfection standard. Tens of thousands of needless deaths could be avoided annually, never mind hundreds of thousands of injuries.
There will still be accidents, just fewer of them, because careless drivers will be taken out of the equation.
The Washington D.C. Metro subways are completely controlled by computers except for the doors. They have human train operators who open and close the doors and who are present in case of an emergency or technical issue. Otherwise, the trains operate under computer control.
I believe the BART system in the SF bay area works in a similar manner.
Robo-cars are never going to happen. A very highly advanced cruise control may be possible for highway driving.
Liability issues make this untrue, IMO. If a driver makes a mistake, they are held responsible as an individual (or as a company, for a fleet driver). If a self-piloting car screws up, the pockets of the "driver" (the car manufacturer) will be much, much deeper, and therefore the liability exposure much, much higher.
Knowing what I know about automotive software, I am really concerned about the probability of significant issues.