I agree this remains in the future. But notice that not so long ago, many thought computers couldn’t play chess above the level of top grandmasters, and would never beat the world champion. So the world was shocked when IBM’s Deep Blue beat Garry Kasparov in 1997. Today, roughly 20 years later, anyone can download a chess engine to their smartphone that can destroy any human player.
Notice too that even more recently, it was predicted that computers would *never* learn to play Go at the level of world champions. Needless to say, AlphaGo has since beaten the world’s top players.
Compared to playing chess or Go, driving a car safely wouldn’t seem that difficult.
But chess and Go are played by well-defined sets of rules.
I am not certain that driving has such a well-defined rule set. Between human behavior and random events, a car pretty much needs a reactive intelligence to drive safely. Do we really want to make intelligent cars?
I’ve read that self-driving cars need human intervention every 3,000 to 90,000 miles, depending on how well they are programmed. How likely is it that a human being will be paying attention at the exact moment he needs to intervene to prevent an accident? And even if he is paying attention at that moment, will he have the skill to avoid the accident, when he may only retain rudimentary driving skills? I can also see car manufacturers getting hit with huge liability lawsuits, since accidents would no longer be considered the driver’s fault, but the fault of the manufacturer.
As I said before, I do not think self-driving cars are ready for prime-time.
Another issue is societal. As smart devices do more and more of what humans used to be trained to do, humans just keep getting more stupid. The brain needs intellectual exercise just as much as muscles need physical exercise.
What is Go?
Until they can make a simple laptop not crash, I’m not trusting a car not to crash.