Prosperity is just around the corner, and self-driving cars are only months away (and have been since 2017).
Snake oil.
2018: Elon Musk says Tesla will launch its cross-country road trip in a self-driving car in 3 to 6 months
2019: Old Promises Broken, Musk Offers New Pledges on Self-Driving
2020: Elon Musk Says Tesla Robotaxis Will Still Be Ready in 2020
2020: Tesla to introduce Full Self Driving subscription by end 2020, robotaxis by 2021
2022: Tesla’s 1 million robotaxis by end of the year becomes ‘1 million people in FSD Beta’
2023: Tesla manages to weasel its way out of the Full Self-Driving class action
Aircraft fly themselves 99% of the time. Granted, cruise flight is actually an easier problem than a car navigating complex streets, where there are far more use cases to handle.
FSD is inevitable. Ultimately, road deaths will go down, since a large share of incidents are due to drunk, distracted, or fatigued driving, three things computers don’t experience. Computers also have a better view of everything going on around the vehicle.
For me, the interesting part is the liability of using AI. When an incident occurs and loss of life happens, how do you explain why the system failed? There are no safety standards that define how to know whether an AI model has been trained appropriately. If the defense is “we need to train the AI more for similar situations”, how will a court determine damages? To my understanding, that usually comes down to identifying whether there was “ignorance, incompetence, or laziness”. Which is it?
It’s taken longer than people thought, but the next several years will probably bring notable changes, especially for L3 (conditional automation).