Posted on 06/30/2016 9:18:54 PM PDT by BenLurkin
“The car expected ALL tractor-trailers to be painted white?”
What the author of this article should have explained is that the software mistakenly concluded that the cameras were overexposed by sunlight bombarding the camera's sensors. The data you'd receive in that situation is similar to viewing "all white".
Also, if the car's cameras were too close, the system wouldn't be able to perceive depth ... It also wouldn't be able to perceive depth if there wasn't much texture on the truck (which sounds like the case here, since the sides were plain white).
What bothers me is that this "blind" condition should have automatically triggered a vehicle slowdown and alerted the driver to take over (duh! Really? ... :-) ). The exact condition I described is one that should be tested to hell and back and then some ... It's a well-known corner case test.
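To make the idea concrete, here's a minimal sketch of the kind of "blind camera" watchdog I'm talking about. This is illustrative Python, not Tesla's actual code; the function names and thresholds are my own assumptions and would need calibration against real camera hardware.

```python
import numpy as np

# Hypothetical thresholds, for illustration only. Real values would
# come from calibration against the actual sensor and optics.
OVEREXPOSED_MEAN = 240      # near-saturated 8-bit luminance
LOW_TEXTURE_STDDEV = 4.0    # almost no local contrast

def camera_is_blind(frame: np.ndarray) -> bool:
    """Return True if a grayscale frame looks washed out or textureless.

    frame: 2-D uint8 array of pixel luminance values.
    """
    mean = frame.mean()
    stddev = frame.std()
    # A saturated sensor (sun glare) and a flat, featureless surface
    # (e.g. the plain white side of a trailer) both leave the vision
    # system without usable edge or depth information.
    return mean >= OVEREXPOSED_MEAN or stddev <= LOW_TEXTURE_STDDEV

def watchdog(frame: np.ndarray) -> str:
    # The corner case described above: if the cameras go "blind",
    # slow the vehicle and demand a driver takeover.
    if camera_is_blind(frame):
        return "SLOW DOWN AND ALERT DRIVER"
    return "OK"
```

The point isn't the exact numbers; it's that a saturated or featureless frame is cheap to detect, and detecting it should force a slowdown and a takeover alert.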
I develop image processing applications for a living. A project I started to work on in 2010 is just now being deployed. The testing it went through was quite rigorous. Even now, the product is in limited release and won’t be in general release for a couple more years. We didn’t have a 10th of the variables that Tesla had when developing automated driving.
I don't care how brilliant their engineers are ... there is no way they've done enough real-world testing if the information in this article is true and a crash occurs under these conditions. Testing is *everything* when it comes to autonomous vehicles. Sadly, that's where corners are usually cut these days.
Car passed under the trailer and the top of the car was sheared off, along with some of the occupant. Didn't see that coming, did he?
I already do. No change for me.
Car went into a trailer. Head gone.
Okay, technologists ... this wonderful visionary pipe dream of automated traffic fleets zipping around a dense urban area, or even an area without decent roads, gets its first high-profile lawsuit (one that Tesla the company is liable for) to bring everyone back to Earth as to why this is not going to happen for a very long time (without completely rebuilding the road system to support it).
They are doing testing. In the real world as we speak...
Every second these guinea pigs are on the road, the AI is improved via “deep learning”. The machines simply observe thousands of human drivers and learn how to drive collectively.
Of course they give the caveat that the system is new and you should keep your hands on the wheel (due to rare new occurrences like this). Then, when the driver grabs the wheel, all the cars learn how to handle that new situation.
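In rough outline, that feedback loop could look something like the sketch below. Everything here, the class names and the telemetry fields, is hypothetical and just meant to show the shape of the idea: a driver takeover becomes a labeled training example for the whole fleet.

```python
from dataclasses import dataclass, field
import time

# A toy sketch of fleet learning. All names are hypothetical,
# not Tesla's actual telemetry format.

@dataclass
class TakeoverEvent:
    timestamp: float
    sensor_snapshot: dict   # camera/radar readings at the moment of takeover
    driver_action: str      # what the human did, e.g. "braked", "steered left"

@dataclass
class FleetLearner:
    training_queue: list = field(default_factory=list)

    def on_driver_takeover(self, snapshot: dict, action: str) -> None:
        # Every takeover is a labeled example: "in this situation,
        # a human chose to do X." Events uploaded from the whole
        # fleet become training data for the next model update.
        self.training_queue.append(
            TakeoverEvent(time.time(), snapshot, action))

learner = FleetLearner()
learner.on_driver_takeover(
    {"radar_range_m": 42.0, "camera_confidence": 0.11},
    "braked")
print(len(learner.training_queue))  # 1 queued training example
```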
I was given a preview of this type of software about a year ago and even back then it felt incredible.
IMO, the biggest issue now is people becoming complacent because the system seems so amazing in 99.99% of driving situations. Even if you're watching and holding the wheel, your mind relaxes and you become unprepared to take over for that 0.01% event. I felt that "complacent" feeling even on the first ride.
This guy might have logged thousands of miles without a single error and eventually gotten so complacent that he watched movies and even fell asleep.
Not unlike what many a fine citizen said in 1905 about horseless carriages. And with 30,000 dead every year on the highway, who knows? Maybe they were right.
later
I have coined a new term to describe why I don't think self-driving cars are technologically feasible or desirable to any safe, long-time, high-performance independent driver who seeks personal autonomy and the freedom to travel at will anywhere they desire. That term is: Technological Hubris. It comes from my observation that most scientists and new engineers are of a decidedly left-wing philosophical bent. They think, as part of their philosophy, that people and technology are perfectible.
I have lived in northern Michigan for just shy of forty years, and I can say with the utmost confidence that engineers can't program a vehicle to anticipate the unpredictability of other people, vehicles, or even animals. One time, on my way to Boyne City from Charlevoix, a doe leapt clear over the hood of my girlfriend's Corvette, leaving only a superficial scratch on the paint of the left fender. How do you program an inanimate autonomous vehicle, with its numerous sensors, to compute that uncertainty/unpredictability factor?
I am hardly a technophobe.
I say never, as the technology currently stands, and if the car can decide I am to be sacrificed over others, then damn straight, never.
“Didn't see that coming, did he?”
Not for very long.
To establish a little credibility in my discussing the possible future of self-driving autonomous vehicles, let me say that I’ve been a marine mechanic and an automotive restoration specialist for over 35 years. I’ve built and restored numerous vehicles and some boats, and I can’t imagine surrendering my control or possible life to some egg-head like Elon Musk, who thinks we may be living in a computer-generated virtual world.
Poorest writing EVER!
Here is a good collection of Tesla Autopilot stories over at Ars Technica, of the sort that liberal lefty geeks like to share about technology. I generally like their content, but they are massively skewed leftist.
http://arstechnica.com/search/?ie=UTF-8&q=tesla+auto+pilot
A radio report said the driver was a former SEAL.
A lot of us have been pointing out that the deep-pocket liability for driverless cars is going to put this idea out of business. Here is a prime example.
First of all, this wasn’t a “self driving” car.
All these driver assistance features (e.g. autonomous braking) that are now available are no substitute for an alert driver.
I’m sure if the guy actually had read his owner’s manual, it would have told him exactly that. I know that’s expecting quite a lot.
Sorry the guy is dead, but he clearly wasn’t paying attention to what he was doing. Too bad he was too busy with other stuff to see the truck that the car’s sensors missed.
And I think that stuff about the white side of the truck being indistinguishable from the sky is BS. The autonomous braking systems that I am familiar with rely on radar, not light. More likely, the Tesla was going too fast for the systems to work well enough to save this moron's life.
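For what it's worth, here's roughly how speed eats the margin in a radar-based braking check. This is a bare-bones illustration in Python with a made-up threshold, not any manufacturer's actual logic:

```python
# Illustrative only: a bare-bones time-to-collision (TTC) check of the
# kind a radar-based automatic-braking system might run. The threshold
# is a made-up number, not any manufacturer's spec.

BRAKE_TTC_SECONDS = 2.0

def should_brake(range_m: float, closing_speed_mps: float) -> bool:
    """range_m: radar distance to the obstacle.
    closing_speed_mps: how fast the gap is shrinking (positive = closing).
    """
    if closing_speed_mps <= 0:
        return False  # not closing on anything
    time_to_collision = range_m / closing_speed_mps
    # The same radar range leaves far less time at highway speed:
    # 40 m at 13 m/s is about 3 s, but 40 m at 33 m/s is only ~1.2 s.
    return time_to_collision < BRAKE_TTC_SECONDS

print(should_brake(40.0, 13.0))  # False: about 3.1 s of margin
print(should_brake(40.0, 33.0))  # True: about 1.2 s, brake now
```

Whatever the sensor, the physics is the same: double the speed and you halve the time the system has to recognize the obstacle and act.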
“Not unlike what many a fine citizen said in 1905 about horseless carriages. And with 30,000 dead every year on the highway, who knows? Maybe they were right.”
Don’t underestimate the carnage caused by horses back then...much worse than today’s cars.
[Starman is driving the car and speeds through a light that has just turned red, causing crashes among the other motorists]
Starman: Okay?
Jenny Hayden: Okay? Are you crazy? You almost got us killed! You said you watched me, you said you knew the rules!
Starman: I do know the rules.
Jenny Hayden: Oh, for your information pal, that was a *yellow* light back there!
Starman: I watched you very carefully. Red light stop, green light go, yellow light go very fast.