Posted on 08/30/2018 7:11:25 AM PDT by budj
The 18th of March, 2018, was the day tech insiders had been dreading. That night, a new moon added almost no light to a poorly lit four-lane road in Tempe, Arizona, as a specially adapted Uber Volvo XC90 detected an object ahead. Part of the modern gold rush to develop self-driving vehicles, the SUV had been driving autonomously, with no input from its human backup driver, for 19 minutes.
(Excerpt) Read more at theguardian.com ...
Too many “cut and paste” coders, not enough actual programmers.
Way back when, I took Fortran, Cobol, C & C++, and Basic (I think)... I did well on the coursework. But it cured me of any aspirations I had of a career in programming.
1. Automation is fine when it is used to replace human functionality. A computer or machine can always be designed to do a predictable function better, faster and/or more precisely than a human.
2. The danger comes when automation is used to replace human judgement. Driving is a perfect example of this. Most of the operational aspects of a vehicle that can be upgraded to replace the human function have already been automated. Now we have an industry operating in a realm where it is trying to replace human judgement, and it isn't working.
Thanks for posting. This is a very interesting article.
That’s why you test. It sounds like they are testing in Production.
Good explanation of the underlying problem. You’re dead on.
In arguments with other developers I’ve tended to attack it from a coding perspective, such as, “how would the algorithm handle driving down a road with no stripes?”
“If a driver in the left lane needs to get into the right lane by merging between 2 autonomous cars, will they kindly let him in by slowing down / splitting the distance? And what if there’s a semi behind the 2nd autonomous car, wherein a human driver would say, ‘tough luck’ and not let the driver change lanes?”
You can come up with countless examples of “judgement” vs. “functionality”.
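Just to make that concrete: here's a rough sketch (in Python, with made-up names; no real AV stack works like this) of what the yield-or-not decision from the merge example might look like as code. Notice how much human context the function has to encode explicitly:

```python
# Hypothetical sketch of the merge "judgement" problem described above.
# None of these names come from any real AV system; they're illustrative.

def should_yield_to_merger(gap_to_merger_m: float,
                           own_speed_mps: float,
                           rear_gap_m: float,
                           rear_is_heavy_truck: bool) -> bool:
    """Decide whether to slow down and let a merging car in.

    A human makes this call with courtesy norms and risk intuition.
    Writing it as rules immediately raises the questions posed above.
    """
    MIN_SAFE_GAP_S = 2.0  # assumed two-second following rule

    # "Functionality" part: is there physically room to split the distance?
    room_to_slow = gap_to_merger_m / max(own_speed_mps, 0.1) > MIN_SAFE_GAP_S

    # "Judgement" part: braking with a semi close behind shifts the risk
    # onto us -- a human driver might just say "tough luck" here.
    semi_too_close = (rear_is_heavy_truck and
                      rear_gap_m / max(own_speed_mps, 0.1) < MIN_SAFE_GAP_S)

    return room_to_slow and not semi_too_close


if __name__ == "__main__":
    # Merging car ahead, plenty of room, no truck behind: yield.
    print(should_yield_to_merger(60.0, 25.0, 80.0, False))  # True
    # Same gap, but a semi is riding our bumper: don't brake.
    print(should_yield_to_merger(60.0, 25.0, 20.0, True))   # False
```

And that's before you add weather, missing stripes, or a cyclist who assumes you'll swerve.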
I love the “garbage in - garbage out” phrase. Applicable to a lot more than just programming. Take education for example...LOL
What proponents of autonomous vehicles don't realize is that the only way to safely replace human judgement is to add such large margins of safety that the system you're automating is actually less efficient in an automated state than it was in its "human" state.
In the example you used, the merging process wouldn't be a problem, because the two autonomous vehicles would be traveling about half a mile apart ... at a speed of about 25 mph. I'm only half kidding about that.
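Half kidding, but the arithmetic behind that margin is simple enough. Here's a back-of-the-envelope stopping-distance calculation (standard kinematics; the reaction delay and braking figures are assumptions for illustration, not numbers from any real system):

```python
# Back-of-the-envelope stopping distance: reaction distance + braking distance.
# The perception delay and deceleration values are illustrative assumptions.

def stopping_distance_m(speed_mps: float,
                        perception_delay_s: float = 1.0,
                        decel_mps2: float = 7.0) -> float:
    reaction = speed_mps * perception_delay_s      # distance covered before braking starts
    braking = speed_mps ** 2 / (2.0 * decel_mps2)  # v^2 / (2a) from kinematics
    return reaction + braking


for mph in (25, 45, 65):
    mps = mph * 0.44704  # mph -> m/s
    print(f"{mph} mph: ~{stopping_distance_m(mps):.0f} m to stop")
```

Pad every sensor uncertainty on top of numbers like those and you end up with exactly the half-mile-apart-at-25-mph caricature.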
I’ll throw in an extra wrinkle I’ve seen in my IT years, some of which were in industries that are heavily regulated, much as I imagine the DOT regulates automated driving.
In all industries there’s a tendency not to change what works. That’s practical and is often a good strategy. However, in highly regulated environments you often can’t change what’s already working without going through a lot of red tape.

So, say you’re adding a new feature in version 20.0 of the software, and you realize there are some scenarios, though very rare, where the new feature’s algorithm would interfere with one of the old features (often by slightly changing the assumptions the old algorithm is based on). What do you do? Answer: you notify your supervisor/project manager and suggest the changes that need to be made. Sometimes they forward it up to the people really in charge (i.e., where the funding comes from, often the gubmint). Response: they don’t want to be responsible for authorizing a change to an algorithm that’s been working for years. They’d rather handle the fallout later, when the old algorithm breaks precisely because it was never altered to handle the new scenario.
Over time (one week, one month, one year...), self-driving cars will reduce road fatalities significantly.
I agree with you, to a point. The article implies, correctly, that part of the problem is that code is, pretty much, writing or accessing other code on the fly...
When you do that, you’d better have a 100% PERFECT plan in place for that to happen, one that takes in ALL contingencies - including a human on a bike that’s moving, not stationary, and doesn’t look like a human on a bike (shopping bags), and figures your car is being driven by a human who will swerve to miss her, and doesn’t swerve herself... whew!
One of the posters above lays it out well: It’s human judgement they are trying to replicate right now, and that ain’t easy.
I’d argue that the only way to safely replace human judgement is for -every- vehicle on the road to be autonomous, and for the road to be completely isolated, i.e., free from all human activity. Swarm-based algorithms could then operate solely on distance-to-collision, with no regard to stripes, signs, construction, etc. The movie Minority Report showed something akin to this.
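For what it’s worth, the pure distance-to-collision idea really is compact to state. Here’s a toy pairwise time-to-collision check of the kind such a swarm might run (purely illustrative, constant-velocity assumption, not from any real system):

```python
import math

# Toy 2-D time-to-collision (TTC) check of the sort a "swarm" controller
# might run pairwise: no stripes, signs, or construction zones, just
# positions and velocities, as suggested above.

def time_to_collision(p1, v1, p2, v2, radius=2.0):
    """Return seconds until vehicles 1 and 2 close within `radius` meters,
    assuming constant velocities, or math.inf if they never do."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]   # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]   # relative velocity
    a = vx * vx + vy * vy
    b = 2.0 * (rx * vx + ry * vy)
    c = rx * rx + ry * ry - radius * radius
    if c <= 0.0:
        return 0.0                           # already within the radius
    if a == 0.0:
        return math.inf                      # same velocity, never closing
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return math.inf                      # paths never get that close
    t = (-b - math.sqrt(disc)) / (2.0 * a)   # earlier root of |r + vt| = radius
    return t if t >= 0.0 else math.inf       # collisions in the past don't count


# Two cars approaching an intersection at right angles, 50 m out, 15 m/s each.
print(time_to_collision((0, -50), (0, 15), (-50, 0), (15, 0)))  # ~3.2 s
```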
I heard a trucker say:
Let’s see a damn computer look at the weather forecast going over a pass and pull over and put the chains on.
It’s difficult to believe that autonomous vehicles will function well at night in a heavy snowstorm.
Sometimes the State Patrol is there to insist.
It's called the Internet.
It gives a straightforward description of the human and computer actions that led to the death of Elaine Herzberg (though I would add she foolishly walked her bike in a dark section of the road. Did she not see the headlights coming at her? Did she believe she had enough time to cross the street? Did she believe the car would see her and slow down or swerve to miss her? I have no idea what she was thinking, as her actions showed either a complete lack of situational awareness or a disregard for her own safety). Computers can't predict irrational human behavior. Neither can people.
I would love a car with AUGMENTED driving and safety features BUT I would never want one with total computer control. I want a wheel and brakes under my control.
The driver was also a fool for not paying attention. As I understand it, the car was in testing mode and he/she was supposed to be paying attention, not looking at his/her phone.
This incident shows that computers cannot (as yet, if ever) be programmed to anticipate every situation.
Throw in poor judgment by people and unforeseen situations and human judgment becomes critical.
Exactly. In other words, autonomous cars can operate safely if they start to function a lot like trains.
There's a simple reason for this. Each successive automated feature makes it easier for the motorist to drive the car without paying attention. This is exactly what happened with the Uber crash in Arizona.