Free Republic

Franken-algorithms: the deadly consequences of unpredictable code
The Guardian ^ | 8-30-2018 | Andrew Smith

Posted on 08/30/2018 7:11:25 AM PDT by budj

The 18th of March, 2018, was the day tech insiders had been dreading. That night, a new moon added almost no light to a poorly lit four-lane road in Tempe, Arizona, as a specially adapted Uber Volvo XC90 detected an object ahead. Part of the modern gold rush to develop self-driving vehicles, the SUV had been driving autonomously, with no input from its human backup driver, for 19 minutes.

(Excerpt) Read more at theguardian.com ...


TOPICS: Computers/Internet; Conspiracy; Politics; Society
KEYWORDS: ai; algorithms
Very good article about computer-based algorithms, worth the read. I used to do this kind of work for manufacturing, and I agree with everything in the article.
1 posted on 08/30/2018 7:11:25 AM PDT by budj

To: budj

Too many “cut and paste” coders, not enough actual programmers.


2 posted on 08/30/2018 7:20:19 AM PDT by GingisK

To: budj

Way back when, I took Fortran, COBOL, C and C++, and BASIC (I think)... I did well on the coursework, but it cured me of any aspirations I had of a career in programming.


3 posted on 08/30/2018 7:29:09 AM PDT by Clutch Martin (The trouble ain't that there is too many fools, but that the lightning ain't distributed right.)

To: budj
Interesting article. I have a professional interest in autonomous vehicle development, and I've been warning about the risks associated with them for years. I've determined that the core of the problem is really simple:

1. Automation is fine when it is used to replace human functionality. A computer or machine can always be designed to do a predictable function better, faster and/or more precisely than a human.

2. The danger comes when automation is used to replace human judgement. Driving is a perfect example of this. Most of the operational aspects of a vehicle that can be automated to replace a human function already have been. Now the industry is operating in a realm where it is trying to replace human judgement, and it isn't working (a quick sketch of the difference is below).
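
The sketch: everything in it is invented for illustration (the function names, the gain, and the two-second courtesy threshold are mine, not taken from any real vehicle software). The first function is pure functionality with a well-defined right answer; the second has to freeze a judgement call into an arbitrary constant.

# Toy illustration only -- all names and numbers are invented.

# Point 1: a predictable function. The "right answer" is well defined,
# so a controller can do this better and more consistently than a human.
def cruise_control_step(current_speed_mph: float, target_speed_mph: float) -> float:
    """Return a throttle adjustment proportional to the speed error."""
    gain = 0.05  # tuning constant, picked arbitrarily for this sketch
    return gain * (target_speed_mph - current_speed_mph)

# Point 2: a judgement call. Nothing tells you the "correct" value of
# courtesy_threshold_s -- whatever number gets picked is a policy decision
# frozen into code, not an engineering fact.
def should_yield_to_merger(gap_to_merger_s: float, courtesy_threshold_s: float = 2.0) -> bool:
    """Decide whether to slow down and let a merging car in."""
    return gap_to_merger_s < courtesy_threshold_s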

4 posted on 08/30/2018 7:31:20 AM PDT by Alberta's Child ("The Russians escaped while we weren't watching them ... like Russians will.")

To: budj

Thanks for posting. This is a very interesting article.


5 posted on 08/30/2018 7:38:38 AM PDT by khelus

To: budj

That’s why you test. It sounds like they are testing in Production.


6 posted on 08/30/2018 7:40:27 AM PDT by AppyPappy (Don't mistake your dorm political discussions with the desires of the nation)

To: Alberta's Child

Good explanation of the underlying problem. You’re dead on.

In arguments with other developers I’ve tended to attack it from a coding perspective, such as, “how would the algorithm handle driving down a road with no stripes?”

“If a driver in the left lane needs to get into the right lane by merging between two autonomous cars, will they kindly let him in by slowing down and splitting the distance? And what if there’s a semi behind the second autonomous car, a situation where a human driver would say, ‘tough luck’ and not let the driver change lanes?”

You can come up with countless examples of “judgement” vs. “functionality”.
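
To put that second scenario in code terms, here is a rough sketch of what a merge-courtesy rule might look like (the class, the function, and the 3-second threshold are all invented for illustration, not taken from any real driving stack). The point is that what a human calls judgement ends up as hard-coded thresholds somebody had to pick:

# Invented example -- not from any real autonomous-driving codebase.
from dataclasses import dataclass

@dataclass
class Situation:
    gap_ahead_s: float      # time gap to the car ahead, in seconds
    follower_is_semi: bool  # is there a semi close behind us?

def open_gap_for_merger(s: Situation, min_safe_gap_s: float = 3.0) -> bool:
    """Decide whether to slow down and split the distance for a merging car.

    A human weighs the semi on their bumper, the merger's body language,
    the weather, and a dozen other cues. Code has to reduce all of that
    to comparisons against thresholds that someone picked in a meeting.
    """
    if s.follower_is_semi:
        # Braking hard with a semi close behind risks getting rear-ended,
        # so refuse -- the coded version of "tough luck."
        return False
    return s.gap_ahead_s >= min_safe_gap_s

print(open_gap_for_merger(Situation(gap_ahead_s=4.0, follower_is_semi=False)))  # True
print(open_gap_for_merger(Situation(gap_ahead_s=4.0, follower_is_semi=True)))   # False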


7 posted on 08/30/2018 7:41:32 AM PDT by TheZMan (I am a secessionist.)

To: budj

I love the “garbage in, garbage out” phrase. It applies to a lot more than just programming. Take education, for example... LOL


8 posted on 08/30/2018 7:42:00 AM PDT by huldah1776 ( Vote Pro-life! Allow God to bless America before He avenges the death of the innocent.)

To: TheZMan
Thanks! You're absolutely right, too.

What proponents of autonomous vehicles don't realize is that the only way to safely replace human judgement is to add such large margins of safety that the system you're automating is actually less efficient in an automated state than it was in its "human" state.

In the example you used, the merging process wouldn't be a problem, because the two autonomous vehicles would be traveling about half a mile apart ... at a speed of about 25 mph. I'm only half kidding about that.
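
Here is a back-of-the-envelope sketch of why the margins balloon. The formula is the standard reaction-plus-braking stopping distance; the reaction times and deceleration figures are numbers I picked purely for illustration. A "trust nothing" automated margin at 25 mph comes out several times larger than a human-style rule of thumb, which is the efficiency hit I mean, even if it isn't literally half a mile:

# Back-of-the-envelope only -- the assumptions are mine, chosen to illustrate.
MPH_TO_MS = 0.44704

def following_gap_m(speed_mph: float, reaction_s: float, decel_ms2: float,
                    assume_lead_stops_instantly: bool) -> float:
    """Gap needed to stop without hitting the vehicle ahead."""
    v = speed_mph * MPH_TO_MS
    braking = v * v / (2.0 * decel_ms2)   # distance covered while braking
    gap = v * reaction_s + braking        # reaction distance + braking distance
    if not assume_lead_stops_instantly:
        # Give the lead vehicle credit for most of its own braking distance.
        gap -= 0.8 * braking
    return gap

# Human-ish assumptions: 1 s reaction, firm braking, lead car also brakes.
print(round(following_gap_m(25, reaction_s=1.0, decel_ms2=7.0,
                            assume_lead_stops_instantly=False)))  # ~13 m
# Worst-case automated margin: slow pipeline, gentle braking, and assume the
# lead vehicle can stop dead. Roughly three times the gap at the same speed.
print(round(following_gap_m(25, reaction_s=2.0, decel_ms2=3.0,
                            assume_lead_stops_instantly=True)))   # ~43 m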

9 posted on 08/30/2018 7:48:36 AM PDT by Alberta's Child ("The Russians escaped while we weren't watching them ... like Russians will.")

To: budj

I’ll throw in an extra wrinkle I’ve seen in my IT years, some of which were spent in heavily regulated industries, which is how I imagine the DOT will treat automated driving.

In all industries there’s a tendency not to change what works. That’s practical and is often a good strategy. However, in highly regulated environments you often can’t change what’s already working unless you go through a lot of red tape.

So say you’re adding a new feature in version 20.0 of the software, and you realize there are some scenarios, though very rare, where the new feature’s algorithm would interfere with one of the old features (often by slightly changing the assumptions the old algorithm is based on). What do you do? Answer: you notify your supervisor or project manager and suggest the changes that need to be made. Sometimes they forward it up to the people really in charge (i.e., where the funding comes from, often the gubmint).

Response: they don’t want to be responsible for authorizing a change to an algorithm that’s been working for years. They’d rather handle the fallout when it breaks because it was never altered to handle the new scenario.


10 posted on 08/30/2018 7:49:20 AM PDT by Tell It Right (Saturday Bama will scatter the Cards)

To: budj

Over time (one week, one month, one year ...), self-driving cars will reduce road fatalities significantly.


11 posted on 08/30/2018 7:53:56 AM PDT by Drango (A liberal's compassion is limited only by the size of someone else's wallet.)

To: GingisK

I agree with you, to a point. The article implies, correctly, that part of the problem is that code is, pretty much, writing or accessing other code on the fly...

When you do that, you had better have a 100% PERFECT plan in place for it and account for ALL contingencies, including a human on a bike who is moving, not stationary, doesn’t look like a human on a bike (shopping bags), figures your car is being driven by a human who will swerve to miss her, and doesn’t swerve herself... whew!

One of the posters above lays it out well: It’s human judgement they are trying to replicate right now, and that ain’t easy.


12 posted on 08/30/2018 7:54:22 AM PDT by HeadOn (Time to prosecute Hillary. Please. Pretty Please...)

To: Alberta's Child

I’d argue that the only way to safely replace human judgement is for -every- vehicle on the road to be autonomous, and for the road to be completely isolated, i.e., free from all human activity. Think swarm-based algorithms that could operate solely on distance-to-collision, with no regard for stripes, signs, construction, etc. The movie Minority Report showed something akin to this.
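
For what it's worth, a bare-bones version of that distance-to-collision idea might look like the sketch below (the function names and the 4-second threshold are mine, purely for illustration, not from any real swarm controller). Note that it uses nothing but range and closing rate, no stripes or signs, which is exactly why it only works if everything on the road obeys the same rule:

# Minimal sketch of a "distance-to-collision only" rule -- invented, not
# taken from any real swarm or vehicle controller.
def time_to_collision_s(gap_m: float, closing_speed_ms: float) -> float:
    """Seconds until contact if neither party changes speed."""
    if closing_speed_ms <= 0.0:
        return float("inf")  # not closing on anything, so no collision pending
    return gap_m / closing_speed_ms

def speed_command_ms(current_ms: float, gap_m: float, closing_speed_ms: float,
                     min_ttc_s: float = 4.0) -> float:
    """Shed speed whenever time-to-collision drops below the threshold."""
    if time_to_collision_s(gap_m, closing_speed_ms) < min_ttc_s:
        return max(0.0, current_ms - 1.0)  # drop 1 m/s per control tick
    return current_ms

# 30 m gap, closing at 10 m/s -> TTC of 3 s, so the car backs off.
print(speed_command_ms(current_ms=25.0, gap_m=30.0, closing_speed_ms=10.0))  # 24.0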


13 posted on 08/30/2018 7:58:53 AM PDT by TheZMan (I am a secessionist.)

To: Alberta's Child

I heard a trucker say,
“Let’s see a damn computer look at the weather forecast going over a pass, pull over, and put chains on.”


14 posted on 08/30/2018 8:19:22 AM PDT by Truthoverpower (The guvmint you get is the Trump winning express !)

To: Drango

It’s difficult to believe that autonomous vehicles will function well at night in a heavy snowstorm.


15 posted on 08/30/2018 8:21:44 AM PDT by Paladin2

To: Truthoverpower

Sometimes the State Patrol is there to insist.


16 posted on 08/30/2018 8:22:51 AM PDT by Paladin2

To: budj
"code piled on code creates ‘a universe no one fully understands’"

It's called the Internet.

17 posted on 08/30/2018 8:25:06 AM PDT by Paladin2

To: Alberta's Child
This is a very good, fact-based, objective article.

It gives a straightforward description of the human and computer actions that led to the death of Elaine Herzberg (though I would add that she foolishly walked her bike in a dark section of the road. Did she not see the headlights coming at her? Did she believe she had enough time to cross the street? Did she believe the car would see her and slow down or swerve to miss her? I have no idea what she was thinking; her actions showed either a complete lack of situational awareness or a complete disregard for the risk). Computers can't predict irrational human behavior. Neither can people.

I would love a car with AUGMENTED driving and safety features, BUT I would never want one with total computer control. I want a wheel and brakes under my control.

The driver was also a fool for not paying attention. As I understand it, the car was in testing mode and he/she was supposed to be watching the road, not looking at his/her phone.

This incident shows that computers cannot (as yet, or perhaps ever) be programmed to anticipate every situation.

Throw in poor judgment by people and unforeseen situations and human judgment becomes critical.

18 posted on 08/30/2018 8:25:29 AM PDT by yesthatjallen

To: TheZMan

Exactly. In other words, autonomous cars can operate safely if they start to function a lot like trains.


19 posted on 08/30/2018 8:30:47 AM PDT by Alberta's Child ("The Russians escaped while we weren't watching them ... like Russians will.")

To: yesthatjallen
Here's one big problem with many safety features in cars: we're finding that they actually make cars less safe. It sounds counterintuitive, but a car that is 75% or 95% automated is probably less safe than a car that has NO automated features.

There's a simple reason for this. Each successive automated feature makes it easier for the motorist to drive the car without paying attention. This is exactly what happened with the Uber crash in Arizona.

20 posted on 08/30/2018 8:37:04 AM PDT by Alberta's Child ("The Russians escaped while we weren't watching them ... like Russians will.")


