Free Republic

Motor Mouth: Are We That Dumb to Make AI Cars That Get Smarter on Their Own?
Driving ^ | 5/5 | David Booth

Posted on 05/07/2017 11:43:33 AM PDT by nickcarraway

Shades of The Terminator: artificial intelligence computers are now learning by themselves, and we don't know how

I’m not often given to rampant paranoia. As troubled as my psyche may often be, it holds no fear of globalists, the grassy knoll was, well, just a grassy knoll, and I have never once thought of The Terminator as a documentary.

Or at least, I didn’t until last week when I read an article by researcher Will Knight.

Allow me to explain. If you’ve been following the hoopla surrounding self-driving cars of late, you know that there’s enormous interest in the computational abilities of artificial intelligence. Ford recently invested a billion bucks in a month-old startup called Argo AI mainly because its ex–Carnegie Mellon staffers are some of the best robotics engineers on the planet. More dramatically, Roborace — a new series that pits driverless F1-style open-wheelers against one another — will soon be coming to a racetrack near you. And, perhaps most ominously of all, the American National Highway Traffic Safety Administration recently certified Google’s AI’ed computer controller as a “licensed” driver so that the Silicon Valley giant would be able to send its little runabouts scurrying about autonomously without the pesky human “backup” that has so far been required every time a self-driving car tries to steer itself through traffic.

It’s easy to understand automakers’ obsession with artificial intelligence. It’s virtually impossible to program a self-driving car for the countless situations/objects/living organisms it will encounter each and every day. Some problems will be mundane — the unexpected telephone line repair truck illegally parked on a narrow road that stymies a self-driving car’s prohibition against crossing a solid yellow line. It could be simple human idiosyncrasy — the autonomous Uber that reached a stalemate with a “brake standing” cyclist because it could not determine if the rider wanted to proceed forward or back. It could even be the downright weird, like the Google car that encountered a woman in a wheelchair chasing a duck into the street with a broom. “You can’t make this up,” said the CEO of Google’s self-driving car project at the time. More importantly for the engineers creating self-driving cars, if you can’t imagine something happening, you can’t program a car to avoid it.

That’s where artificial intelligence — the ability for machines to “learn” without human intervention — is supposed to come in. Essentially, it involves imbuing a computer with algorithms such that it can learn beyond its simple programming. Artificial intelligence — at least as it pertains to autonomous automobiles — will allow driverless cars to recognize situations that we forgot to program them for (or, in the case of old ladies in wheelchairs chasing ducks, couldn’t in a million years have imagined) and take appropriate action. Sounds good, right? There can’t be anything even remotely conspiratorial about teaching a machine to be safer and smarter?

Right?

Until you read Knight’s The Dark Secret at the Heart of AI. Essentially, Knight’s contention is that while experts — that would be, the engineers who program these supercomputers — know what their machines can do, they don’t have a clue how they do it. Yes, you read that right: According to Knight, the guys who program these computers don’t really know how their algorithms actually work. Indeed, if anything goes wrong, says Knight, even the engineers who designed them may struggle to isolate the reason for the malfunction, there being no obvious way, says the author, “to design such a system so that it could always explain why it did what it did.” In other words, if a car directed by artificial intelligence crashed into a tree, not only might there not be an immediate answer to what happened, one might never be able to find out why.

Why this should be so concerning — actually, disconcerting if you’re even remotely paranoid — is that, again according to Knight, last year chip maker Nvidia road-tested a very special autonomous car, one that didn’t rely on instructions provided by an engineer or programmer, but instead “had taught itself to drive by watching a human do it.” As impressive a feat as that is, says Knight, “it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions.” (As an example of AI’s ability to confound, Knight goes on to detail how an experiment at New York’s Mount Sinai Hospital called Deep Patient taught itself to predict diseases just from looking at patients’ records. The problem is that the computer went on to also predict incidents of schizophrenia, and its programmers have no idea how that was possible.)

Now, never mind the obvious — please God, don’t let our computers learn anything from Donald Trump — there’s the “mind-boggling” possibility, as Knight suggests, that these will be the first machines their creators — even the geniuses in Silicon Valley — don’t understand. Just as important is the matter of trust. For instance, how do doctors justify changing the drugs someone is being prescribed when they don’t know how Deep Patient made its diagnosis?

Now, this would all be just another quaint little distraction if only Mr. Knight were a half-baked conspiracy theorist. Unfortunately, for those looking for some calming news at the end of this fulmination, Knight is the senior editor for artificial intelligence at the MIT Technology Review — yes, as in Massachusetts’ famed Institute of Technology — so it’s a little hard to dismiss him as just another crackpot who forgot to wear his tin hat.

But wait, like all good paranoid rants, there’s even more. To surprisingly little fanfare, Elon Musk — yes, he of the electric cars that supposedly drive themselves — recently launched Neuralink, a startup that promises to implant chips into your head so you can communicate directly with artificial intelligence, the Guardian quoting the Tesla and SpaceX CEO as saying we must all become cyborgs if we don’t want to become “house cats” to artificial intelligence.

So, let me see if I got this all right. To become absolutely autonomous, self-driving cars will have to learn to think for themselves. The problem then becomes that, once they become (at least semi) sentient, we might not necessarily be able to control them. And an automotive CEO who has already shown that he doesn’t mind using his customers as beta testers — think Autopilot and Joshua Brown — wants to put a chip in my head so that very same artificial intelligence can communicate directly with my synapses. And, oh, we’re going ahead with all of this because we’re too lazy to push our own gas pedals and steer our own wheels.

Maybe I’m not so paranoid after all.


TOPICS: Business/Economy; Computers/Internet; Science
KEYWORDS: ai; automakers

1 posted on 05/07/2017 11:43:33 AM PDT by nickcarraway

To: nickcarraway

Wait until they get programmed with conspiracy theories and start killing us off.


2 posted on 05/07/2017 11:48:07 AM PDT by mountainlion (Live well for those that did not make it back.)

To: nickcarraway
You know what?

I'll bet it'll get so crazy that SOME day when an investigative reporter looks into a CIA guy working against the American people, his car on the way to LAX just up and EXPLODES..!!

Nahhh...!!!


3 posted on 05/07/2017 12:02:21 PM PDT by gaijin

To: nickcarraway

If humans don’t know how computers learn, it just allows unknown errors to creep in. Interesting that the researcher is named Knight, because this reminds me of the old TV show Knight Rider. His car was an AI that could drive itself.


4 posted on 05/07/2017 12:05:14 PM PDT by Telepathic Intruder

To: nickcarraway
but instead “had taught itself to drive by watching a human do it.”

[Starman is driving the car and speeds through a light that has just turned red, causing other motorists to crash]

Starman: Okay?

Jenny Hayden: Okay? Are you crazy? You almost got us killed! You said you watched me, you said you knew the rules!

Starman: I do know the rules.

Jenny Hayden: Oh, for your information pal, that was a *yellow* light back there!

Starman: I watched you very carefully. Red light stop, green light go, yellow light go very fast.

5 posted on 05/07/2017 12:09:06 PM PDT by NonValueAdded (#DeplorableMe #BitterClinger #HillNO! #cishet #MyPresident #MAGA #Winning)

To: NonValueAdded

LOL! That was one great movie scene!


6 posted on 05/07/2017 12:27:24 PM PDT by Fiddlstix (Warning! This Is A Subliminal Tagline! Read it at your own risk!(Presented by TagLines R US))

To: nickcarraway

Self-driving cars will not work when they decide they know better than we do where they should take us. Crash and burn.


7 posted on 05/07/2017 12:37:13 PM PDT by exnavy (God save the republic.)

To: Telepathic Intruder

KITT was compromised in a few episodes.

Don’t forget KARR.


8 posted on 05/07/2017 1:15:18 PM PDT by wally_bert (I didn't get where I am today by selling ice cream tasting of bookends, pumice stone & West Germany)

To: nickcarraway

My smart-aleck smart car kept taking U-turns. When I asked why it was doing so, it said, “Because the sign says ‘No! You Turn!’”

Then the car stopped at a ‘Stop Ahead’ sign and said, “I don’t see no stinking head.”


9 posted on 05/07/2017 2:26:57 PM PDT by Bob434

To: nickcarraway

Anything to avoid looking up from the smartphone.


10 posted on 05/07/2017 2:51:04 PM PDT by headstamp 2 (Ignorance is reparable, stupid is forever)

