Posted on 05/07/2017 11:43:33 AM PDT by nickcarraway
Shades of The Terminator: artificial-intelligence computers are now learning by themselves, and we don't know how
I'm not often given to rampant paranoia. As troubled as my psyche may often be, it holds no fear of globalists, the grassy knoll was, well, just a grassy knoll, and I have never once thought of The Terminator as a documentary.
Or at least, I didn't until last week when I read an article by researcher Will Knight.
Allow me to explain. If you've been following the hoopla surrounding self-driving cars of late, you know that there's enormous interest in the computational abilities of artificial intelligence. Ford recently invested a billion bucks in a month-old startup called Argo AI, mainly because its ex-Carnegie Mellon staffers are some of the best robotics engineers on the planet. More dramatically, Roborace, a new series that pits driverless F1-style open-wheelers against one another, will soon be coming to a racetrack near you. And, perhaps most ominously of all, the American National Highway Traffic Safety Administration recently certified Google's AI'ed computer controller as a licensed driver so that the Silicon Valley giant would be able to send its little runabouts scurrying about autonomously, without the pesky human backup that has so far been required every time a self-driving car tries to steer itself through traffic.
It's easy to understand automakers' obsession with artificial intelligence. It's virtually impossible to program a self-driving car for the countless situations/objects/living organisms it will encounter each and every day. Some problems will be mundane: the unexpected telephone line repair truck illegally parked on a narrow road that stymies a self-driving car's prohibition against crossing a solid yellow line. It could be simple human idiosyncrasy: the autonomous Uber that reached a stalemate with a track-standing cyclist because it could not determine if the rider wanted to proceed forward or back. It could even be the downright weird, like the Google car that encountered a woman in a wheelchair chasing a duck into the street with a broom. "You can't make this up," said the CEO of Google's self-driving car project at the time. More importantly for the engineers creating self-driving cars, if you can't imagine something happening, you can't program a car to avoid it.
That's where artificial intelligence, the ability for machines to learn without human intervention, is supposed to come in. Essentially, it involves imbuing a computer with algorithms such that it can learn beyond its simple programming. Artificial intelligence, at least as it pertains to autonomous automobiles, will allow driverless cars to recognize situations that we forgot to program them for (or, in the case of old ladies in wheelchairs chasing ducks, couldn't in a million years have imagined) and take appropriate action. Sounds good, right? There can't be anything even remotely conspiratorial about teaching a machine to be safer and smarter?
Right?
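To put a toy version of that idea on the page: below is a minimal, purely hypothetical Python sketch of the difference between a rule a programmer writes by hand and a rule a model infers from examples. The braking scenario, the numbers and the scikit-learn classifier are illustrative assumptions of mine, not anything taken from Knight's article or from a real self-driving system.

# Purely illustrative: a hand-written rule versus a rule "learned" from examples.
from sklearn.tree import DecisionTreeClassifier

# The old way: a programmer spells out the rule explicitly.
def hand_coded_brake(distance_m):
    return distance_m < 10.0  # brake if the obstacle is closer than 10 meters

# The machine-learning way: the model infers its own rule from labelled examples,
# and nobody ever types the threshold in.
X = [[2.0], [5.0], [8.0], [12.0], [20.0], [35.0]]  # obstacle distance in meters
y = [1, 1, 1, 0, 0, 0]                             # 1 = brake, 0 = carry on
learned_brake = DecisionTreeClassifier().fit(X, y)

print(hand_coded_brake(9.0))              # True, because a human wrote that line
print(learned_brake.predict([[9.0]])[0])  # 1, because the tree worked it out itself

A toy decision tree like this is still easy to inspect; the deep networks Knight writes about are the same idea scaled up to millions of learned parameters, which is where the inspectability goes away.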
Until you read Knight's "The Dark Secret at the Heart of AI." Essentially, Knight's contention is that while experts (that would be the engineers who program these supercomputers) know what their machines can do, they don't have a clue how they do it. Yes, you read that right: According to Knight, the guys who program these computers don't really know how their algorithms actually work. Indeed, if anything goes wrong, says Knight, even the engineers who designed them may struggle to isolate the reason for the malfunction, there being no obvious way, says the author, to design such a system so that it could always explain why it did what it did. In other words, if a car directed by artificial intelligence crashed into a tree, not only might there not be an immediate answer to what happened, one might never be able to find out why.
Why this should be so concerning (actually, disconcerting if you're even remotely paranoid) is that, again according to Knight, last year chip maker Nvidia road-tested a very special autonomous car, one that didn't rely on instructions provided by an engineer or programmer, but instead had taught itself to drive by watching a human do it. As impressive a feat as that is, says Knight, it's also a bit unsettling, since it isn't completely clear how the car makes its decisions. (As an example of AI's ability to confound, Knight goes on to detail how an NYC Mount Sinai experiment called Deep Patient taught itself to predict diseases just from looking at patients' records. The problem is that the computer went on to also predict incidents of schizophrenia, and its programmers have no idea how that was possible.)
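For a rough sense of what "taught itself to drive by watching a human" means in practice, here is a minimal behavioral-cloning sketch, again with made-up sensor features and numbers; it illustrates the general technique, not Nvidia's actual system.

# Hypothetical behavioral-cloning sketch: learn a steering policy from human demonstrations.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Pretend sensor readings logged while a human drives: lane offset (m), heading
# error (rad), road curvature ahead -> the steering angle the human applied.
X_demo = rng.normal(size=(1000, 3))
y_demo = 0.8 * X_demo[:, 0] + 0.5 * X_demo[:, 1] - 0.3 * X_demo[:, 2]  # the human's policy, unknown to the car

# The network simply imitates the demonstrations; no engineer writes driving rules.
policy = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
policy.fit(X_demo, y_demo)

# At run time the learned policy steers on its own. If it ever does something odd,
# there is no explicit rule to point at, only thousands of trained weights.
print("steering command:", policy.predict(np.array([[0.4, -0.1, 0.2]]))[0])

The point of the sketch is only this: the "program" ends up living in the trained weights rather than in readable code, which is exactly the property Knight finds so hard to audit.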
Now, never mind the obvious (please God, don't let our computers learn anything from Donald Trump); there's the mind-boggling possibility, as Knight suggests, that these will be the first machines their creators, even the geniuses in Silicon Valley, don't understand. Just as important is the matter of trust. For instance, how do doctors justify changing the drugs someone is being prescribed when they don't know how Deep Patient made its diagnosis?
Now, this would all be just another quaint little distraction if only Mr. Knight were a half-baked conspiracy theorist. Unfortunately, for those looking for some calming news at the end of this fulmination, Knight is the senior editor for artificial intelligence at the MIT Technology Review (yes, as in Massachusetts' famed Institute of Technology), so it's a little hard to dismiss him as just another crackpot who forgot to wear his tin hat.
But wait, like all good paranoid rants, there's even more. To surprisingly little fanfare, Elon Musk (yes, he of the electric cars that supposedly drive themselves) recently launched Neuralink, a startup that promises to implant chips into your head so you can communicate directly with artificial intelligence, the Guardian quoting the Tesla and SpaceX CEO as saying we must all become cyborgs if we don't want to become house cats to artificial intelligence.
So, let me see if I got this all right. To become absolutely autonomous, self-driving cars will have to learn to think for themselves. The problem then becomes that, once they become (at least semi) sentient, we might not necessarily be able to control them. And an automotive CEO who has already shown that he doesn't mind using his customers as beta testers (think Autopilot and Joshua Brown) wants to put a chip in my head so that very same artificial intelligence can communicate directly with my synapses. And, oh, we're going ahead with all of this because we're too lazy to push our own gas pedals and steer our own wheels.
Maybe I'm not so paranoid after all.
Wait until they get programmed with conspiracy theories and start killing us off.
I'll bet it'll get so crazy that SOME day, when an investigative reporter looks into a CIA guy working against the American people, his car on the way to LAX just up and EXPLODES..!!
Nahhh...!!!
If humans don’t know how computers learn, it just allows unknown errors to creep in. Interesting that the researcher is named Knight, because this reminds me of the old TV show Knight Rider. His car was an AI that could drive itself.
[Starman is driving the car, and speeds across a recently turned red light, causing crashes for the other motorists]
Starman: Okay?
Jenny Hayden: Okay? Are you crazy? You almost got us killed! You said you watched me, you said you knew the rules!
Starman: I do know the rules.
Jenny Hayden: Oh, for your information pal, that was a *yellow* light back there!
Starman: I watched you very carefully. Red light stop, green light go, yellow light go very fast.
LOL! That was one great movie scene!
Self-driving cars will not work once they decide they know better than we do where they should take us. Crash and burn.
KITT was compromised in a few episodes.
Don’t forget KARR.
My smart-aleck smart car kept taking U-turns. When I asked why it was doing so, it said, "Because the sign says 'No! You Turn!'"
Then the car stopped at a 'Stop Ahead' sign and said, "I don't see no stinking head."
Anything to avoid looking up from the smartphone.