Posted on 11/16/2017 9:05:21 AM PST by MarchonDC09122009
Indeed.
This entire article is founded on fake science and faulty logic.
If you really fear the autonomous car, then you better never fly.
AI bots and computers have been dominating our airways for years. And soon will be on our roadways.
So, you wanna fear autonomous driving? Then yeah - the “social justice” aspect is your boogeyman. The fear that the State will decide when and where you go.
And even that is goofy.
Oh, whew, I’m feeling so much better. All I need is to add a “Dump Trump” sticker on my bicycle and then the ‘autonomous vehicle’ will steer into that school bus for some very late-term abortions! Social Justice in action baby!
“If you really fear the autonomous car, then you better never fly.”
Many of us don’t fly well either. Logic has nothing to do with it.
It’s the idea of handing the keys of your life over to somebody else, willingly.
General anesthesia being the ultimate terror.
The free market will take care of it in the following manner.
Say Ford, for example, sells a car in which the AI is instructed to prioritize the life of the driver and the occupants of the car, perhaps even in some specific order.
And say Chrysler sells a car in which the AI is instructed to prioritize the number of lives lost or threatened, no matter whose lives.
Fords would dramatically outsell Chryslers, at least until Chrysler re-instructed its AI.
And car companies will easily figure this out. What will screw it up royally is if the legislature gets involved.
How does even the most sophisticated AI anticipate ANY off-road threat, let alone scan for one?
Also... WHEN (not if) people start dying at the hands of a robot, who is liable, and how easy will it be to litigate?
What happens when the Highway Patrol pulls one over?
Can he shut it down?
How do you award a driver’s license to a piece of software?
So many things wrong with this horribly bad idea.
No one is allowed to examine algorithms that Google, Facebook and Twitter use to determine who and what opinions should be censored.
No one is allowed to examine voting machine tabulation algorithms which are used to determine / manipulate political vote outcomes.
What could possibly go wrong with social justice based live or die opaque algorithms that decide who lives or dies in an autonomous vehicle accident?
Let’s say that the AI makes a bad decision (and it will). Who is the liable party?
“It all made sense until they blew it.”
I agree. Today’s incarnation of “social justice” is Twitter. Look what THAT is doing to politics, news and political correctness.
If you really fear the autonomous car, then you better never fly.
Many of us don’t fly well either. Logic has nothing to do with it.
It’s the idea of handing the keys of your life over to somebody else, willingly.
________________________________________________
Dude. Every day you place your life in the hands of other drivers. Drunk drivers, stoned drivers, idiot mouth-breathing drivers who vote democrat, drivers that are too stupid to breathe.
I hope they will never take away our driving freedoms. But to fear an AI car over the millions of idiot drivers on the road today is painfully stupid thinking.
No it isn’t. Trolley problems are fun thought experiments for ethical debates, but out here in reality they just don’t happen. Not while you’re driving, anyway. Things in cars happen too fast; if you wound up with the choice of 3 old ladies or 1 kid, by the time you thought “3 old ladies or 1 kid” you’d have already run over whoever was in the straight line.
In this modern world where liability is already split by the insurance companies that’s one that just doesn’t matter. You gotta work really hard for the insurance companies to decide you’re 100% liable and therefore your company will pay everything. And even then you’ll probably find the companies did an 80-20 split just to maintain positive relations (they have a vested interest in getting along with each other).
Sounds like a lawsuit.
Not that there are not a jillion billboards for auto accident attorneys already out there where I live. ;)
But the good news is that there will be FAR fewer accidents, and let’s be frank, most of the time the decision regarding who to “throw under the bus” (pun intended) will be an obvious one.
A pretty standard decision matrix for these things is to “aim” for the object that’s furthest away. Furthest away gives more time to slow down and more time for the “target” to take evasive action, thus maximizing the opportunities to avoid the accident and reducing overall damage if it isn’t avoided. That’s actually the defensive driving instruction we’re supposed to use.
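The “aim for the furthest object” heuristic is simple enough to sketch in a few lines. This is purely illustrative - the positions, units, and function names are made up for the example, not taken from any real autonomous-driving system:

```python
import math

def pick_target(vehicle_pos, obstacles):
    """Hypothetical sketch: if a collision is unavoidable, steer toward
    the obstacle furthest away - maximizing time to brake and time for
    the 'target' to take evasive action."""
    return max(obstacles, key=lambda obs: math.dist(vehicle_pos, obs))

# Unavoidable-collision scenario: obstacle positions as (x, y) in meters.
obstacles = [(2.0, 1.0), (10.0, -3.0), (5.0, 0.5)]
print(pick_target((0.0, 0.0), obstacles))  # furthest obstacle: (10.0, -3.0)
```

Note the heuristic never weighs *who* the obstacles are - it only maximizes the distance (and therefore the reaction time), which sidesteps the trolley-problem framing entirely.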
The AI is based on the opinions of 1.3 million people. Presumably, at least half of them are women. So perhaps the result will be to throw the train full of passengers off the cliff.
That, and flipping off a driverless car would have no effect!
Will the crowd source decide who gets sued in an accident? I think not! Then who will be liable...the owner, passengers, carmaker, sensor designers...?
So a group of teens run into the street forcing the car to make a decision about the greater number of lives saved, and so goes off the side of the road. Would kids do this? You betcha.