Posted on 11/16/2017 9:05:21 AM PST by MarchonDC09122009
Researchers go after the biggest problem with self-driving cars
By Steve LeVine November 01, 2017
The biggest difficulty in self-driving cars is not batteries, fearful drivers, or expensive sensors, but what's known as the "trolley problem," a debate over who should die and who should be saved if an autonomously driven vehicle ends up facing such a horrible choice on the road. And short of that, how will robotic vehicles navigate the countless other ethical decisions, small and large, executed by drivers as a matter of course?
In a paper, researchers at Carnegie Mellon and MIT propose a model that uses artificial intelligence and crowdsourcing to automate ethical decisions in self-driving cars. "In an emergency, how do you prioritize?" Ariel Procaccia, a professor at Carnegie Mellon, tells Axios.
The bottom line: The CMU-MIT model is only a prototype at this stage. But it or something like it will have to be mastered if fully autonomous cars are to become a reality.
"We are not saying that the system is ready for deployment. But it is a proof of concept, showing that democracy can help address the grand challenge of ethical decision making in AI," Procaccia said.
How they created the system: Procaccia's team used a model at MIT called the Moral Machine, in which 1.3 million people gave their ethical vote on around 13 difficult, either-or choices in trolley-like driving scenarios. In all, participants provided 18.2 million answers. The researchers used artificial intelligence to teach their system the preferences of each voter, then aggregated them, creating a "distribution of societal preferences," in effect the rules of ethical behavior in a car. The researchers could now ask the system any driving question that came to mind; it was as though they were asking the original 1.3 million participants to vote again.
A robot election: "When the system encounters a dilemma, it essentially holds an election, by deducing the votes of the 1.3 million voters, and applying a voting rule," Procaccia said. "This allows us to give the following strong guarantee: the decision the system takes is likely to be the same as if we could go to each of the 1.3 million voters, ask for their opinions, and then aggregate their opinions into a choice that satisfies mathematical notions of social justice."
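The "robot election" described above can be sketched in a few lines. This is a minimal illustration, not the researchers' actual system: it assumes each voter's preferences have already been learned as a simple weight vector over scenario features, and it aggregates predicted votes with a plain plurality rule. The feature names, weights, and scenarios are all hypothetical.

```python
import numpy as np

# Stand-in for 1.3 million learned voter models: each row is one voter's
# weight vector over scenario features (here: lives at risk, average age,
# pedestrian flag). Random weights are used purely for illustration.
rng = np.random.default_rng(0)
n_voters = 1000
voter_models = rng.normal(size=(n_voters, 3))

def hold_election(option_a, option_b):
    """Deduce each voter's choice from their model, then apply plurality."""
    scores_a = voter_models @ option_a  # each voter's utility for option A
    scores_b = voter_models @ option_b  # each voter's utility for option B
    votes_for_a = np.sum(scores_a > scores_b)
    return "A" if votes_for_a > n_voters / 2 else "B"

# Two hypothetical outcomes described as feature vectors.
option_a = np.array([3.0, 70.0, 1.0])  # swerve: 3 elderly pedestrians at risk
option_b = np.array([1.0, 10.0, 1.0])  # straight: 1 child at risk
print(hold_election(option_a, option_b))
```

The point of the construction is the guarantee Procaccia describes: because the per-voter models stand in for the voters themselves, querying the system on a new dilemma approximates re-running the election among all 1.3 million participants.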
Indeed.
This entire article is founded on fake science and faulty logic.
If you really fear the autonomous car, then you better never fly.
AI bots and computers have been dominating our airways for years. And soon will be on our roadways.
So, you wanna fear autonomous driving? Then yeah - the “social justice” aspect is your boogeyman. The fear that the State will decide when and where you go.
And even that is goofy.
Oh, whew, I’m feeling so much better. All I need is to add a “Dump Trump” sticker on my bicycle and then the ‘autonomous vehicle’ will steer into that school bus for some very late-term abortions! Social Justice in action baby!
“If you really fear the autonomous car, then you better never fly.”
Many of us don’t fly well either. Logic has nothing to do with it.
It’s the idea of handing the keys of your life over to somebody else, willingly.
General anesthesia being the ultimate terror.
The free market will take care of it in the following manner.
Say Ford, for example, sells a car in which the AI is instructed to prioritize the life of the driver and the occupants of the car, perhaps even in some specific order.
And say Chrysler sells a car in which the AI is instructed to prioritize the number of lives lost or threatened, no matter whose lives.
Fords would dramatically outsell Chryslers, at least until Chrysler re-instructed its AI.
And car companies will easily figure this out. What will screw it up royally is if the legislature gets involved.
How does the most sophisticated AI anticipate ANY off-road potential threat let alone scan for it???
Also... WHEN (not if) people start dying at the hands of a robot, who is liable, and how easy will it be to litigate?
What happens when the Highway Patrol pulls one over?
Can he shut it down?
How do you award a driver’s license to a piece of software?
So many things wrong with this horribly bad idea.
No one is allowed to examine algorithms that Google, Facebook and Twitter use to determine who and what opinions should be censored.
No one is allowed to examine voting machine tabulation algorithms which are used to determine / manipulate political vote outcomes.
What could possibly go wrong with opaque, social-justice-based algorithms that decide who lives or dies in an autonomous vehicle accident?
Let’s say that the AI makes a bad decision (and it will). Who is the liable party?
“It all made sense until they blew it.”
I agree. Today’s incarnation of “social justice” is Twitter. Look what THAT is doing to politics, news and political correctness.
If you really fear the autonomous car, then you better never fly.
Many of us don’t fly well either. Logic has nothing to do with it.
It’s the idea of handing the keys of your life over to somebody else, willingly.
________________________________________________
Dude. Every day you place your life in the hands of other drivers. Drunk drivers, stoned drivers, idiot mouth breathing drivers who vote democrat, drivers that are too stupid to breathe.
I hope they will never take away our driving freedoms. But to fear an AI car over the millions of idiot drivers on the road today is painfully stupid thinking.
No it isn’t. Trolley problems are fun thought experiments for ethical debates. But out here in reality they just don’t happen. Not while you’re driving, anyway. Things in cars happen too fast: if you wound up with the choice of 3 old ladies or 1 kid, by the time you thought “3 old ladies or 1 kid” you’d have already run over whoever was in the straight line.
In this modern world, where liability is already split by the insurance companies, that’s one question that just doesn’t matter. You’d have to work really hard for the insurance companies to decide you’re 100% liable and therefore your company will pay everything. And even then you’ll probably find the companies did an 80-20 split just to maintain positive relations (they have a vested interest in getting along with each other).
Sounds like a lawsuit.
Not that there aren’t a jillion billboards for auto accident attorneys already out there where I live. ;)
But the good news is that there will be FAR fewer accidents, and let’s be frank, most of the time the decision regarding who to “throw under the bus” (pun intended) will be an obvious one.
A pretty standard decision matrix for these things is to “aim” for the object that’s furthest away. Furthest away gives more time to slow down and more time for the “target” to take evasive action, thus maximizing the opportunities to avoid the accident and reducing overall damage if it isn’t avoided. Those are actually the defensive driving instructions we’re supposed to use.
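The "aim for the furthest object" rule described above is simple enough to sketch directly. This is a hypothetical illustration, not any vendor's actual collision logic: obstacle positions are made-up (x, y) offsets in meters from the car.

```python
import math

def pick_evasion_target(obstacles):
    """When a collision is unavoidable, steer toward the obstacle furthest
    from the car (at the origin), maximizing time to brake and time for
    the target to take evasive action."""
    return max(obstacles, key=lambda p: math.hypot(p[0], p[1]))

# Illustrative obstacles: (x, y) offsets in meters.
obstacles = [(2.0, 1.0), (15.0, -3.0), (8.0, 0.5)]
print(pick_evasion_target(obstacles))  # the 15 m obstacle is furthest
```

Maximizing distance is a proxy for maximizing reaction time on both sides, which is exactly the defensive-driving reasoning the comment describes.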
The AI is based on the opinions of 1.3 million people. Presumably, at least half of them are women. So perhaps the result will be to throw the train full of passengers off the cliff.
That, and flipping off a driverless car would have no effect!
Will the crowd source decide who gets sued in an accident? I think not! Then who will be liable...the owner, passengers, carmaker, sensor designers...?
So a group of teens runs into the street, forcing the car to make a decision about the greater number of lives saved, and so it goes off the side of the road. Would kids do this? You betcha.