Free Republic
Browse · Search
News/Activism
Topics · Post Article


Researchers go after the biggest problem with self-driving cars (database decides who lives & dies)
Axios ^ | 11/01/2017 | Steve LeVine

Posted on 11/16/2017 9:05:21 AM PST by MarchonDC09122009


The biggest difficulty with self-driving cars is not batteries, fearful drivers, or expensive sensors, but what's known as the "trolley problem": the debate over who is to die and who is to be saved should an autonomously driven vehicle end up facing such a horrible choice on the road. And short of that, how will robotic vehicles navigate the countless other ethical decisions, small and large, that drivers execute as a matter of course?

In a paper, researchers at Carnegie Mellon and MIT propose a model that uses artificial intelligence and crowdsourcing to automate ethical decisions in self-driving cars. "In an emergency, how do you prioritize?" Ariel Procaccia, a professor at Carnegie Mellon, tells Axios.

The bottom line: The CMU-MIT model is only a prototype at this stage. But it or something like it will have to be mastered if fully autonomous cars are to become a reality.

"We are not saying that the system is ready for deployment. But it is a proof of concept, showing that democracy can help address the grand challenge of ethical decision making in AI," Procaccia said.

How they created the system: Procaccia's team used a model at MIT called the Moral Machine, in which 1.3 million people voted on around 13 difficult, either-or choices in trolley-like driving scenarios. In all, participants provided 18.2 million answers. The researchers used artificial intelligence to teach their system the preferences of each voter, then aggregated them, creating a "distribution of societal preferences," in effect the rules of ethical behavior in a car. The researchers could now ask the system any driving question that came to mind; it was as though they were asking the original 1.3 million participants to vote again.

A robot election: "When the system encounters a dilemma, it essentially holds an election, by deducing the votes of the 1.3 million voters, and applying a voting rule," Procaccia said. He said, "This allows us to give the following strong guarantee: the decision the system takes is likely to be the same as if we could go to each of the 1.3 million voters, ask for their opinions, and then aggregate their opinions into a choice that satisfies mathematical notions of social justice."
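The "virtual election" Procaccia describes can be illustrated with a minimal sketch. Everything here is an assumption made for illustration: the learned voter models are stand-in scoring functions, and plurality is just one simple voting rule (the paper's actual rules are more sophisticated).

```python
# Hypothetical sketch of the "virtual election" idea: each learned voter
# model "votes" for its preferred alternative, and a voting rule
# (plurality here) picks the winner. Models and data are illustrative.

from collections import Counter

def virtual_election(voter_models, alternatives):
    """Each voter model is a callable mapping an alternative to a utility
    score; every model votes for its top-scoring alternative."""
    ballots = Counter()
    for model in voter_models:
        best = max(alternatives, key=model)
        ballots[best] += 1
    return ballots.most_common(1)[0][0]

# Toy example: two alternatives, three stand-in "voter models".
alts = ["swerve", "stay"]
models = [
    lambda a: 1.0 if a == "swerve" else 0.0,
    lambda a: 1.0 if a == "swerve" else 0.0,
    lambda a: 0.0 if a == "swerve" else 1.0,
]
print(virtual_election(models, alts))  # swerve wins the election 2-1
```

The point of the design is that no human is polled at runtime; the stored models stand in for the original 1.3 million voters.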


TOPICS: Business/Economy; Culture/Society; News/Current Events
KEYWORDS: autonomous; car; safety
Have you ever wondered how an autonomous (self-driving) car's software decides who lives or dies when choosing a course of action in an accident? Have no fear: Carnegie Mellon and MIT are working on a database called the Moral Machine, which uses a "social justice" consensus vote of a million-plus Cambridge/Boston liberals to play god, deciding who lives or dies during an accident.

One wonders whether the self-driving vehicle's Moral Machine software would weigh equally a life-or-death decision that sacrifices an older male driving a truck with an NRA sticker versus a young adult female driving a Prius.


1 posted on 11/16/2017 9:05:21 AM PST by MarchonDC09122009

To: MarchonDC09122009

CMU-MIT paper link:

https://arxiv.org/pdf/1709.06692.pdf

Excerpt -

“A Voting-Based System for Ethical Decision Making

Ritesh Noothigattu (Machine Learning Dept., CMU); Snehalkumar ‘Neil’ S. Gaikwad (The Media Lab, MIT); Edmond Awad (The Media Lab, MIT); Sohan Dsouza (The Media Lab, MIT); Iyad Rahwan (The Media Lab, MIT); Pradeep Ravikumar (Machine Learning Dept., CMU); Ariel D. Procaccia (Computer Science Dept., CMU)
Abstract

We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.
1 Introduction

The problem of ethical decision making, which has long been a grand challenge for AI (Wallach and Allen 2008), has recently caught the public imagination. Perhaps its best-known manifestation is a modern variant of the classic trolley problem (Jarvis Thomson 1985): An autonomous vehicle has a brake failure, leading to an accident with inevitably tragic consequences; due to the vehicle's superior perception and computation capabilities, it can make an informed decision. Should it stay its course and hit a wall, killing its three passengers, one of whom is a young girl? Or swerve and kill a male athlete and his dog, who are crossing the street on a red light? A notable paper by Bonnefon, Shariff, and Rahwan (2016) has shed some light on how people address such questions, and even former US President Barack Obama has weighed in.[1]

[1] https://www.wired.com/2016/10/president-obama-mit-joi-ito-interview/

Arguably the main obstacle to automating ethical decisions is the lack of a formal specification of ground-truth ethical principles, which have been the subject of debate for centuries among ethicists and moral philosophers (Rawls 1971; Williams 1986). In their work on fairness in machine learning, Dwork et al. (2012) concede that, when ground-truth ethical principles are not available, we must use an “approximation as agreed upon by society.” But how can society agree on the ground truth — or an approximation thereof — when even ethicists cannot?

We submit that decision making can, in fact, be automated, even in the absence of such ground-truth principles, by aggregating people’s opinions on ethical dilemmas. This view is foreshadowed by recent position papers by Greene et al. (2016) and Conitzer et al. (2017), who suggest that the field of computational social choice (Brandt et al. 2016), which deals with algorithms for aggregating individual preferences towards collective decisions, may provide tools for ethical decision making. In particular, Conitzer et al. raise the possibility of “letting our models of multiple people’s moral values vote over the relevant alternatives.”

We take these ideas a step further by proposing and implementing a concrete approach for ethical decision making based on computational social choice, which, we believe, is quite practical. In addition to serving as a foundation for incorporating future ground-truth ethical and legal principles, it could even provide crucial preliminary guidance on some of the questions faced by ethicists. Our approach consists of four steps:

I. Data collection: Ask human voters to compare pairs of alternatives (say a few dozen per voter). In the autonomous vehicle domain, an alternative is determined by a vector of features such as the number of victims and their gender, age, health — even species!

II. Learning: Use the pairwise comparisons to learn a model of the preferences of each voter over all possible alternatives.

III. Summarization: Combine the individual models into a single model, which approximately captures the collective preferences of all voters over all possible alternatives.

IV. Aggregation: At runtime, when encountering an ethical dilemma involving a specific subset of alternatives, use the summary model to deduce the preferences of all voters over this particular subset, and apply a voting rule to aggregate these preferences into a collective decision. In the autonomous vehicle domain, the selected alternative is the outcome that society (as represented by the voters whose preferences were elicited in Step I) views as the least catastrophic among the grim options the vehicle currently faces. Note that this step is only applied when all other options have been exhausted, i.e., all technical ways of avoiding the dilemma in the first place have failed, and all legal constraints that may dictate what to do have also failed.”
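The paper's four steps can be sketched in toy form. This is an illustrative stand-in, not the paper's algorithm: it substitutes a tiny perceptron for the paper's learned preference models and plurality voting for its swap-dominance efficient voting rules, and all data is synthetic.

```python
# A much-simplified, illustrative stand-in for the four-step approach.
# Hidden "true" utility vectors stand in for human voters; everything
# here is made up for demonstration purposes.

import numpy as np

rng = np.random.default_rng(0)
dim = 3  # features per alternative (e.g., victim count, age, ...)

# Step I - Data collection: each voter answers pairwise comparisons,
# where an alternative is a feature vector.
def collect_comparisons(true_w, pairs):
    return [(a, b) if a @ true_w >= b @ true_w else (b, a) for a, b in pairs]

# Step II - Learning: fit a linear utility w per voter so the preferred
# alternative of each pair scores higher (perceptron-style updates).
def learn_voter(comparisons, epochs=50, lr=0.1):
    w = np.zeros(dim)
    for _ in range(epochs):
        for winner, loser in comparisons:
            if winner @ w <= loser @ w:
                w += lr * (winner - loser)
    return w

# Step III - Summarization: here we simply keep the individual models.
# Step IV - Aggregation: at runtime, each learned model votes for its
# top alternative in the dilemma's subset; plurality decides.
def aggregate(voter_ws, alternatives):
    votes = [max(range(len(alternatives)), key=lambda i: alternatives[i] @ w)
             for w in voter_ws]
    return max(set(votes), key=votes.count)

pairs = [(rng.normal(size=dim), rng.normal(size=dim)) for _ in range(40)]
true_voters = [rng.normal(size=dim) for _ in range(5)]
models = [learn_voter(collect_comparisons(tw, pairs)) for tw in true_voters]
alts = [rng.normal(size=dim) for _ in range(2)]
choice = aggregate(models, alts)
print("chosen alternative index:", choice)
```

In the paper itself, Step III is what makes runtime aggregation tractable: rather than querying 1.3 million individual models, the system queries one summary model that approximates them.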


2 posted on 11/16/2017 9:09:09 AM PST by MarchonDC09122009 (When is our next march on DC? When have we had enough?)

To: MarchonDC09122009

You also have to think about “special people” like politicians, law enforcement, the wealthy, all outfitting their vehicles with technology to give them special privileges on the road that no one else will have.

Then the other side is that the special people will be able to do things to the vehicles that belong to people who protest or oppose them, and so on.

I just don’t trust this whole thing.


3 posted on 11/16/2017 9:09:56 AM PST by MeganC (Democrat by birth, Republican by default, Conservative by principle.)

To: MarchonDC09122009

The problem with AI is that it will have to make decisions that only people should be making. I am amazed at how many people decry the idea of AI drones taking out enemies, but see no problem with AI automobiles making potential life-and-death decisions on a continuous basis.


4 posted on 11/16/2017 9:11:46 AM PST by kosciusko51

To: MeganC

Excellent insight into what will probably happen in our real world, where the elite really rule!

“You also have to think about “special people” like politicians, law enforcement, the wealthy, all outfitting their vehicles with technology to give them special privileges on the road that no one else will have.

Then the other side is that the special people will be able to do things to the vehicles that belong to people who protest or oppose them, and so on.

I just don’t trust this whole thing.”


5 posted on 11/16/2017 9:13:12 AM PST by Grampa Dave (It's over for the NFL. They have stage 5 Colin brain cancer, and it's terminal.)

To: MarchonDC09122009

The movie I Robot touched on this.

The deep dark secret from Will Smith’s past was that the robot saved his life instead of the little girl’s based on its logical algorithm.

I’ve heard this argument made before, but the fatal flaw in the argument is that people often choose who lives or dies based on far more flawed logic. But the most important reason it is flawed is that it forgets that with auto-drive cars there will be far fewer opportunities to make the decision, because there will be far fewer fatalities.

Computers are really good at looking at the facts, applying the rules at hand, and making the best decision. Human beings second guess their split second decision for the rest of their life.


6 posted on 11/16/2017 9:13:22 AM PST by robroys woman (So you're not confused, I'm male.)

To: MarchonDC09122009

Lifeboat Ethics: The gift that just keeps on giving....and giving....and giving....


7 posted on 11/16/2017 9:13:24 AM PST by Buckeye McFrog

To: MarchonDC09122009

Here is something else to ponder: if an autonomous car gets into an accident, who is liable? Is it the owner of the car, the manufacturer of the car, or the software programmer whose code had a bug or design flaw? Whose insurance company pays?


8 posted on 11/16/2017 9:13:33 AM PST by Dutch Boy

To: MarchonDC09122009
This whole problem points to a dilemma in engineering that I call the "Hazard of Technological Improvement."

In a nutshell ...

When something operates today in a primitive form, all of the processes associated with it (vehicle or infrastructure design, political considerations, legal system, etc.) are tailored to accept all of the limitations of this primitive form.

Once you (as an innovative designer) figure out a way to improve on that design, you often find yourself in a position where you have to either fix the imprecisions that you always knew were there, or add layers of safety to deal with them because now YOU are responsible for the shortcomings of the new technology.

The end result is that the new technology is often forced to operate less efficiently than the old one -- simply because the new technology has resulted in a change of responsibility and civil liability in the event of a catastrophic failure.

9 posted on 11/16/2017 9:15:05 AM PST by Alberta's Child ("Tell them to stand!" -- President Trump, 9/23/2017)

To: MeganC

I just don’t trust this whole thing.


I don’t really “trust” it either, but I trust it a LOT more than the split second decision made by a human being. And the really good news is that with auto drive cars there will be far fewer fatalities. i.e. there will be fewer times where anyone - man or machine - will need to make the decision.


10 posted on 11/16/2017 9:15:15 AM PST by robroys woman (So you're not confused, I'm male.)

To: MarchonDC09122009
"When the system encounters a dilemma, it essentially holds an election, by deducing the votes of the 1.3 million voters, and applying a voting rule," Procaccia said. He said, "This allows us to give the following strong guarantee: the decision the system takes is likely to be the same as if we could go to each of the 1.3 million voters, ask for their opinions, and then aggregate their opinions into a choice that satisfies mathematical notions of social justice."

They had me going up until those last two words.

It all made sense until they blew it.

11 posted on 11/16/2017 9:16:41 AM PST by Pontiac (The welfare state must fail because it is contrary to human nature and diminishes the human spirit.)

To: kosciusko51

The problem with AI is that it will have to make decisions that only people should be making.


I disagree. People can be terrible at making these decisions.

There is the old “difference between men and women” paradox regarding the home by the railroad tracks. You are close to a switch that will change the course of approaching trains. Normally, the track takes them past your house, but if you throw the switch, it will send them down a track that ends, and the train will go over a cliff.

So the scenario is that you are near the switch and you see your baby playing in the middle of the track by your house, and a passenger train is barreling down the track toward your baby and its certain demise. If you throw the switch, the baby is saved but all the passengers on the train will die.

The theory is that the mother’s maternal instinct will cause her to throw the switch. But the father will not. Neither would AI.

And that is a good thing.


12 posted on 11/16/2017 9:19:16 AM PST by robroys woman (So you're not confused, I'm male.)

To: MarchonDC09122009

So when do we pick who dies based on their political orientation? Because any system based on any calculation other than maximizing the lives (or lifespan) saved will end up evaluating the relative worthiness of individuals on a subjective basis.


13 posted on 11/16/2017 9:22:01 AM PST by calenel (The Democratic Party is a Criminal Enterprise. It is the Socialist Mafia.)

To: MarchonDC09122009

Note that this is only one approach to the problem, and one that is used by exactly zero of the current autonomous car systems.

In practice this kind of decision is extremely rare. A much simpler algorithm is to minimize loss of life. It’s only in the case where it’s either one person or the other that such a decision is relevant. If the software seeks to minimize collateral damage, that further simplifies the decision process.

It seems to me that no life should be given priority over another, except when age is a factor. If it must be either a child or an adult, I think most would agree that the child should live - including almost all adults. Otherwise, flip a virtual coin.

It’s also worth considering that given human reaction time, most humans are likely to make bad decisions in these situations regardless - often resulting in extra loss of life.
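The policy this comment proposes (minimize deaths, prefer sparing the child in a tie, otherwise flip a virtual coin) can be written down literally. A hypothetical sketch of that rule, not any deployed system's logic:

```python
# Literal sketch of the decision rule proposed above: fewest deaths wins;
# ties go to the option sparing children; remaining ties are decided by
# a virtual coin flip. The option format is made up for illustration.

import random

def choose_option(options):
    """options: list of dicts like {"deaths": 2, "children_killed": 1}."""
    fewest = min(o["deaths"] for o in options)
    survivors = [o for o in options if o["deaths"] == fewest]
    fewest_kids = min(o["children_killed"] for o in survivors)
    survivors = [o for o in survivors if o["children_killed"] == fewest_kids]
    return random.choice(survivors)  # virtual coin flip among equal options

a = {"deaths": 1, "children_killed": 1}
b = {"deaths": 1, "children_killed": 0}
print(choose_option([a, b]))  # picks b: same deaths, but spares the child
```

Note how much simpler this is than the crowd-sourced voting approach in the article; it needs no preference data at all, which is the comment's point.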


14 posted on 11/16/2017 9:22:01 AM PST by PreciousLiberty (Make America Greater Than Ever!)

To: MarchonDC09122009

The impetus for automated vehicles will likely come from insurance companies which can reduce their liability claims. So they will determine the software, not some MIT eggheads.

Given the trend of injury claims, they will prefer dead people to maimed people.

Ah! Brave new world!


15 posted on 11/16/2017 9:22:28 AM PST by RossA

To: MeganC
Yep, there are many ways that this software could be exploited. Own any firearms or have a CHL? Your car could be programmed to depart from your chosen route and deliver you directly to a capture point.

Besides, it's highly unlikely that law enforcement will be using autonomous vehicles. Like "smart gun" technology, that'll be something for the "civilians" to deal with, but not the cops. Same thing with politicians, all the way down to the local level.

16 posted on 11/16/2017 9:24:13 AM PST by Charles Martel (Progressives are the crab grass in the lawn of life.)

To: MarchonDC09122009

bkmk


17 posted on 11/16/2017 9:26:14 AM PST by sauropod (I am His and He is Mine)

To: MarchonDC09122009

Bottom line: You are turning your life over to a machine to decide.

For us control freaks, that’s unacceptable.


18 posted on 11/16/2017 9:31:21 AM PST by Mariner (War Criminal #18)

To: kosciusko51

“The problem with AI is that it will have to make decisions that only people should be making”

How do I sue the AI for making a bad one? This crowdsourcing stuff is a cop-out to avoid liability. There is a reason passenger jets and trains have humans in the loop, even with automation.


19 posted on 11/16/2017 9:31:26 AM PST by TalonDJ

To: MarchonDC09122009

I despise the entire concept of autonomous cars. There is no way it can make decisions for every incident like a human can. Malfunctions, even slight ones, can bring it to a total standstill. How about dirt roads or off road? SO many things can bring it down. No way I would get in one except on a closed track.


20 posted on 11/16/2017 9:32:01 AM PST by SolidRedState (I used to think bizarro world was a fiction.)



Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.


FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson