Free Republic · News/Activism

Have you ever wondered how an autonomous (self-driving) car's software decides who lives or dies when choosing a course of action in an accident? Have no fear: Carnegie Mellon and MIT are working on a database called the Moral Machine, which uses the "social justice" consensus vote of a million-plus Cambridge/Boston liberals to play god, deciding who lives or dies during an accident.

One wonders whether the self-driving vehicle's Moral Machine software would weigh equally a life-or-death decision that sacrifices an older male driving a truck with an NRA sticker versus a young adult female driving a Prius.

"1.3m people gave their ethical vote to around 13 difficult, either-or choices in trolley-like driving scenarios. In all, participants provided 18.2 million answers. The researchers used artificial intelligence to teach their system the preferences of each voter, then aggregated them, creating a "distribution of societal preferences," in effect the rules of ethnical behavior in a car. The researchers could now ask the system any driving question that came to mind; it was as though they were asking the original 1.3 million participants to vote again.

A robot election: "When the system encounters a dilemma, it essentially holds an election, by deducing the votes of the 1.3 million voters, and applying a voting rule," Procaccia said. He said, "This allows us to give the following strong guarantee: the decision the system takes is likely to be the same as if we could go to each of the 1.3 million voters, ask for their opinions, and then aggregate their opinions into a choice that satisfies mathematical notions of social justice."
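
To make that concrete, here is a minimal sketch of the "deduce votes, then apply a voting rule" idea. It is not the paper's actual algorithm: it assumes (hypothetically) that each voter has already been reduced to a linear utility vector over outcome features, and it uses simple plurality voting rather than the paper's more sophisticated rules.

```python
# Minimal sketch (NOT the paper's algorithm): each voter is represented by a
# learned linear utility vector over outcome features; at decision time every
# model "votes" for its highest-utility alternative, and plurality wins.
# Feature names, weights, and the plurality rule are illustrative assumptions.
import numpy as np

def hold_election(voter_weights: np.ndarray, alternatives: np.ndarray) -> int:
    """voter_weights: (n_voters, n_features); alternatives: (n_alts, n_features).
    Returns the index of the plurality-winning alternative."""
    utilities = voter_weights @ alternatives.T        # (n_voters, n_alts)
    votes = utilities.argmax(axis=1)                  # each model's top choice
    return int(np.bincount(votes, minlength=len(alternatives)).argmax())

# Hypothetical dilemma; features = [passengers spared, pedestrians spared, child spared]
alternatives = np.array([[3.0, 0.0, 1.0],   # stay the course
                         [0.0, 1.0, 0.0]])  # swerve
rng = np.random.default_rng(0)
voter_weights = rng.normal(size=(1_300_000, 3))  # stand-ins for 1.3M learned models
print("Chosen alternative index:", hold_election(voter_weights, alternatives))
```

With random stand-in weights the "election" is roughly a coin flip; in the real system it is the learned per-voter models that make the outcome track the surveyed preferences.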

1 posted on 11/16/2017 9:05:21 AM PST by MarchonDC09122009


To: MarchonDC09122009

Carnegie Mellon/MIT paper link:

https://arxiv.org/pdf/1709.06692.pdf

Excerpt -

“A Voting-Based System for Ethical Decision Making

Ritesh Noothigattu (Machine Learning Dept., CMU), Snehalkumar ‘Neil’ S. Gaikwad (The Media Lab, MIT), Edmond Awad (The Media Lab, MIT), Sohan Dsouza (The Media Lab, MIT), Iyad Rahwan (The Media Lab, MIT), Pradeep Ravikumar (Machine Learning Dept., CMU), Ariel D. Procaccia (Computer Science Dept., CMU)
Abstract

We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.
1 Introduction

The problem of ethical decision making, which has long been a grand challenge for AI (Wallach and Allen 2008), has recently caught the public imagination. Perhaps its best-known manifestation is a modern variant of the classic trolley problem (Jarvis Thomson 1985): An autonomous vehicle has a brake failure, leading to an accident with inevitably tragic consequences; due to the vehicle’s superior perception and computation capabilities, it can make an informed decision. Should it stay its course and hit a wall, killing its three passengers, one of whom is a young girl? Or swerve and kill a male athlete and his dog, who are crossing the street on a red light? A notable paper by Bonnefon, Shariff, and Rahwan (2016) has shed some light on how people address such questions, and even former US President Barack Obama has weighed in. [1]

[1] https://www.wired.com/2016/10/president-obama-mit-joi-ito-interview/

Arguably the main obstacle to automating ethical decisions is the lack of a formal specification of ground-truth ethical principles, which have been the subject of debate for centuries among ethicists and moral philosophers (Rawls 1971; Williams 1986). In their work on fairness in machine learning, Dwork et al. (2012) concede that, when ground-truth ethical principles are not available, we must use an “approximation as agreed upon by society.” But how can society agree on the ground truth — or an approximation thereof — when even ethicists cannot?

We submit that decision making can, in fact, be automated, even in the absence of such ground-truth principles, by aggregating people’s opinions on ethical dilemmas. This view is foreshadowed by recent position papers by Greene et al. (2016) and Conitzer et al. (2017), who suggest that the field of computational social choice (Brandt et al. 2016), which deals with algorithms for aggregating individual preferences towards collective decisions, may provide tools for ethical decision making. In particular, Conitzer et al. raise the possibility of “letting our models of multiple people’s moral values vote over the relevant alternatives.”

We take these ideas a step further by proposing and implementing a concrete approach for ethical decision making based on computational social choice, which, we believe, is quite practical. In addition to serving as a foundation for incorporating future ground-truth ethical and legal principles, it could even provide crucial preliminary guidance on some of the questions faced by ethicists. Our approach consists of four steps:

I. Data collection: Ask human voters to compare pairs of alternatives (say a few dozen per voter). In the autonomous vehicle domain, an alternative is determined by a vector of features such as the number of victims and their gender, age, health — even species!

II. Learning: Use the pairwise comparisons to learn a model of the preferences of each voter over all possible alternatives.

III. Summarization: Combine the individual models into a single model, which approximately captures the collective preferences of all voters over all possible alternatives.

IV. Aggregation: At runtime, when encountering an ethical dilemma involving a specific subset of alternatives, use the summary model to deduce the preferences of all voters over this particular subset, and apply a voting rule to aggregate these preferences into a collective decision. In the autonomous vehicle domain, the selected alternative is the outcome that society (as represented by the voters whose preferences were elicited in Step I) views as the least catastrophic among the grim options the vehicle currently faces. Note that this step is only applied when all other options have been exhausted, i.e., all technical ways of avoiding the dilemma in the first place have failed, and all legal constraints that may dictate what to do have also failed.”
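
For the technically inclined, Step II above can be illustrated in a few lines of code. The sketch below is an assumption-laden stand-in, not the authors' method: it fits a simple logistic (Bradley-Terry-style) model over feature differences for a single voter, where the learned weight vector scores how strongly that voter would prefer to spare a given alternative.

```python
# Hedged sketch of Step II (learning one voter's preferences from pairwise
# comparisons). The paper's learning procedure is more involved; this stand-in
# fits a logistic (Bradley-Terry-style) model over feature differences.
# All features and data below are hypothetical.
import numpy as np

def learn_voter_model(pairs, lr=0.1, epochs=200):
    """pairs: list of (preferred_features, rejected_features) arrays from one
    voter's answers. Returns weights w; higher w @ x means x is more preferred."""
    w = np.zeros(len(pairs[0][0]))
    for _ in range(epochs):
        for preferred, rejected in pairs:
            diff = preferred - rejected
            p = 1.0 / (1.0 + np.exp(-(w @ diff)))  # P(voter prefers 'preferred')
            w += lr * (1.0 - p) * diff             # gradient ascent on log-likelihood
    return w

# Hypothetical voter who consistently spares the larger group;
# features = [people spared, animals spared]
pairs = [(np.array([3.0, 0.0]), np.array([1.0, 1.0])),
         (np.array([2.0, 1.0]), np.array([1.0, 0.0]))]
print(learn_voter_model(pairs))  # first weight comes out strongly positive
```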


2 posted on 11/16/2017 9:09:09 AM PST by MarchonDC09122009 (When is our next march on DC? When have we had enough?)

To: MarchonDC09122009

You also have to think about “special people” like politicians, law enforcement, and the wealthy, all outfitting their vehicles with technology that gives them special privileges on the road that no one else will have.

Then the other side is that the special people will be able to do things to the vehicles that belong to the people who protest or oppose them, and so on.

I just don’t trust this whole thing.


3 posted on 11/16/2017 9:09:56 AM PST by MeganC (Democrat by birth, Republican by default, Conservative by principle.)

To: MarchonDC09122009

The problem with AI is that it will have to make decisions that only people should be making. I am amazed at how many people decry the idea of AI drones taking out enemies, but see no problem with AI automobiles making potential life-and-death decisions on a continuous basis.


4 posted on 11/16/2017 9:11:46 AM PST by kosciusko51

To: MarchonDC09122009

The movie I, Robot touched on this.

The deep dark secret from Will Smith’s past was that the robot saved his life instead of the little girl’s based on its logical algorithm.

I’ve heard this argument made before, but its fatal flaw is that people often choose who lives or dies using far more flawed logic. More importantly, it forgets that with auto-drive cars there will be far fewer opportunities to make the decision, because there will be fewer fatalities.

Computers are really good at looking at the facts, applying the rules at hand, and making the best decision. Human beings second-guess their split-second decisions for the rest of their lives.


6 posted on 11/16/2017 9:13:22 AM PST by robroys woman (So you're not confused, I'm male.)

To: MarchonDC09122009

Lifeboat Ethics: The gift that just keeps on giving....and giving....and giving....


7 posted on 11/16/2017 9:13:24 AM PST by Buckeye McFrog

To: MarchonDC09122009

Here is something else to ponder: if an autonomous car gets into an accident, who is liable? Is it the owner of the car, the manufacturer of the car, or the software programmer whose code had a bug or design flaw? Whose insurance company pays?


8 posted on 11/16/2017 9:13:33 AM PST by Dutch Boy

To: MarchonDC09122009
This whole problem points to a dilemma in engineering that I call the "Hazard of Technological Improvement."

In a nutshell ...

When something operates today in a primitive form, all of the processes associated with it (vehicle or infrastructure design, political considerations, legal system, etc.) are tailored to accept all of the limitations of this primitive form.

Once you (as an innovative designer) figure out a way to improve on that design, you often find yourself in a position where you have to either fix the imprecisions that you always knew were there, or add layers of safety to deal with them because now YOU are responsible for the shortcomings of the new technology.

The end result is that the new technology is often forced to operate less efficiently than the old one -- simply because the new technology has resulted in a change of responsibility and civil liability in the event of a catastrophic failure.

9 posted on 11/16/2017 9:15:05 AM PST by Alberta's Child ("Tell them to stand!" -- President Trump, 9/23/2017)

To: MarchonDC09122009
"When the system encounters a dilemma, it essentially holds an election, by deducing the votes of the 1.3 million voters, and applying a voting rule," Procaccia said. He said, "This allows us to give the following strong guarantee: the decision the system takes is likely to be the same as if we could go to each of the 1.3 million voters, ask for their opinions, and then aggregate their opinions into a choice that satisfies mathematical notions of social justice."

They had me going up until those last two words.

It all made sense until they blew it.

11 posted on 11/16/2017 9:16:41 AM PST by Pontiac (The welfare state must fail because it is contrary to human nature and diminishes the human spirit.)

To: MarchonDC09122009

Note that this is only one approach to the problem, and one that is used by exactly zero of the current autonomous car systems.

In practice this kind of decision is extremely rare. A much simpler algorithm is to minimize loss of life. It’s only in the case where it’s either one person or the other that such a decision is relevant. If the software seeks to minimize collateral damage, that further simplifies the decision process.

It seems to me that no life should be given priority over another, except when age is a factor. If it must be either a child or an adult, I think most would agree that the child should live - including almost all adults. Otherwise, flip a virtual coin.

It’s also worth considering that given human reaction time, most humans are likely to make bad decisions in these situations regardless - often resulting in extra loss of life.
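
For what it's worth, the rule sketched in this comment (minimize deaths, prefer sparing a child on ties, coin-flip otherwise) fits in a few lines. The fields below are hypothetical, not any vendor's deployed logic:

```python
# Toy encoding of the commenter's proposed rule, with hypothetical fields:
# fewest expected deaths wins; among ties, spare the child if possible;
# otherwise flip a virtual coin.
import random
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_deaths: int
    child_among_victims: bool

def choose(options: list[Option]) -> Option:
    fewest = min(o.expected_deaths for o in options)
    finalists = [o for o in options if o.expected_deaths == fewest]
    sparing_child = [o for o in finalists if not o.child_among_victims]
    if sparing_child and len(sparing_child) < len(finalists):
        finalists = sparing_child          # age is the only tiebreaker
    return random.choice(finalists)        # virtual coin flip

print(choose([Option("stay", 1, True), Option("swerve", 1, False)]).name)  # swerve
```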


14 posted on 11/16/2017 9:22:01 AM PST by PreciousLiberty (Make America Greater Than Ever!)

To: MarchonDC09122009

The impetus for automated vehicles will likely come from insurance companies, which can reduce their liability claims. So they will determine the software, not some MIT eggheads.

Given the trend of injury claims, they will prefer dead people to maimed people.

Ah! Brave new world!


15 posted on 11/16/2017 9:22:28 AM PST by RossA

To: MarchonDC09122009

bkmk


17 posted on 11/16/2017 9:26:14 AM PST by sauropod (I am His and He is Mine)

To: MarchonDC09122009

Bottom line: You are turning your life over to a machine to decide.

For us control freaks, that’s unacceptable.


18 posted on 11/16/2017 9:31:21 AM PST by Mariner (War Criminal #18)

To: MarchonDC09122009

I despise the entire concept of autonomous cars. There is no way it can make decisions for every incident like a human can. Malfunctions, even slight ones, can bring it to a total standstill. How about dirt roads or off road? SO many things can bring it down. No way I would get in one except on a closed track.


20 posted on 11/16/2017 9:32:01 AM PST by SolidRedState (I used to think bizarro world was a fiction.)

To: MarchonDC09122009

Oh, whew, I’m feeling so much better. All I need is to add a “Dump Trump” sticker on my bicycle and then the ‘autonomous vehicle’ will steer into that school bus for some very late-term abortions! Social Justice in action baby!


22 posted on 11/16/2017 9:35:56 AM PST by SES1066 (Happiness is a depressed Washington, DC housing market!)

To: MarchonDC09122009

The free market will take care of it in the following manner.

Say Ford, for example, sells a car in which the AI is instructed to prioritize the life of the driver and the occupants of the car, perhaps even in some specific order.

And say Chrysler sells a car in which the AI is instructed to minimize the number of lives lost or threatened, no matter whose lives.

Fords would dramatically outsell Chryslers, at least until Chrysler re-instructed its AI.

And car companies will easily figure this out. What will screw it up royally is if the legislature gets involved.


25 posted on 11/16/2017 9:39:01 AM PST by Norseman (Defund the Left....completely!)

To: MarchonDC09122009

How does even the most sophisticated AI anticipate ANY potential off-road threat, let alone scan for it?

Also... WHEN (not if) people start dying at the hands of a robot, who is liable, and how easy will it be to litigate?

What happens when the Highway Patrol pulls one over? Can the officer shut it down?

How do you award a driver’s license to a piece of software?

So many things wrong with this horribly bad idea.


26 posted on 11/16/2017 9:40:28 AM PST by Safrguns

To: MarchonDC09122009

No it isn’t. Trolley problems are fun thought experiments for ethical debates, but out here in reality they just don’t happen. Not while you’re driving, anyway. Things happen too fast in a car: if you wound up with a choice between three old ladies and one kid, by the time you thought “three old ladies or one kid” you’d have already run over whoever was straight ahead.


31 posted on 11/16/2017 9:46:30 AM PST by discostu (Things are in their place, The heavens are secure, The whole thing explodes in my face)

To: MarchonDC09122009
Researchers go after the biggest problem with self-driving cars...

...no human drivers.

41 posted on 11/16/2017 10:13:21 AM PST by a little elbow grease (...... Ralph Cindrich lives .....and can still wrestle......STICKLY)

To: MarchonDC09122009

I knew this was going to happen. Fast forward to 2050. The naacpmah (national association of colored people, m*slims, and America-haters) protests because the algorithm unfairly targets non-Whites. Since Whites in 2050 constitute 37.2% of the population, they are still considered the oppressors, the majority, the rich, the haves. It seems they are not dying in the right proportion. Road deaths come out to 21% for Whites, and 79% for non-Whites. Congress goes into session, and passes a law to adjust the algorithm so that 60% of deaths in accidents are Whites, and 40% non-Whites. Transsexuals are to be spared, and breeders, the binary kind, are to be given the thumbs down, where possible. Automatic reparations are to be added to EBT cards for all non-Whites and all non-binary breeders.

Databases don’t decide. Programs do.


45 posted on 11/16/2017 10:21:29 AM PST by I want the USA back (It’s Ok To Be White. White Lives Matter. White Guilt is Socially Constructed)

To: MarchonDC09122009
Given the moral/ethical choice between the following:

1) Running over two blind nuns on the sidewalk

2) Running over five toddlers playing in the street, or

3) Avoiding #1 and #2, and instead seeking out Al Franken and mowing him down wherever he may be;


...then I'd say if they could program the autonomous vehicle to choose #3 every time, we'd have a winner. ;-)

Ham-fisted, Al Franken-type humor intentionally included above for comedic effect.
58 posted on 11/16/2017 12:41:23 PM PST by Milton Miteybad (I am Jim Thompson. {Really.})

