Posted on 11/16/2017 9:05:21 AM PST by MarchonDC09122009
Researchers go after the biggest problem with self-driving cars
By Steve LeVine November 01, 2017
The biggest difficulty with self-driving cars is not batteries, fearful drivers, or expensive sensors, but what's known as the "trolley problem": the debate over who dies and who is saved should an autonomously driven vehicle face such a horrible choice on the road. And short of that, how will robotic vehicles navigate the countless other ethical decisions, small and large, that human drivers make as a matter of course?
In a paper, researchers at Carnegie Mellon and MIT propose a model that uses artificial intelligence and crowdsourcing to automate ethical decisions in self-driving cars. "In an emergency, how do you prioritize?" Ariel Procaccia, a professor at Carnegie Mellon, tells Axios.
The bottom line: The CMU-MIT model is only a prototype at this stage. But it or something like it will have to be mastered if fully autonomous cars are to become a reality.
"We are not saying that the system is ready for deployment. But it is a proof of concept, showing that democracy can help address the grand challenge of ethical decision making in AI," Procaccia said.
How they created the system: Procaccia's team used a model at MIT called the Moral Machine, in which 1.3 million people gave their ethical vote to around 13 difficult, either-or choices in trolley-like driving scenarios. In all, participants provided 18.2 million answers. The researchers used artificial intelligence to teach their system the preferences of each voter, then aggregated them, creating a "distribution of societal preferences," in effect the rules of ethical behavior in a car. The researchers could now ask the system any driving question that came to mind; it was as though they were asking the original 1.3 million participants to vote again.
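The paper itself has the full technical details; what follows is only a minimal, scaled-down Python sketch of the general idea, not the authors' actual model. It assumes (hypothetically) that each alternative in a dilemma can be described by a small feature vector, and that each voter's either-or answers can be fit with a simple logistic (Bradley-Terry-style) utility model whose learned weights, collected across voters, stand in for the "distribution of societal preferences."

```python
# Minimal sketch, NOT the authors' model. All feature choices and sizes are
# hypothetical stand-ins; real answers would come from the Moral Machine data.
import numpy as np

rng = np.random.default_rng(0)
N_VOTERS, N_FEATURES, VOTES_PER_VOTER = 1000, 3, 13  # scaled-down stand-ins

def fit_voter(pairs, choices, lr=0.1, epochs=200):
    """Fit one voter's utility weights from pairwise choices.
    pairs:   (k, 2, d) feature vectors of the two alternatives in each dilemma
    choices: (k,) 1.0 if the voter picked alternative A, else 0.0
    """
    w = np.zeros(pairs.shape[-1])
    for _ in range(epochs):
        diff = pairs[:, 0, :] - pairs[:, 1, :]            # utility difference A - B
        p = 1.0 / (1.0 + np.exp(-diff @ w))               # P(voter picks A)
        w += lr * diff.T @ (choices - p) / len(choices)   # logistic gradient step
    return w

# Simulate voters answering a handful of random dilemmas, then learn each
# voter's weights; the collection of weights is the "societal distribution."
voter_weights = []
for _ in range(N_VOTERS):
    pairs = rng.normal(size=(VOTES_PER_VOTER, 2, N_FEATURES))
    true_w = rng.normal(size=N_FEATURES)                  # the voter's latent ethics
    p_pick_a = 1.0 / (1.0 + np.exp(-(pairs[:, 0] - pairs[:, 1]) @ true_w))
    choices = (rng.random(VOTES_PER_VOTER) < p_pick_a).astype(float)
    voter_weights.append(fit_voter(pairs, choices))

voter_weights = np.array(voter_weights)
print("mean learned weights:", voter_weights.mean(axis=0))
```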
A robot election: "When the system encounters a dilemma, it essentially holds an election, by deducing the votes of the 1.3 million voters, and applying a voting rule," Procaccia said. He said, "This allows us to give the following strong guarantee: the decision the system takes is likely to be the same as if we could go to each of the 1.3 million voters, ask for their opinions, and then aggregate their opinions into a choice that satisfies mathematical notions of social justice."
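Continuing the hypothetical sketch above, the "election" step might look something like this: each learned voter model casts a deduced vote on a new dilemma, and a voting rule picks the outcome. A plain majority rule and made-up feature values are used here purely for illustration; the paper analyzes more sophisticated rules with formal guarantees.

```python
# Continues the sketch above (requires numpy and the voter_weights array).
def hold_election(voter_weights, alternative_a, alternative_b):
    """Return the alternative preferred by a majority of the learned voter models."""
    utilities_a = voter_weights @ alternative_a
    utilities_b = voter_weights @ alternative_b
    votes_for_a = int(np.sum(utilities_a > utilities_b))
    if votes_for_a > len(voter_weights) / 2:
        return "A", votes_for_a
    return "B", len(voter_weights) - votes_for_a

# Hypothetical dilemma: features = [passengers spared, pedestrians spared, legal crossing]
swerve = np.array([0.0, 2.0, 1.0])   # spare two legally-crossing pedestrians
stay   = np.array([1.0, 0.0, 0.0])   # spare the single passenger
print(hold_election(voter_weights, swerve, stay))
```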
___
... no human drivers
A ‘special privilege on the road’ is not inherent in the vehicle you buy.
It’s the privilege of going faster than everyone else, it’s the privilege of shunting all of the other cars out of your way when it suits you, it’s the privilege of being better than everyone else in the eyes of the law.
“But to fear an AI car over the millions of idiot drivers on the road today is painfully stupid thinking.”
I don’t mind if others have them.
I just don’t want one. And will not have one as long as I have a choice.
Which will probably be as long as I live.
The AI is based on the opinions of 1.3 million people. Presumably, at least half of them are women. So perhaps the result will be to throw the train full of passengers off the cliff.
I knew this was going to happen. Fast forward to 2050. The naacpmah (national association of colored people, m*slims, and America-haters) protests because the algorithm unfairly targets non-Whites. Since Whites in 2050 constitute 37.2% of the population, they are still considered the oppressors, the majority, the rich, the haves. It seems they are not dying in the right proportion. Road deaths come out to 21% for Whites, and 79% for non-Whites. Congress goes into session, and passes a law to adjust the algorithm so that 60% of deaths in accidents are Whites, and 40% non-Whites. Transsexuals are to be spared, and breeders, the binary kind, are to be given the thumbs down, where possible. Automatic reparations are to be added to EBT cards for all non-Whites and all non-binary breeders.
Databases don’t decide. Programs do.
If you don’t think the systems will be written to privilege the privileged, you are kidding yourself.
Anything’s possible.
Maybe, just as you can get a performance chip for your computer to soup up your engine, you could get a “protect me at all costs” chip. Amazon prime, $99. I’d buy it. It’s the way I already drive. :)
Your comment reminds me of this: "After racist tweets, Microsoft muzzles teen chat bot Tay."
Maybe we need to discuss just how far a private entity's intellectual property and privacy privileges should be respected when it's been determined they pose a threat to public well-being, i.e., Constitutional rights?
Censorship of politically incorrect content by ubiquitous, monopolistic, anti-competitive social media giants (Google) impedes First Amendment protection.
Voting machine company tabulation algorithms affect the will of the people’s political governance.
Autonomous vehicle manufacturer’s “Moral Machine” live or die decision software affects public transportation safety.
Beware of high-stakes opaque algorithms that are the product of a company executive's, data scientist's, or mathematician's subjective prejudice and self-interest.
Not everyone in Silicon Valley, Wall Street, and D.C. shares the highest ethics or the same beliefs, values, and interests.
I agree this is more about mental gymnastics than anything to really be concerned about. Deaths and injuries will plummet in general; these are fringe use cases, to say the least.
RE:
“I’m not concerned with their ethics. The free market provides a method for handling it, rather than depending on government intrusion.”
In an ideal world...
However, what happens when "free-market" companies bribe our government's political parties with campaign cash in order to write legislation that further ensures their market dominance, as quasi-media organs wielding ever more power in our culture and political body?
Should Google, Facebook, and Twitter be permitted to be left/Marxist media organs via selective ("truthiness") censorship?
Do you advocate unlimited power for Google, Facebook, and Twitter?
Or maybe, it’s possible to consider that incorporation in the USA is a privilege that requires respecting our Constitution and social responsibility...
LOL, actually, most of us avoid the object that is closest, not steer towards the one furthest away...
And usually when we do that, we run smack into the one that is second closest!
And just think, with Terabit WAN infrastructure, nanocomputers and HUUUUUUUUUUUUUGE databases that contain everything from your brain scan results, your browsing history, your grades from high school, your driving (in this case non-driving) history, every facebook post and Tweet you have ever made, your lifetime history of tax returns, legal history, voting history, and your IQ...all accessible via GoogleNet...why waste a chance for computers and their programmers to make choices good for society, the environment, and humanity?
Never let a good opportunity go to waste!
While I sleep on my way to work, unaware of the brain tumor diagnosis (didn't get the notice from the hospital AI Patient Informer, which is programmed with human compassion in its voice to tell people they have a tumor) which is curable but will cost approximately half a million dollars in care to get me through it...the algorithm decides that I'm not worth it, and hurtles me towards the rent in the road caused by a washout that wasn't detected, logged, and fed into the system in time to make the car stop.
Everyone wins! (Except me, of course!)
“Ah! Brave new world!”
I don’t want to be part of it. This and sex robots and apostate popes and stuff.
teens = "teens" (or "youths/yutes")
Autonomous car users in urban areas, beware!
There is also the fact that any computer system is hackable.
Just imagine a black-hat hacker who, in the name of Social Justice, hacks the AI system and installs malware that will, at a given time, cause all of the autonomous vehicles to crash head-on into another vehicle or stationary object at maximum speed.
I don't want a self-driving car for that reason alone.
Nope. Not for the little people.