Posted on 01/08/2020 4:07:03 PM PST by ransomnote
Innovations in artificial intelligence are creating personalized cancer treatments, improving search and rescue disaster response, making our roadways safer with automated vehicles, and have the potential for so much more.
But with growing concerns about data privacy, big tech companies, and the rise of technology-enabled authoritarianism in China and elsewhere, more people are starting to wonder: Must we decide between embracing this emerging technology and following our moral compass?
That’s a false choice. We can advance emerging technology in a way that reflects our values of freedom, human rights and respect for human dignity.
As part of the Trump Administration’s national AI strategy—the American AI Initiative—the White House is today proposing a first-of-its-kind set of regulatory principles to govern AI development in the private sector. Guided by these principles, innovators and government officials will ensure that as the United States embraces AI we also address the challenging technical and ethical questions that AI can create.
As long as they don't incorporate Asimov's 3 laws. Those would totally destroy mankind.
I worry a bit about self-driving vehicles and the “trolley conundrum.”
They will have to have thousands (millions?) of “no win” scenarios and run them through the AI engine to see how it discerns.
Will Smith’s character in “I, Robot” (which no doubt had the Good Doctor turning in his grave over the abuse of the Laws AND his characters) did have a good point. We, as humans, value children over adults for the most part. The robot’s differential engine used survival probability exclusively.
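The difference the poster describes could be sketched as two decision rules: the raw survival-probability rule attributed to the movie robot, versus a rule that also weights who is being saved. This is a minimal illustration only; the names, probabilities, and "moral weight" values are all hypothetical.

```python
def pick_by_survival(candidates):
    """Choose the candidate with the highest raw survival probability."""
    return max(candidates, key=lambda c: c["p_survival"])

def pick_by_weighted_value(candidates):
    """Choose by survival probability scaled by a (hypothetical) moral weight."""
    return max(candidates, key=lambda c: c["p_survival"] * c["moral_weight"])

# The movie-style scenario: an adult with better odds vs. a child with worse odds.
# Numbers are invented for illustration.
adult = {"name": "adult", "p_survival": 0.45, "moral_weight": 1.0}
child = {"name": "child", "p_survival": 0.11, "moral_weight": 5.0}

print(pick_by_survival([adult, child])["name"])        # raw-probability rule saves the adult
print(pick_by_weighted_value([adult, child])["name"])  # weighted rule saves the child
```

The point of the contrast: the two rules disagree on exactly the kind of case the poster raises, so which one gets coded in is itself a value judgment, not a technical detail.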
Here’s the thing...
If a method or technology is developed that can be repurposed by evil men to do evil things, it will be so repurposed.
100% of the time.
That’s the First Law of Mariner.
Of course we have to make decisions. AI is like any other technology - a ‘double-edged sword’ which can be used for good or evil - and which can also be very unintelligent and even downright STUPID, as William Binney explains in this talk. Binney’s introduction begins at 1:50:30.
(I have no involvement with or interest in the Schiller Institute, but appreciate Binney wherever he shows up):
And the robot probably used probability to choose: the odds of saving the girl looked low. One thing that bothered me was all his hand-wringing. I might have felt sadness about the little girl, but I would have been happy to be alive instead of virtue signaling my mock outrage all over the place.
Three American values known to every G.I. that AI cannot compute... Mom, ApplePie, and FordChevyMopar!
Please explain.
You could have said "look, if you're not against Dr. Asimov's 3 laws... don't cross this line. If you are, do." Is that clear?
Also, in case you didn’t know, Asimov later added the Zeroth Law, above all the others: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
trolley conundrum.
Don't hit that switch. Let it crash.
>>Don't hit that switch. Let it crash<<
That has always been my default position. A little Calvinistic, but if I were not there, that is what would happen.
Now, if the people on the other track were a bunch of Birkenstock-wearing, gray-ponytail SJWs, my “differential engine” would have an easier time of it.
>>If a method or technology is developed that can be repurposed by evil men to do evil thing, it will be so repurposed.<<
Or games.