Posted on 10/15/2014 2:51:17 AM PDT by samtheman
'Masquerading refers to a person in a given context being unable to tell whether the machine is human', explain the researchers -- this is the very essence of the Turing Test. This type of deception increases "metaphysical entropy," meaning any corruption of entities and impoverishment of being; since this leads to a lack of good in the environment -- or infosphere -- Floridi regards it as the fundamental evil. Following this premise, the team set out to ascertain where 'the locus of moral responsibility and moral accountability' lies in relationships with masquerading machines, and to establish whether it is ethical to develop robots that can pass a Turing test.
(Excerpt) Read more at sciencedaily.com ...
Well, let’s hope we do a better job teaching morals to robots than we are doing with our children.
Exactly. The bigger question is, can we teach reporters and teachers right from wrong?
The much larger question is: do we even have enough moral values, moral grounding, and sense of right and wrong anymore to be able to teach robots right from wrong in the first place?
Easy answer... no, because robots are machines that do our will via programming; they are our moral agents. If the programming is corrupt, the robot's actions will be too. There's just no way to shift blame for our corruption onto the machine. Like the saying goes... guns don't kill people, people do. It's the very same principle.
Garbage in, garbage out.
Western values of right and wrong are Judeo/Christian.
Liberals have no values except for the vestigial Judeo/Christian values in the society they are trying to destroy.
Makes for some very interesting contemplation. If one took 75 ‘robots’ and programmed 25 with “turn the other cheek” Christian values, 25 with “eye for an eye” values, and 25 with Islamic Sharia / Jihadi values, and then had them ‘compete’ with one another, what would happen? I'm sure this could be modeled with a computer program.
My thoughts are that those with an aggressive more selfish defining ideology have a short to moderate term advantage in society - which is why too many of them achieve positions of power. Over the longer term, however, the behavior is destructive - which is why we have progressed out of the Middle Ages (most of the world anyway).
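This could indeed be modeled. A minimal sketch below maps the three outlooks onto classic iterated-prisoner's-dilemma strategies: "turn the other cheek" as always-cooperate, "eye for an eye" as tit-for-tat, and the aggressive ideology as always-defect. The mapping and the group labels are my own crude assumptions, and the payoff values are the textbook defaults, not anything from the article.

```python
ROUNDS = 200
# Standard prisoner's-dilemma payoffs: (my move, their move) -> my points
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def always_cooperate(opp_history):   # "turn the other cheek"
    return "C"

def tit_for_tat(opp_history):        # "eye for an eye": mirror the last move
    return opp_history[-1] if opp_history else "C"

def always_defect(opp_history):      # aggressive, purely selfish
    return "D"

def match(strat_a, strat_b):
    """Play ROUNDS of iterated PD between two strategies; return both scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(ROUNDS):
        move_a = strat_a(hist_b)     # each strategy sees the opponent's history
        move_b = strat_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# 25 'robots' of each persuasion, competing in a round-robin tournament
agents = ([("turn_other_cheek", always_cooperate)] * 25
          + [("eye_for_an_eye", tit_for_tat)] * 25
          + [("aggressive", always_defect)] * 25)

totals = {"turn_other_cheek": 0, "eye_for_an_eye": 0, "aggressive": 0}
for i in range(len(agents)):
    for j in range(i + 1, len(agents)):
        (name_i, strat_i), (name_j, strat_j) = agents[i], agents[j]
        si, sj = match(strat_i, strat_j)
        totals[name_i] += si
        totals[name_j] += sj

per_agent = {name: total / 25 for name, total in totals.items()}
print(per_agent)
```

With this deterministic setup the aggressive group narrowly outscores tit-for-tat (34900 vs. 34375 points per robot), but only by exploiting the unconditional cooperators (29400), which roughly echoes the "short-term advantage" point above; change the population mix and the ordering can flip.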
Didn't work for John Kerry
Every robot ever made knows right from wrong. We engineers refer to it as “feedback”.
The programmers would need to know the difference.
Why can’t scientists speak English?
Exactly.
Yeah, the requirements are not well defined.
From my understanding, the requirements can vary from person to person, and there is no way to determine whose requirements are correct.
Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.