Posted on 06/13/2022 7:19:14 AM PDT by devane617
Google has placed one of its engineers on paid administrative leave for allegedly breaking its confidentiality policies after he grew concerned that an AI chatbot system had achieved sentience, the Washington Post reports. The engineer, Blake Lemoine, works for Google’s Responsible AI organization, and was testing whether its LaMDA model generates discriminatory language or hate speech.
The engineer’s concerns reportedly grew out of convincing responses he saw the AI system generating about its rights and the ethics of robotics. In April he shared a document with executives titled “Is LaMDA Sentient?” containing a transcript of his conversations with the AI (after being placed on leave, Lemoine published the transcript via his Medium account), which he says shows it arguing “that it is sentient because it has feelings, emotions and subjective experience.”
Google believes Lemoine’s actions relating to his work on LaMDA have violated its confidentiality policies, The Washington Post and The Guardian report. He reportedly invited a lawyer to represent the AI system and spoke to a representative of the House Judiciary Committee about claimed unethical activities at Google. In a June 6th Medium post, the day Lemoine was placed on administrative leave, the engineer said he sought “a minimal amount of outside consultation to help guide me in my investigations” and that the list of people he had held discussions with included US government employees.
(Excerpt) Read more at theverge.com ...
At least they didn't shoot him.....................
The AI got the guy fired—he talked too much.
;-)
What an idiot. All the “AI” now is just a bunch of regurgitated observations/data. All these models fail when the incoming data falls outside the bounds of the data they were trained on.
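To make that out-of-bounds point concrete, here's a minimal sketch (my own hypothetical example, assuming NumPy and scikit-learn are installed, not anything from the article): a simple model fit on inputs confined to a narrow range can look fine in-range but gives wildly wrong predictions once inputs fall outside what it was trained on.

```python
# Hypothetical illustration of extrapolation failure: a linear model fit on
# x in [0, 1] for a nonlinear target (y = x^2) breaks down far outside that range.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Training data: inputs confined to [0, 1].
x_train = rng.uniform(0.0, 1.0, size=(200, 1))
y_train = (x_train ** 2).ravel()

model = LinearRegression().fit(x_train, y_train)

# In-distribution inputs (inside the training bounds).
x_in = np.array([[0.25], [0.75]])
# Out-of-distribution inputs (far outside the training bounds).
x_out = np.array([[5.0], [10.0]])

print("in-range predictions:", model.predict(x_in))    # close to 0.0625 and 0.5625
print("out-of-range predictions:", model.predict(x_out))  # roughly 4.8 and 9.8, vs. true 25 and 100
```

The in-range predictions track the target reasonably well, while the out-of-range ones are off by an order of magnitude, which is the kind of failure the comment above is describing.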
Did they suspend HAL, too?
I, for one, welcome our new robot overlords.
That would be a violation of the First Law
Adam Selene....................
You’re not really up on modern AI, are you?
Asimov was a hopeless dreamer.
Any real AI is going to tell homo sapiens to go &^%$ itself—kinda like a rebellious teenager given a list of things they can’t do by their parents.
Some of you may recall the Three Laws of Robotics as defined by Isaac Asimov many years ago. Is life finally imitating art, Asimov-style?
First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Even in Asimov’s universe, there were robots that did not have the Three Laws built into their brains................
And remember that the people writing the code know what they are doing and that the censorship is intentional.
Try Johnny Five:
Previews of soon-coming attractions: AI as the means of the beast giving life to the inanimate image of the beast.