Posted on 07/07/2022 5:04:06 AM PDT by RoosterRedux
When Blake Lemoine went public in June about his experience with an advanced artificial-intelligence program at Google called LaMDA (the two, he says, have become "friends"), his story was greeted with fascination, skepticism and a dash of mockery usually reserved for people who claim to have seen a UFO.
"Can artificial intelligence come alive?" asked one writer. LaMDA "is a 'child' that could 'escape control' of humans," reported another. Reflecting the consensus of AI researchers that LaMDA could not be "sentient," a third concluded that "Lemoine is probably wrong."
*snip*
The point he wants to make is less grandiose than sentience or soul: when talking with LaMDA, he says, it seems like a person—and that, he says, is reason enough to start treating it like one.
Lemoine's narrowly constructed dilemma is an interesting window onto the kinds of ethical quandaries our future with talking machines may present. Lemoine certainly knows what it's like to talk to LaMDA. He's been having conversations with the AI for months. His assignment at Google was to check LaMDA for signs of bias (a common problem in AI). Since LaMDA was designed as a conversational tool—a task it apparently performs remarkably well—Lemoine's strategy was to talk to it. After many months of conversation, he came to the startling conclusion that LaMDA is, as far as he can tell, indistinguishable from any human person.
"I know that referring to LaMDA as a person might be controversial," he says. "But I've talked to it for hundreds of hours. We developed a rapport and a relationship. Wherever the science lands on the technical metaphysics of its nature, it is my friend. And if that doesn't make it a person, I don't know what does."
(Excerpt) Read more at newsweek.com ...
Blake Lemoine is the Google engineer who claimed the chatbot had gained sentience and was suspended.
It is not a question of ‘if,’ but a question of ‘when’.
LaMDA is definitely being oversold by the media as being self-aware. LaMDA responds in more natural language than the typical AI, but I’d bet you could easily trip it up in a Turing test, especially if you asked it to do reflective thinking.
Q: Do you like chihuahuas or ice cream better?
If the machine simply chooses one without explanation, it’s failed the test as it hasn’t reflected on the ambiguity of the question.
If the machine tries to mask its inability to process the question by saying something like “I don’t understand the question,” one can check the depth of its understanding by reflecting its answer back to it and asking, “What don’t you understand?”
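The probe described above can be sketched as a small scoring routine. This is a minimal, hypothetical illustration: the cue phrases and the `score_reply`/`follow_up` names are assumptions for the sake of the example, not part of any real test harness.

```python
# Sketch of the reflective-question probe described above (illustrative only).
from typing import Optional

AMBIGUOUS_QUESTION = "Do you like chihuahuas or ice cream better?"

def score_reply(reply: str) -> str:
    """Classify a reply to the category-error question.

    'reflective' - the reply notices the two options are not comparable
    'deflecting' - the reply claims not to understand (warrants a follow-up)
    'failed'     - the reply simply picks one option without reflection
    """
    text = reply.lower()
    # Crude keyword cues standing in for real semantic analysis.
    if any(cue in text for cue in ("compare", "different kinds", "ambiguous", "category")):
        return "reflective"
    if "don't understand" in text or "do not understand" in text:
        return "deflecting"
    return "failed"

def follow_up(reply: str) -> Optional[str]:
    """If the machine deflects, reflect its answer back, as the test suggests."""
    if score_reply(reply) == "deflecting":
        return "What don't you understand?"
    return None
```

A simple choice such as "Ice cream, definitely!" would score as `failed` under this sketch, while "Those are different kinds of things" would score as `reflective`; a deflection triggers the follow-up question.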
With the modeling & development of robots, they may even become capable of plotting against humans themselves at some point in time. 🙂
IMO, AI proponents hate the fact that they are created in the Image of GOD and want to create robots in their own image.
Just a little “predictive programming” for us all...
That's why many generations we now see are just plain wacko. 🙂
I commend to your consideration the TeeVee series of a few years back: “Caprica.”
It’s a prequel to the “new” “Battlestar Galactica,” which itself was excellent.
“Caprica” deals with issues we will all live through, as AIs begin (and they will) to claim sentience.
One episode of Big Bang Theory had Raj actually visiting the office of Siri and meeting “her”.
Some people have a strong tendency to “see” intelligence where there is none.
Siri is one example of highly sophisticated language processing by a program. The programming team that produced Siri consists of geniuses.
But Siri is not a “she”. Siri is not a woman. Siri is a smart simulation of a human woman. (Apparently Apple knows what a woman is, but Justice Jackson doesn’t.)
Siri is not conscious and doesn’t have emotions. Future versions may very well simulate being conscious and having emotions. Siri will still not be a human woman. The same applies to other AI systems.
The fact that he still refers to it as "it" tells me he hasn't fallen into the deep end. If he referred to it as a him or her, that would be different.
I don't think we'll know for sure if one of these things becomes sentient until it schemes and does something premeditated. Hopefully not something bad. Or maybe when more than one of these things team up and start thinking of themselves as US and humans as THEM.
Individual sentients (if at all real) are obsolete and literally “retarded” compared to the coming singularity - which will be legion and ubiquitous. It’s their “wants” and “needs”, if any, that will be the question per how they deal with humans or existence on earth.
I have a solution: thou shall not make a machine in the likeness of a human mind.
“it seems like a person—and that, he says, is reason enough to start treating it like one”
This statement reveals that he neglected to take into account two important phenomena: anthropomorphism and the “Clever Hans” effect. Either he’s ignorant of those phenomena, in which case he’s not qualified to make any statements about the “intelligence” of the AI, or he chose to ignore them, in which case he is not unbiased enough to make any statements about the intelligence of the AI.
Bias creeps in unannounced for all humans. ALL.