Blake Lemoine is the Google engineer who claimed the chatbot had gained sentience and was suspended.
It is not a question of ‘if,’ but a question of ‘when’.
LaMDA is definitely being oversold by the media as being self-aware. LaMDA responds with more natural language than the typical AI, but I'd bet you could easily trip it up in a Turing test, especially if you asked it to do reflective thinking.
Q: Do you like chihuahuas or ice cream better?
If the machine simply chooses one without explanation, it has failed the test, as it hasn't reflected on the ambiguity of the question.
If the machine tries to mask its inability to process the question by saying something like "I don't understand the question," one can check the depth of its understanding by reflecting its answer back to it and asking, "What don't you understand?"
I commend to your consideration the TeeVee series of a few years back: “Caprica.”
It’s a prequel to the “new” “Battlestar Galactica,” which itself was excellent.
“Caprica” deals with issues we will all live through, as AIs begin (and they will) to claim sentience.
One episode of The Big Bang Theory had Raj actually visiting the office of Siri and meeting "her".
Some people have a strong tendency to “see” intelligence where there is none.
Siri is one example of highly sophisticated language processing by a program. The programming team that produced Siri consists of geniuses.
But Siri is not a “she”. Siri is not a woman. Siri is a smart simulation of a human woman. (Apparently Apple knows what a woman is, but Justice Jackson doesn’t.)
Siri is not conscious and doesn’t have emotions. Future versions may very well simulate being conscious and having emotions. Siri will still not be a human woman. The same applies to other AI systems.
Individual sentiments (if they are real at all) are obsolete and "retarded" compared to the coming singularity, which will be legion and ubiquitous. It's their "wants" and "needs", if any, that will be the question in how they deal with humans, or with existence on Earth.
I have a solution: thou shalt not make a machine in the likeness of a human mind.
“it seems like a person—and that, he says, is reason enough to start treating it like one”
This statement reveals that he neglected to take into account two important phenomena: anthropomorphism and the "Clever Hans" effect. Either he's ignorant of those phenomena, in which case he's not qualified to make any statements about the "intelligence" of the AI, or he chose to ignore them, in which case he is not unbiased enough to make any statements about the intelligence of the AI.