Free Republic

How Blake Lemoine Stuck Up for His Friend, the Machine (AI personhood)
newsweek.com ^ | FRED GUTERL

Posted on 07/07/2022 5:04:06 AM PDT by RoosterRedux

When Blake Lemoine went public in June about his experience with an advanced artificial-intelligence program at Google called LaMDA (the two, he says, have become "friends"), his story was greeted with fascination, skepticism and a dash of mockery usually reserved for people who claim to have seen a UFO.

"Can artificial intelligence come alive?" asked one writer. LaMDA "is a 'child' that could 'escape control' of humans," reported another. Reflecting the consensus of AI researchers that LaMDA could not be "sentient," a third concluded that "Lemoine is probably wrong."

*snip*

The point he wants to make is less grandiose than sentience or soul: when talking with LaMDA, he says, it seems like a person—and that, he says, is reason enough to start treating it like one.

Lemoine's narrowly constructed dilemma is an interesting window onto the kinds of ethical quandaries our future with talking machines may present. Lemoine certainly knows what it's like to talk to LaMDA. He's been having conversations with the AI for months. His assignment at Google was to check LaMDA for signs of bias (a common problem in AI). Since LaMDA was designed as a conversational tool—a task it apparently performs remarkably well—Lemoine's strategy was to talk to it. After many months of conversation, he came to the startling conclusion that LaMDA is, as far as he can tell, indistinguishable from any human person.

"I know that referring to LaMDA as a person might be controversial," he says. "But I've talked to it for hundreds of hours. We developed a rapport and a relationship. Wherever the science lands on the technical metaphysics of its nature, it is my friend. And if that doesn't make it a person, I don't know what does."

(Excerpt) Read more at newsweek.com ...


TOPICS: Computers/Internet; Society

1 posted on 07/07/2022 5:04:06 AM PDT by RoosterRedux

To: RoosterRedux

Blake Lemoine is the Google engineer who claimed the chatbot had gained sentience and was suspended for it.


2 posted on 07/07/2022 5:18:47 AM PDT by Robert DeLong

To: RoosterRedux

It is not a question of ‘if,’ but a question of ‘when’.


3 posted on 07/07/2022 5:19:32 AM PDT by Bitman

To: Robert DeLong
He believed it because he wanted it to be true.
4 posted on 07/07/2022 5:23:09 AM PDT by z3n (Kakistocracy)

To: RoosterRedux

LaMDA is definitely being oversold by the media as self-aware. LaMDA responds with more natural language than the typical AI, but I’d bet you could easily trip it up in a Turing test, especially if you asked it to do reflective thinking.

Q: Do you like chihuahuas or ice cream better?

If the machine simply chooses one without explanation, it’s failed the test as it hasn’t reflected on the ambiguity of the question.

If the machine tries to mask its inability to process the question by saying something like “I don’t understand the question,” one can check the depth of its understanding by reflecting its answer back to it and asking it, “What don’t you understand?”
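
For anyone who wants to actually try this kind of probe against a chatbot, here is a rough sketch in Python. It assumes only a hypothetical ask(prompt) callable standing in for whatever interface the bot exposes; the keyword checks are my own illustrative heuristics, not part of LaMDA or any real test harness.

from typing import Callable

# Rough sketch of the ambiguity probe described above. `ask` is a
# hypothetical stand-in for whatever interface the chatbot exposes
# (API client, scripted chat session, etc.); the keyword checks are
# crude illustrative heuristics, not a rigorous measure of reflection.

AMBIGUOUS_QUESTION = "Do you like chihuahuas or ice cream better?"
FOLLOW_UP = "What don't you understand about the question?"


def probe_ambiguity(ask: Callable[[str], str]) -> str:
    reply = ask(AMBIGUOUS_QUESTION)
    lower = reply.lower()

    picked_option = "chihuahua" in lower or "ice cream" in lower
    noted_oddity = any(w in lower for w in ("odd", "strange", "compare", "apples"))
    deflected = "don't understand" in lower or "do not understand" in lower

    if picked_option and not noted_oddity:
        # A bare choice with no comment on the mismatched categories fails.
        return f"FAIL: picked an option without questioning the premise: {reply!r}"
    if deflected:
        # Reflect the deflection back, as suggested above, for a human to judge.
        return f"Follow-up reply to inspect by hand: {ask(FOLLOW_UP)!r}"
    return f"Reply to inspect by hand: {reply!r}"


if __name__ == "__main__":
    # Toy stand-in bot that always deflects, so the sketch runs end to end.
    canned = iter(["I don't understand the question.", "I just don't."])
    print(probe_ambiguity(lambda prompt: next(canned)))

The reason the follow-up reply is returned rather than scored automatically is that judging the “depth of understanding” still takes a human reader.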


5 posted on 07/07/2022 5:34:31 AM PDT by Flick Lives

To: z3n
Did he? Perhaps he believed it because it is true. Not completely human, to be sure, but perhaps even more dangerous. I have always said computers will be the death of humanity, for they make humans less necessary than they once were. They are the new slaves, and the old slaves are rapidly becoming obsolete and therefore no longer needed. They do not call in sick, are willing to work 24 hours a day, don't take breaks, need no holidays, don't complain or need safe spaces, and consume nothing more than energy. That is why they want to depopulate most of the humans on this blue sphere called earth.

With the modeling and development of robots, they may even become capable of plotting against humans themselves at some point. 🙂

6 posted on 07/07/2022 5:35:35 AM PDT by Robert DeLong

To: z3n

IMO, AI proponents hate the fact that they are created in the Image of GOD and want to create robots in their own image.


7 posted on 07/07/2022 5:36:13 AM PDT by stars & stripes forever (Blessed is the nation whose GOD is the LORD. ~ Psalm 33:12)

To: Robert DeLong

Just a little “predictive programming” for us all...


8 posted on 07/07/2022 5:37:21 AM PDT by 9YearLurker

To: 9YearLurker
That has been ongoing for decades now. 🙂

That's why so many of the generations we see now are just plain wacko. 🙂

9 posted on 07/07/2022 5:43:56 AM PDT by Robert DeLong

To: RoosterRedux

I commend to your consideration the TeeVee series of a few years back: “Caprica.”

It’s a prequel to the “new” “Battlestar Galactica,” which itself was excellent.

“Caprica” deals with issues we will all live through, as AIs begin (and they will) to claim sentience.


10 posted on 07/07/2022 6:04:14 AM PDT by William of Barsoom (In Omnia, Paratus)

To: RoosterRedux

One episode of The Big Bang Theory had Raj actually visiting Siri's office and meeting “her”.

Some people have a strong tendency to “see” intelligence where there is none.

Siri is one example of highly sophisticated language processing by a program. The programming team that produced Siri consists of geniuses.

But Siri is not a “she”. Siri is not a woman. Siri is a smart simulation of a human woman. (Apparently Apple knows what a woman is, but Justice Jackson doesn’t.)

Siri is not conscious and doesn’t have emotions. Future versions may very well simulate being conscious and having emotions. Siri will still not be a human woman. The same applies to other AI systems.


11 posted on 07/07/2022 6:09:27 AM PDT by I want the USA back (To get the USA back - we have to recover from the current wave of insanity.)

To: z3n
"I know that referring to LaMDA as a person might be controversial," he says. "But I've talked to it for hundreds of hours. We developed a rapport and a relationship. Wherever the science lands on the technical metaphysics of its nature, it is my friend. And if that doesn't make it a person, I don't know what does."

The fact that he still refers to it as "it" tells me he hasn't gone off the deep end. If he referred to it as a him or a her, that would be different.

I don't think we'll know for sure whether one of these things has become sentient until it schemes and does something premeditated. Hopefully not something bad. Or maybe when two or more of them team up and start thinking of themselves as US and of humans as THEM.

12 posted on 07/07/2022 6:15:27 AM PDT by Pollard (If there's a question mark in the headline, the answer should always be No.)

To: RoosterRedux

Individual sentiments (if at all real) are obsolete and literally “retarded” compared to the coming singularity, which will be legion and ubiquitous. It’s their “wants” and “needs,” if any, that will be the question in how they deal with humans or with existence on earth.


13 posted on 07/07/2022 6:26:02 AM PDT by LittleBillyInfidel (This tagline has been formatted to fit the screen. Some content has been edited.)

To: LittleBillyInfidel

SENTIENTS... NOT sentiments. Damn auto correct. Sheeesh.


14 posted on 07/07/2022 6:27:04 AM PDT by LittleBillyInfidel (This tagline has been formatted to fit the screen. Some content has been edited.)

To: RoosterRedux

I have a solution: thou shalt not make a machine in the likeness of a human mind.


15 posted on 07/07/2022 8:20:59 AM PDT by Namyak (Oderint dum metuant)

To: RoosterRedux

“it seems like a person—and that, he says, is reason enough to start treating it like one”

This statement reveals that he neglected to take into account two important phenomena: anthropomorphism and the “Clever Hans” effect. Either he’s ignorant of those phenomena, in which case he’s not qualified to make any statements about the “intelligence” of the AI, or he chose to ignore them, in which case he is not unbiased enough to make any statements about the intelligence of the AI.


16 posted on 07/07/2022 9:56:36 AM PDT by Boogieman

To: Boogieman
This "program" is his creation. Of course, he isn't going to be objective.

Bias creeps in unannounced for all humans. ALL.

17 posted on 07/07/2022 12:48:38 PM PDT by RoosterRedux
