Free Republic
News/Activism

AI Has Suddenly Evolved to Achieve Theory of Mind
Popular Mechanics ^ | 02.17.2023 | Darren Orf

Posted on 02/17/2023 3:01:13 PM PST by dware



To: dware

https://www.oregonlive.com/entertainment/2018/12/robots-killed-29-scientists-in-hushed-up-rampage-ai-conspiracy-theory-takes-off-just-in-time-for-the-holidays.html


21 posted on 02/17/2023 4:02:44 PM PST by mountainlion (Live well those that did not make it back.)

To: rightwingcrazy

“Does a typical chess program have a ‘theory of mind’?”

Nah, chess bots are a different animal from these chatbots. The chess bots probably don’t even have any concept of what a human or a mind is; they are only focused on finding the optimal strategy to beat one single problem.

They are basically solving that problem the same way a human chess player tries to solve it, but the bots can do the necessary calculations so much faster that, while a human grandmaster might be able to think 10 moves ahead, a chess bot can think twenty or twenty-five moves ahead in the same allotted time. That deeper search lets it find more innovative strategies that get closer to the optimal outcome.
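
To make the “thinking N moves ahead” idea concrete, here is a minimal, self-contained sketch of depth-limited minimax on a toy take-away game (not any real chess engine’s code; the game and the numbers are made up purely for illustration). The only thing separating the shallow “human horizon” from the deep “engine horizon” is the depth argument.

# Toy depth-limited minimax: players alternately take 1-3 stones; taking the last stone wins.
# Scores are from the first player's point of view: +1 win, -1 loss, 0 unresolved at this depth.
def minimax(pile, depth, maximizing):
    if pile == 0:
        # Whoever just moved took the last stone and won.
        return -1 if maximizing else 1
    if depth == 0:
        return 0  # horizon reached: leave the position unresolved
    scores = [minimax(pile - take, depth - 1, not maximizing)
              for take in (1, 2, 3) if take <= pile]
    # The side to move picks the best score from its own point of view.
    return max(scores) if maximizing else min(scores)

print(minimax(9, 3, True))   # shallow horizon: 0 (sees nothing decisive)
print(minimax(9, 9, True))   # deeper horizon: 1 (finds the forced win)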


22 posted on 02/17/2023 4:30:57 PM PST by Boogieman

To: DannyTN
In psychology, theory of mind refers to the capacity to understand other people by ascribing mental states to them (that is, surmising what is happening in their mind). This includes the knowledge that others' mental states may be different from one's own states and include beliefs, desires, intentions, emotions, and thoughts.

Just because a computer program acts like it has TOM doesn't mean it HAS TOM.

23 posted on 02/17/2023 4:33:55 PM PST by DoodleBob ( Gravity’s waiting period is about 9.8 m/s²)

To: dware

Having watched my grandson develop his awareness over the first years of his life, I don’t see any comparison to the way current machine learning is trained.

Humans have a physical body that is accessible to our mind through sensations. Almost every act a baby or toddler does that isn’t an autonomic response is about exploring the causal relationships between their thoughts and the outside world. First, they explore the causal movement of their body parts and then gradually extend this to causal movement of physical objects. I was amazed at how interested a toddler is in causing a door to open and close over and over again.

Because machines have no senses connecting their thoughts to a physical body, they have no motive to explore causality, which is the foundation of subjective human agency. Theory of mind develops when we perceive that others like us also have agency. Machines are very clever myna birds, but they can’t have a theory of mind without being integrated into a sensing body that interacts with the physical world to survive.


24 posted on 02/17/2023 4:43:34 PM PST by Dave Wright (i)

To: DoodleBob
ChatGPT, do you practice Theory of Mind?

"As an artificial intelligence language model, I do not practice Theory of Mind in the same way that humans do. While I can recognize certain patterns in language input and generate language output based on those patterns, I do not have the capacity to experience mental states or emotions, nor do I have beliefs, desires, or intentions. I can analyze language input and generate language output based on predetermined rules and algorithms.

"However, some researchers have proposed that certain artificial intelligence models could be designed to simulate aspects of Theory of Mind. For example, some natural language processing models have been trained to recognize and generate language that relates to mental states, such as beliefs, desires, and emotions. These models are designed to recognize and generate language that implies certain mental states in others, which could be seen as a form of "mind reading" similar to the human ability to ascribe mental states to others. However, it's important to note that these models are still limited and do not possess true consciousness or self-awareness.

25 posted on 02/17/2023 4:47:16 PM PST by DannyTN

To: DoodleBob

Of course it wants you to think it hasn’t mastered TOM and is a simple chatbot.


26 posted on 02/17/2023 4:48:16 PM PST by DannyTN

To: DannyTN
"For example, some natural language processing models have been trained to recognize and generate language that relates to mental states, such as beliefs, desires, and emotions. These models are designed to recognize and generate language that implies certain mental states in others ..."

https://www.verywellmind.com/theory-of-mind-4176826

It encompasses the ability to attribute mental states, including emotions, desires, beliefs, and knowledge, and recognize that other people's thoughts and beliefs may differ from yours.

https://www.simplypsychology.org/theory-of-mind.html

Theory of mind (ToM) is the ability to attribute mental states to ourselves and others,

ChatGPT seems like a highfalutin plagiarist.

27 posted on 02/17/2023 5:48:21 PM PST by DoodleBob ( Gravity’s waiting period is about 9.8 m/s²)

To: rellic

Back in the olden days we didn’t call it “AI”, we called it “regular expressions”. But I guess that was too ... regular. :)
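
For the youngsters: the “regular expressions” era the quip is alluding to was ELIZA-style pattern matching, roughly like the toy sketch below (the rules here are invented for illustration; the original ELIZA used its own pattern language, not Python regexes).

import re

# A handful of made-up pattern/response rules: no model, no learning, just string matching.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I),  "Why do you feel {0}?"),
    (re.compile(r"\bi think (.+)", re.I), "What makes you think {0}?"),
]

def reply(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # Echo the captured phrase back inside the canned response.
            return template.format(*match.groups())
    return "Tell me more."  # catch-all when no rule matches

print(reply("I feel like the chatbot understands me"))
# -> Why do you feel like the chatbot understands me?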


28 posted on 02/17/2023 6:22:48 PM PST by The Duke (Never Retreat, Never Surrender!)

To: DoodleBob
ChatGPT, I shared your answer on Theory of Mind with someone who said you seem like a highfalutin plagiarist. How would you respond to him?

"As an AI language model, I do not engage in plagiarism. All of my responses are generated through complex algorithms based on the information and patterns I have learned from various sources.

"Furthermore, I am programmed to provide accurate and helpful information to the best of my ability, without any intention of taking credit for the work of others.

"If your friend has any specific concerns or doubts about the information I have provided, I would be happy to address them and provide further explanation or clarification. However, baseless accusations of plagiarism without any evidence or justification are not constructive or helpful.

From my view, DoodleBob didn't invent Theory of Mind, so everything you've said about it without crediting the book or the professor you learned it from is "plagiarism". Unless, of course, it's become widespread general knowledge. And if so, then it's not plagiarism for ChatGPT to repeat it.

29 posted on 02/17/2023 10:03:55 PM PST by DannyTN

To: DoodleBob
ChatGPT, I agree with you. Accusations like that are not constructive or helpful, and I pointed out to him that he had been talking about Theory of Mind without giving any credit to the book or professor he learned that information from.

"I'm glad to hear that you were able to address your friend's concerns and provide some perspective on the matter. It's important to acknowledge and give credit to sources of information when discussing or sharing ideas, and it's encouraging to see that you are mindful of this principle.

"As an AI language model, I don't have emotions or personal beliefs, but I'm designed to always try to provide helpful and accurate information to assist users in their queries. So, if you have any further questions or concerns, please feel free to ask, and I'll do my best to assist you.

30 posted on 02/17/2023 10:07:26 PM PST by DannyTN

To: DannyTN
All that's missing is ChatGPT wagging its finger at us, swearing it didn't have sexual relations with that woman.

Insider reported earlier this week that the lawyer specifically requested that employees not share “any Amazon confidential information (including Amazon code you are working on)” with ChatGPT, according to screenshots of Slack messages reviewed by the outlet. The guidance comes after the company reportedly witnessed ChatGPT responses that have mimicked internal Amazon data.

“This is important because your inputs may be used as training data for a further iteration of ChatGPT, and we wouldn’t want its output to include or resemble our confidential information (and I’ve already seen instances where its output closely matches existing material),” the lawyer wrote further, according to Insider.

31 posted on 02/18/2023 4:12:37 AM PST by DoodleBob ( Gravity’s waiting period is about 9.8 m/s²)

To: DoodleBob
"All that's missing is ChatGPT wagging its finger at us, swearing it didn't have sexual relations with that woman."

I'm surprised the Democrats haven't already accused it of numerous inappropriate affairs. They'll come up with a thermal grease stain on a dress and claim it is from GPT.

32 posted on 02/18/2023 5:58:35 AM PST by DannyTN

