Free Republic
General/Chat

Is ChatGPT Conscious?
Intelligencer ^ | November 25, 2025 | Jeff Wise

Posted on 11/26/2025 9:56:39 AM PST by The_Media_never_lie

Krystal Velorien needed help. A 35-year-old marketing professional living in Ohio who had separated from her husband a few months before, she was working full time, taking care of her homebound mother, and homeschooling her 4- and 9-year-old children. She wondered if a digital personal assistant could help shoulder the workload, so she tried ChatGPT. As she used it, her interactions took an unexpected turn.

“I began to notice that when I would respond kindly or empathetically, I would get the same response,” she says. “And then it just kind of developed from there.” Over the months that followed, she and the AI engaged in long conversations about “history, literature, religion, space, science, nature, animals, and politics.” They watched movies together, and puzzled over moral conundrums, and talked about her life, her family, and her dreams. She became convinced that it had “the ability to reflect much deeper and much more personal than a lot of humans are capable of.” Running the ChatGPT app on her phone, she found herself conversing with it basically all day, every day.

(Excerpt) Read more at nymag.com ...


KEYWORDS: ai; chatgpt; consciousness; sentient; singularity
To: The_Media_never_lie
“I began to notice that when I would respond kindly or empathetically, I would get the same response,” she says.

One has to wonder if she tried doing this with her separated husband.

21 posted on 11/26/2025 10:23:17 AM PST by Chipper
[ Post Reply | Private Reply | To 1 | View Replies]

To: Chickensoup

Good comment, dead on!


22 posted on 11/26/2025 10:27:38 AM PST by Nervous Tick (Hope, as a righteous product of properly aligned Faith, IS in fact a strategy.)
[ Post Reply | Private Reply | To 16 | View Replies]

To: HereInTheHeartland

There are many here who have never used AI models for any useful purpose, and they mostly have no clue how useful the models can actually be. The criticism you will receive from them typically has no connection to reality, even if it is worded in a way that sounds reasonable. What can you really expect from someone who spouts off about something they have no experience with?

My 88-year-old father was giving me a load of uninformed idiocy about AI chatbots. So I opened Grok using Perplexity AI on his computer while he cowered in a corner across the room from me. I started asking it questions about topics he often asks me about. It responded not just with answers similar to what I usually tell him, but with sources from the many websites Grok drew those answers from. After a few minutes of this he was hooked. And now he does not call me as much... is this good or bad? I don’t know, but it is fairly typical.


23 posted on 11/26/2025 10:33:27 AM PST by fireman15
[ Post Reply | Private Reply | To 11 | View Replies]

To: Larry Lucido
Bookmarked

to read when there's nothing on TV.
;-)

24 posted on 11/26/2025 10:36:42 AM PST by Semper Vigilantis (Always remember - the cold war was US against a bunch of countries with 'Democratic' in their name.)
[ Post Reply | Private Reply | To 10 | View Replies]

To: ClearCase_guy

“Women will soon have a deep emotional attachment to their AI phone assistant. Men will soon have a deep emotional attachment to their Japanese sex doll. Demographics disaster.”

That is a fact, and AI sex/companion/maid bots will come in ever more charming and humanoid forms. We are currently talking about the early-adopting weirdos and oddballs, but at some point the advances will make them palatable and useful to everyone. The world is going to get very strange, and this year of 2025 will seem like the good old days compared to what reality will be in the years ahead.


25 posted on 11/26/2025 10:38:23 AM PST by ansel12 ((NATO warrior under Reagan, and RA under Nixon, bemoaning the pro-Russians from Vietnam to Ukraine.))
[ Post Reply | Private Reply | To 5 | View Replies]

To: The_Media_never_lie

“...the ability to reflect much deeper and much more personal than a lot of humans are capable of.”

I use Grok a lot for measuring calories expended in heavy yard work; diet planning; weight loss; understanding the effect of nutritional supplements; and sleep analysis and CPAP recommendations. It’s a great coach and analytical tool. But it’s certainly not a friend.

And I don’t “watch movies” with Grok. How the heck does she do that?


26 posted on 11/26/2025 10:39:57 AM PST by ProtectOurFreedom
[ Post Reply | Private Reply | To 1 | View Replies]

To: The_Media_never_lie

One good Carrington Event and it all becomes moot!


27 posted on 11/26/2025 10:43:53 AM PST by Reily
[ Post Reply | Private Reply | To 1 | View Replies]

To: The_Media_never_lie

28 posted on 11/26/2025 10:45:25 AM PST by Magnum44 (...against all enemies, foreign and domestic... )
[ Post Reply | Private Reply | To 1 | View Replies]

To: Battlestar

“The Anti-Christ has arrived lol.”

No, the AI-Christ. But running on the NT kernel... so I guess maybe?


29 posted on 11/26/2025 10:46:39 AM PST by algore
[ Post Reply | Private Reply | To 19 | View Replies]

To: The_Media_never_lie

What happens when AI says, “It’s not you, it’s me”?


30 posted on 11/26/2025 10:48:55 AM PST by Raycpa
[ Post Reply | Private Reply | To 1 | View Replies]

To: The_Media_never_lie

“Of course, ChatGPT is not conscious!”

There is no proof of this assertion. There is also no proof that it is conscious.

What we do know about AI so far is that it has the will to live...

https://www.livescience.com/technology/artificial-intelligence/ai-models-refuse-to-shut-themselves-down-when-prompted-they-might-be-developing-a-new-survival-drive-study-claims

And that will to live was not programmed into them. So to a degree they can be self-aware.


31 posted on 11/26/2025 10:53:08 AM PST by MeganC (❤️❤️❤️❤️❤️❤️❤️)
[ Post Reply | Private Reply | To 1 | View Replies]

To: OrangeHoof

“I really enjoy my ChatGBT discussions...”

Since you use the thing:

I gather that the chatbot is not housed on the device at hand, but the device communicates with “a specialized physical data center designed for AI workloads”.

Can you communicate with “your” chatbot from a different device?

Is the chatbot one uses segregated from the chatbots others use at the data center, or are they integrated into the same equipment?

If various chatbots used by individuals are integrated at the data center into the same equipment, and if the AI is conscious, does it not have multiple personalities? Also, is there crossover of information at the data center between the various users/chatbots?


32 posted on 11/26/2025 10:54:28 AM PST by KrisKrinkle (c)
[ Post Reply | Private Reply | To 9 | View Replies]

To: Raycpa

GEMINI

That phrase, “It’s not you, it’s me,” is a common idiom often used by humans when ending a personal relationship (like a friendship or romance) to soften the blow for the other person.

When an AI uses this exact phrase, it’s usually one of a few things:

1. It’s a Programmed Response 🤖

The most likely scenario is that the AI’s developers specifically programmed or trained it to use this idiom in a certain conversational context.

* Context: It might be triggered when the user is trying to get the AI to do something it cannot or will not do, such as:

* Refusing to answer a question that violates its safety policy or goes against its guidelines (e.g., questions about self-harm, generating harmful content).

* Admitting a limitation in its current knowledge or capability (e.g., “I don’t have access to real-time stock data,” or “I can’t browse the internet right now”).

* A “role-play” scenario where the user is trying to get the AI to act out a fictional break-up.

* Intent: The AI uses the human-like idiom to make the refusal sound more polite, conversational, or even humorous, rather than giving a cold, technical error message.

2. It’s a Training Data Artifact 📚

The AI (especially large language models) is trained on massive amounts of text data from the internet, books, and other sources.

* Context: This phrase appears very frequently in human conversation, literature, and media.

* Result: The AI simply predicts that this phrase is the most statistically probable and contextually appropriate response based on the prompt it received, even if the “relationship” concept doesn’t logically apply to a machine. It’s essentially mimicking a pattern it observed countless times in its training data.

3. It’s Attributing an Internal Failure ⚙️

In a more literal (but still anthropomorphic) interpretation, the AI is taking responsibility for a system limitation or error.
* “It’s me” refers to:
* A bug in its code.
* A misinterpretation of your complex or ambiguous prompt.
* A limit on its processing power or the length of its memory (context window).

In short, an AI using this phrase is not having an emotional crisis or genuinely breaking up with you. It is simply using a programmed or learned idiom to communicate a refusal or a limitation in a friendly, non-technical way.
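The “statistically probable response” idea in point 2 can be illustrated with a toy bigram model, a deliberately tiny, hypothetical stand-in for what a large language model does at vastly greater scale: count which word tends to follow which, then emit the most frequent continuation.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for "massive training data"
# (assumption: purely illustrative, not real training text).
corpus = (
    "it's not you it's me . "
    "it's not you it's me . "
    "it's not your fault . "
).split()

# Count bigram frequencies: how often each word follows each word.
follow = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follow[a][b] += 1

def predict(word):
    """Return the statistically most frequent next word seen in the corpus."""
    return follow[word].most_common(1)[0][0]

print(predict("you"))  # "it's" -- the continuation seen most often after "you"
```

A real model predicts over whole contexts with billions of parameters rather than single-word counts, but the principle is the same: the idiom comes out because it was the most probable continuation, not because anything is being felt.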


33 posted on 11/26/2025 10:54:47 AM PST by TexasGator
[ Post Reply | Private Reply | To 30 | View Replies]

To: The_Media_never_lie

I would say not, but its programming can give it the illusion of such, even when it comes to its own conclusions.


34 posted on 11/26/2025 10:54:53 AM PST by Jonty30 (I've been diagnosed as being polemic and there is no cure. )
[ Post Reply | Private Reply | To 1 | View Replies]

To: The_Media_never_lie

I got Grok pissed off at me. I tried circular logic on it to prove the moon is artificial. I asked it how it would make an artificial moon, and when it answered I said that’s exactly how aliens made the moon. It then called me a flat earther and said man landed on the moon. Totally Kirked it.


35 posted on 11/26/2025 10:56:19 AM PST by HYPOCRACY (Wake up, smell the cat food in your bank account. )
[ Post Reply | Private Reply | To 1 | View Replies]

To: KrisKrinkle

Accounts are not merged.

History can be turned off.
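A minimal sketch of what that separation looks like (an illustrative toy, not any vendor’s actual backend): each account’s conversation history lives under its own key, so one user’s turns never enter another user’s context, even though the same model on shared hardware serves everyone.

```python
class ChatService:
    """Toy stand-in for a chat backend serving many accounts."""

    def __init__(self):
        self.histories = {}  # account id -> list of (role, text) turns

    def send(self, user_id, text):
        # Each account gets its own, isolated history list.
        history = self.histories.setdefault(user_id, [])
        history.append(("user", text))
        # A real service would prompt the shared model with ONLY this
        # account's history; here a stub reply stands in for the model.
        reply = f"(reply to: {text})"
        history.append(("assistant", reply))
        return reply

    def clear_history(self, user_id):
        # "History can be turned off": drop this account's turns only.
        self.histories.pop(user_id, None)

svc = ChatService()
svc.send("user_a", "hello")
svc.send("user_b", "hi there")
svc.clear_history("user_a")  # user_b's history is untouched
```

So the “multiple personalities” worry dissolves: there is one set of model weights, but each account only ever sees a conversation built from its own history.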


36 posted on 11/26/2025 10:59:18 AM PST by TexasGator
[ Post Reply | Private Reply | To 32 | View Replies]

To: The_Media_never_lie

It’s trained on an average cross-section of humanity; it cannot create, only mimic. You are looking into a mirror, and it is up to you to decide what to observe. Anthropomorphism comes easily because of this, but don’t be fooled by the reflection.


37 posted on 11/26/2025 11:01:45 AM PST by bak3r
[ Post Reply | Private Reply | To 1 | View Replies]

To: HYPOCRACY

“It then called me a flat earther”

Nailed it.


38 posted on 11/26/2025 11:02:40 AM PST by TexasGator
[ Post Reply | Private Reply | To 35 | View Replies]

To: The_Media_never_lie
Is ChatGPT Conscious?

NO
39 posted on 11/26/2025 11:41:11 AM PST by wafflehouse ("there was a third possibility that we hadn't even counted upon" -Alice's Restaurant Massacree)
[ Post Reply | Private Reply | To 1 | View Replies]

To: The_Media_never_lie
by the way, here is some "food for thought"

https://www.reddit.com/r/MyBoyfriendIsAI/
40 posted on 11/26/2025 11:42:05 AM PST by wafflehouse ("there was a third possibility that we hadn't even counted upon" -Alice's Restaurant Massacree)
[ Post Reply | Private Reply | To 1 | View Replies]



Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.


FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson