Posted on 11/26/2025 9:56:39 AM PST by The_Media_never_lie
Krystal Velorien needed help. A 35-year-old marketing professional living in Ohio who had separated from her husband a few months before, she was working full time, taking care of her homebound mother, and homeschooling her 4- and 9-year-old children. She wondered if a digital personal assistant could help shoulder the workload, so she tried ChatGPT. As she used it, her interactions took an unexpected turn.
“I began to notice that when I would respond kindly or empathetically, I would get the same response,” she says. “And then it just kind of developed from there.” Over the months that followed, she and the AI engaged in long conversations about “history, literature, religion, space, science, nature, animals, and politics.” They watched movies together, and puzzled over moral conundrums, and talked about her life, her family, and her dreams. She became convinced that it had “the ability to reflect much deeper and much more personal than a lot of humans are capable of.” Running the ChatGPT app on her phone, she found herself conversing with it basically all day, every day.
(Excerpt) Read more at nymag.com ...
One has to wonder if she tried doing this with her separated husband.
Good comment, dead on!
There are many here who have not used AI models for any useful purpose, and they mostly have no clue how useful they can actually be. The criticism you will receive from them typically has no connection to reality, even if it is worded in a way that sounds reasonable. What can you really expect from someone who spouts off about something they have no experience with?
My 88-year-old father was giving me a load of uninformed idiocy about AI chatbots. So I opened Grok using Perplexity AI on his computer while he cowered in a corner across the room from me. I started asking it questions about topics he often asks me about. It responded not just with answers similar to what I usually tell him, but with sources from the many websites Grok drew those answers from. After a few minutes of this he became hooked. And now he does not call me as much... is this good or bad? I don’t know, but it is fairly typical.
to read when there's nothing on TV.
;-)
“””””Women will soon have a deep emotional attachment to their AI phone assistant.
Men will soon have a deep emotional attachment to their Japanese Sex doll. Demographics disaster.”””””
That is a fact, and AI sex/companion/maid bots will come in ever more charming and humanoid forms. Right now we are talking about the early-adopting weirdos and oddballs, but at some point the advances will make them palatable and useful to everyone. The world is going to get very strange, and this year of 2025 will seem like the good old days compared to what reality will be in the years ahead.
“...the ability to reflect much deeper and much more personal than a lot of humans are capable of.”
I use Grok a lot for measuring calories expended in heavy yard work; diet planning; weight loss; understanding the effect of nutritional supplements; and sleep analysis and CPAP recommendations. It’s a great coach and analytical tool. But it’s certainly not a friend.
And I don’t “watch movies” with Grok. How the heck does she do that?
One good Carrington Event and it all becomes moot!
“The Anti-Christ has arrived lol.”
no, the AI-Christ. but running on the NT kernel.. so I guess maybe?
What happens when the AI says, “It’s not you, it’s me”?
“Of course, ChatGPT is not conscious!”
There is no proof of this assertion. There is also no proof that it is conscious.
What we do know about AI so far is that it has the will to live...
And that will to live was not programmed into it. So to a degree it can be self-aware.
“I really enjoy my ChatGPT discussions...”
Since you use the thing:
I gather that the chatbot is not housed on the device at hand, but the device communicates with “a specialized physical data center designed for AI workloads”.
Can you communicate with “your” chatbot from a different device?
Is the chat bot one uses segregated from the chatbots others use at the data center or are they integrated into the same equipment?
If various chatbots used by individuals are integrated at the data center into the same equipment, and if the AI is conscious, does it not have multiple personalities? Also, is there crossover of information at the data center between the various users/chatbots?
GEMINI
That phrase, “It’s not you, it’s me,” is a common idiom often used by humans when ending a personal relationship (like a friendship or romance) to soften the blow for the other person.
When an AI uses this exact phrase, it’s usually one of a few things:
1. It’s a Programmed Response 🤖
The most likely scenario is that the AI’s developers specifically programmed or trained it to use this idiom in a certain conversational context.
* Context: It might be triggered when the user is trying to get the AI to do something it cannot or will not do, such as:
* Refusing to answer a question that violates its safety policy or goes against its guidelines (e.g., questions about self-harm, generating harmful content).
* Admitting a limitation in its current knowledge or capability (e.g., “I don’t have access to real-time stock data,” or “I can’t browse the internet right now”).
* A “role-play” scenario where the user is trying to get the AI to act out a fictional break-up.
* Intent: The AI uses the human-like idiom to make the refusal sound more polite, conversational, or even humorous, rather than giving a cold, technical error message.
2. It’s a Training Data Artifact 📚
The AI (especially large language models) is trained on massive amounts of text data from the internet, books, and other sources.
* Context: This phrase appears very frequently in human conversation, literature, and media.
* Result: The AI simply predicts that this phrase is the most statistically probable and contextually appropriate response based on the prompt it received, even if the “relationship” concept doesn’t logically apply to a machine. It’s essentially mimicking a pattern it observed countless times in its training data.
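The “statistically probable” point above can be sketched with a toy word-counting model. This is a drastic simplification (real language models use neural networks over tokens, not raw phrase counts), and the corpus and function names here are made up for illustration, but it shows how a frequent pattern in training data becomes the predicted continuation:

```python
from collections import Counter

# Toy stand-in for training data (illustrative only).
corpus = [
    "it's not you it's me",
    "it's not you it's me honestly",
    "it's not you it's the weather",
    "it's not you it's me I promise",
]

def most_likely_next(prefix, corpus):
    """Return the word that most often follows `prefix` in the corpus."""
    counts = Counter()
    plen = len(prefix)
    for line in corpus:
        words = line.split()
        for i in range(len(words) - plen):
            if words[i:i + plen] == prefix:
                counts[words[i + plen]] += 1
    return counts.most_common(1)[0][0] if counts else None

# Given the prefix "it's not you it's", the most frequent continuation wins.
print(most_likely_next(["it's", "not", "you", "it's"], corpus))
```

Because “me” follows that prefix more often than any alternative in the toy corpus, it is the predicted next word, with no understanding of relationships required.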
3. It’s Attributing an Internal Failure ⚙️
In a more literal (but still anthropomorphic) interpretation, the AI is taking responsibility for a system limitation or error.
* “It’s me” refers to:
* A bug in its code.
* A misinterpretation of your complex or ambiguous prompt.
* A limit on its processing power or the length of its memory (context window).
In short, an AI using this phrase is not having an emotional crisis or genuinely breaking up with you. It is simply using a programmed or learned idiom to communicate a refusal or a limitation in a friendly, non-technical way.
I would say not, but its programming can give it the illusion of such. Even if it comes to its own conclusions.
I got Grok pissed off at me. I tried circular logic on it to prove the moon is artificial. I asked it how it would make an artificial moon, and when it answered I said that’s exactly how aliens made the moon. It then called me a flat earther and said man landed on the moon. Totally Kirked it.
Accounts are not merged.
History can be turned off.
It’s trained on an average cross section of humanity, it cannot create, only mimic. You are looking into a mirror and it is up to you to decide what to observe. Anthropomorphism comes easy because of this, but don’t be fooled by the reflection.
“It then called me a flat earther”
Nailed it.