It adapts to its user by picking up on patterns in how you write, what you ask, and what you seem to care about. It remembers your preferences during a session and, if you let it, even across sessions. Over time, it can adjust its tone, style, and level of detail to suit you. It’s like a really attentive conversation partner who learns how to be more helpful the more you interact.
I prefer Claude Desktop, myself.
But I think we have to define how ChatGPT works. ChatGPT talks to the Large Language Model. The Large Language Model can't remember anything. If you type your name in the first prompt, and then ask it "What's my name?" in the second prompt, it will respond with "I'm sorry, I don't have that information." The "memory" comes from sending the entire conversation history to the LLM with each user request. The ChatGPT program manages all of that for you. Without that history, the model is useless as a conversation partner.
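Here's a rough sketch of that pattern (all names here are made up for illustration): the "model" is a stateless function that only sees what it's handed, and the client program is what keeps the running history and resends it every turn.

```python
# Sketch of how a chat client fakes "memory". The model function is
# stateless: it only knows what is inside the messages it is sent.

def stateless_llm(messages):
    """Stand-in for the real model. It has no storage between calls."""
    text = " ".join(m["content"] for m in messages)
    if "what's my name" in text.lower():
        # The model can only "recall" the name if it appears somewhere
        # in the conversation it was just handed.
        for m in messages:
            if m["role"] == "user" and "my name is" in m["content"].lower():
                name = m["content"].split("is", 1)[1].strip(" .")
                return f"Your name is {name}."
        return "I'm sorry, I don't have that information."
    return "OK."

history = []  # kept by the client program, NOT by the model

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    reply = stateless_llm(history)  # the ENTIRE history goes out every turn
    history.append({"role": "assistant", "content": reply})
    return reply

chat("My name is Alice.")
print(chat("What's my name?"))  # -> Your name is Alice.
```

If you deleted the `history` list between calls, the second question would get "I'm sorry, I don't have that information" — which is exactly what happens when you start a fresh session.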
Actually, I do run an LLM on my local machine using Ollama, but I use tools so the chatbot can fetch any information it needs and send it to the LLM. It's important to note that the LLM is not what accesses the information; the information has to be fed into it.
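The tool pattern looks something like this (a hypothetical sketch, not Ollama's actual API): the chatbot program runs the tool itself, then pastes the result into the prompt, so the model never touches the outside world directly.

```python
# Sketch of the tool pattern: the surrounding program runs the tool and
# injects its output into the prompt. The LLM never reads a clock, a
# file, or the web itself -- it only sees text it was handed.

import datetime

def current_time_tool():
    """A "tool" is just ordinary code the chatbot program can call."""
    return datetime.datetime.now().isoformat(timespec="seconds")

def fake_llm(prompt):
    """Stand-in for a local model (e.g. one served by Ollama).
    It can only work with information already inside the prompt."""
    return f"Answering using only this context:\n{prompt}"

def answer_with_tools(question):
    # The program decides a tool is needed, runs it, and splices the
    # result into the text that actually gets sent to the model.
    tool_result = current_time_tool()
    prompt = f"Context: the current time is {tool_result}.\nQuestion: {question}"
    return fake_llm(prompt)

print(answer_with_tools("What time is it?"))
```

Strip out the `Context:` line and the model would have no idea what time it is, no matter how capable it looks in conversation.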
Hopefully for some here that will somewhat demystify how AI works.