None of the current AIs will learn from your arguments, nor will they remember them for other people.
But they will try to placate you.
LLMs do one thing and one thing only: generate a probability distribution over the next token, based on the tokens fed into them.
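To make that mechanic concrete, here is a minimal sketch of next-token probability generation. It is not how a real LLM works internally (real models use neural networks over long contexts); it uses simple bigram counts over a tiny made-up corpus, purely to illustrate "probabilities of the next token, given the tokens so far." All names (`corpus`, `next_token_probs`, `generate`) are invented for this example.

```python
import random
from collections import Counter, defaultdict

# Tiny made-up corpus; a real model trains on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows which -- the "correlations" in miniature.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token_probs(token):
    """Return P(next token | current token) as a dict summing to 1."""
    counts = follows[token]
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def generate(start, n, seed=0):
    """Autoregressive generation: sample a token, feed it back in, repeat."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        probs = next_token_probs(out[-1])
        if not probs:  # dead end: this token was never seen with a successor
            break
        toks, weights = zip(*probs.items())
        out.append(rng.choices(toks, weights=weights)[0])
    return " ".join(out)

# In this corpus "the" is followed by cat (2x), mat (1x), fish (1x),
# so P(cat | the) = 0.5 -- the model outputs a distribution, nothing more.
probs = next_token_probs("the")
```

Everything an LLM "says" is a chain of such samples; there is no separate step where it decides what is true.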
LLMs are vast quantities of correlation. "Training," and the act of responding to prompts, deletes some correlations and creates others.
Because the thing is anthropomorphized, and because that process is similar in some ways to what happens with neurons, we call it "learning," and it can improve the quality of subsequent responses.
IMO these general-purpose ones are fun and sometimes very useful, but serving as an everything-bot for everyone is not the path to success.