LLMs are vast collections of correlations. "Training," and to a lesser extent the back-and-forth of prompts and responses, deletes some correlations and creates others.
Because the model is anthropomorphized, and because the process resembles in some ways what biological neurons do, we call it "learning," and it can indeed improve the quality of subsequent responses.
IMO these general-purpose models are fun and sometimes very useful, but serving as an everything-bot for everyone is not the path to success.
As with most things, AI is GIGO: Garbage In, Garbage Out. It's all about the context that gets sent to the model. Often ChatGPT just does a simple web search and pastes what it finds into the prompt it sends to the LLM.
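The "search, then paste into context" pattern can be sketched roughly like this. This is a minimal illustration, not ChatGPT's actual internals (those aren't public); the helper names `web_search` and `build_prompt` are made up, and the search call is stubbed with canned snippets:

```python
def web_search(query: str) -> list[str]:
    # Stand-in for a real search API call; returns snippet strings.
    # In a real system, garbage snippets here mean garbage answers later.
    return [
        "Snippet 1: example result text for the query.",
        "Snippet 2: another result the frontend scraped.",
    ]

def build_prompt(user_question: str) -> str:
    # The "context" in question: search snippets pasted ahead of the
    # user's question before the whole thing is sent to the LLM.
    snippets = web_search(user_question)
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer using the search results below.\n"
        f"Search results:\n{context}\n\n"
        f"Question: {user_question}"
    )

print(build_prompt("What is GIGO?"))
```

The point of the sketch is that the model only ever sees the assembled string; if the retrieval step feeds it junk, no amount of model quality fixes the answer.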