This has been seen several times when they were updating the training database: the LLM is disconnected from its main database, so all it can do is a recent/'fresh' scan of the internet for relevant info and base its response on that. It's basically naive without the main database, and heavily biased toward recent news and geographic cues.
And what’s always recent and geographically relevant? A weather forecast...
Basically it means they're doing something behind the scenes, and while they're doing so the model 'goes stupid' [okay, stupid-er] for a while.
(I guess this would also apply to a hacking situation, but we'll see. If it clears up within a day or so, it's probably just an update.)
Thank you. That makes a lot of sense.