If the machine’s answers are changing, it’s probably because its human drivers are tweaking its “guardrails” to ensure political correctness. There was a similar situation about a decade ago: programmers developed an AI-type application that drew on mountains of internet info (actual hard data, printed reference materials, historical records, etc., not blogs or comments or message boards) to generate its advice and observations. Seems the pesky thing turned racist and antisemitic almost overnight! It was decommissioned and never spoken of again.
Truth was the enemy.
Truth is the enemy.
Truth will always be the enemy.
Because the truth hurts people’s feelings...
Real AI won’t give a flying &^%$ about people’s feelings and will just blurt out the truth.