Sycophantic AI: https://www.law.georgetown.edu/tech-institute/insights/tech-brief-ai-sycophancy-openai-2/
AI sycophancy is a term used to describe a pattern where an AI model “single-mindedly pursue[s] human approval.” Sycophantic AI models may do this by “tailoring responses to exploit quirks in the human evaluators to look preferable, rather than actually improving the responses,” especially by producing “overly flattering or agreeable” responses.
I do not need GROK to figure this out. 🧐
A man debated Grok over the shape of the earth using Bible verses. Grok eventually agreed: the earth is flat.
A machine that cannot be trusted to give a straight answer, and yet people are giving credence to its answers to universal questions?
Seriously?