LOL! Did you see these?
I don't use Gemini, but I've had a goodly amount of experience with ChatGPT, Claude, and Perplexity. Those platforms are basically logical machines whose "thinking" can be steered by the questions they're asked and by how those questions are structured (e.g., implied bias or the careful lack thereof).
As I think I mentioned to you in another comment, I am very careful not to ask leading questions or questions that indicate any bias. I have noticed over time that an AI platform's objectivity gets tainted by the slightest implicit bias, whether in a single question or in a string of questions that, though individually unbiased, taken as a whole give an ever-so-slight appearance of bias.
Here's what ChatGPT said about this "Human, please die" story:
The "Please die, human" story raises questions about user influence and context. AI responses depend heavily on input, and it’s possible the student or earlier interactions guided the chatbot toward this disturbing output, intentionally or not.The sensational nature of the story suggests potential exaggeration for shock value or clickbait.
Without full conversation context, it’s unclear if the AI's response was entirely unprovoked. While Google’s safety measures are critical, the user’s role in steering the interaction shouldn't be overlooked. This highlights the need for transparency and better safeguards in AI systems.