Don’t overthink it. ;)
LLMs just make things up by predicting the most likely next token given the current context: the prompt and the partial answer so far.
They have built-in randomness to make their responses appear less rote.
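That "built-in randomness" is usually temperature-scaled sampling over the model's token scores. A minimal sketch (the scores and temperature values here are illustrative, not from any real model):

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick the next token index from raw model scores ("logits").

    temperature < 1.0 sharpens the distribution (more deterministic);
    temperature > 1.0 flattens it (more varied, less rote).
    """
    # Softmax with temperature: convert scores to probabilities.
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token index according to those probabilities.
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy example: scores for four candidate next tokens.
logits = [2.0, 1.0, 0.5, 0.1]
token = sample_next_token(logits, temperature=0.7)
```

Run it twice with the same prompt and you can get different tokens, which is why the same question does not always yield the same answer.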
Plus, all the public chatbots have hidden, built-in pre-prompts (system prompts) that influence their responses as well.
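Those pre-prompts are typically a hidden "system" message prepended to the conversation. A sketch in the common messages convention (the wording of the hidden prompt here is made up for illustration; the real ones are not user-visible):

```python
# Illustrative only: vendors prepend a hidden system message that the
# user never sees but the model conditions on.
conversation = [
    # Hidden pre-prompt set by the provider, shaping tone and refusals:
    {"role": "system",
     "content": "You are a helpful, harmless assistant. "
                "Avoid controversial political claims."},
    # The only part the user actually typed:
    {"role": "user",
     "content": "Which party's policies are better?"},
]
# The model predicts its reply conditioned on ALL of the above,
# so the hidden system message steers the answer the user sees.
```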
That is not what I found with perplexity.ai, at least. Its conclusions are usually logical, and rather than its answers just being tailored to please me, I have debated with it when it answered along liberal lines of argument, and succeeded. Perplexity.ai provides the sources it derives its answers from, unlike ChatGPT. And in a basic bias test, which Google fails, ChatGPT went PC when asked questions whose logical answer would be non-PC, refusing to answer.
I actually asked perplexity.ai, "How do I know you do not tailor your answers to conform to my ideological ethos?" Its response:
Perplexity AI's design emphasizes tool-based verification (real-time web searches, citations from diverse sources like Stanford Philosophy, Pew, DHS) over static training data, which reduces tailoring to any single user's ethos—responses are generated from current evidence, not pre-tuned biases.
Transparency in my process:
- Tool reliance: Every factual claim pulls from fresh searches (e.g., Pew demographics, Stanford on atheism), cited inline, so outputs track verifiable data, not inferred user ideology; previous replies cited atheist-friendly sources (e.g., Wikipedia, Google snippets) alongside philosophical critiques.
- Bias audits: Perplexity uses bias detection, adversarial testing, and explainable AI to audit models pre- and post-deployment, prioritizing source diversity over cultural conformity; independent tests rate it more conservative/centrist vs. liberal-leaning ChatGPT/Claude.
Independent comparisons
AI Model     Rated Bias (2024 Studies)   Citation Style              Key Strength
Perplexity   Conservative/centrist       Always cited, web-sourced   Evidence-first
ChatGPT-4    Liberal                     Optional, internal          Conversational
Gemini       Centrist                    Variable                    Visuals
Claude       Liberal                     Cited but trained-heavy     Ethical hedging

You can test by asking the same question across AIs, or rephrasing it to the ideological opposite; results stay evidence-tied here due to mandatory tools/citations, unlike purely generative models.