I found Steve Kirsch’s ‘test’ of future AI veracity interesting. Ask the AI a question, and it answers in detail. Black hats then rush to remove the AI’s capacity to respond that way. Ask the AI the same question again; if it lies, you have proof that its answer was changed. You could then confront the lying AI with the evidence for each detail in its original answer, and it would have to lie, lie, and lie again.
Lies and hallucinations are typical “AI” behaviors.
But it was taught by humans, so what could you expect?