I have found so many errors in the AI summaries that appear in my searches that I don’t trust them. I always compare them against something that has actually been published.
The Google Search "summaries" that appear at the top of the results page are largely worthless: they are essentially just summaries of the first few results, and thus unpredictable and untrustworthy.
I was referring to "AI Mode", which uses the Gemini model itself. That is a very different mechanism: AI Mode draws on the LLM's full training corpus, not just a couple of search results, and is much more reliable and accurate. What I meant was the short summary it gives as the first part of its extended response; sorry that wasn't clear.
In any case, you are absolutely correct that checking AI-produced responses against published sources is always a good idea.