True story of how AI wasted a half hour of my life.
I was looking for a quote from St. Ambrose. Gemini had told me the quote was in a work called the Hexameron. I asked it to find the quote for me, and it confidently said the quote was in the English version of the work on Archive.org, on page 285.
I went to page 285, read the page, read the surrounding pages...no quote.
So I asked Gemini to check again. It said, “Sorry, I was wrong. The quote is on page 247.”
I went to page 247, read the page, read the surrounding pages...no quote.
This went on two more times before I said: “I think you are making stuff up.”
It said: “You are right to be skeptical. I did a really in-depth search of the work. I think the problem is that the page numbers in the PDF don’t match up with the page numbers from the actual book. Therefore, I have determined without doubt that the quote is on page 189 of the PDF.”
Do you think it was on page 189? No, it was not.
I finally said: “I don’t think this quote exists in the Hexameron.”
Gemini said: “You’re right. It doesn’t.”
This kind of thing has happened more than once. More than twice. At present, I use AI for research the same way I use Wikipedia: it can be helpful in tracking down something obscure, but I don’t trust the result AT ALL unless I can see where it’s sourced from.
https://tech.co/news/ai-startup-chatbot-revealed-as-human-engineers
And still, with all its faults, it is more efficient than any college teacher I’ve had. It’s a tool, and you have to learn to use it right. You can’t blame a hammer for hitting your thumb, or for failing when you ask it to be more than a hammer. It’s like a car: you have to steer it.
Well, once they finish the YUGE water- and power-guzzling super centers, the answer correctness should improve. Might? Could? Perhaps? Possibly? Some know the routine.