I have used ChatGPT and Grok to look up medical facts for patients I am treating, where I would normally use a textbook, journal article, or Google.
They are both wrong enough of the time that they are not safe for medical use in their current iterations.
“They are both wrong enough of the time that they are not safe for medical use in their current iterations.”
Yep, that’s exactly my experience. Based on all the errors I’ve seen, I cannot trust the information Grok provides. If I want to use what it returns, I have to scrutinize it very carefully.
But medical use is a whole different level of concern.
That Excel error was interesting. I had a complex formula to parse some text strings into numbers. It did the first half of the string correctly, but an error crept into the second half. I kept trying to get it to fix the problem (which I had not yet diagnosed myself), but it couldn't do it. I started a new session, fed the same problem in, and it solved it correctly. It seems that, once it has made an error within a session, it will stick with that error for the rest of that session. I finally worked through the problem and found the error myself. In the end, fooling around with Grok probably took twice as long as it would have taken me to write the formula on my own.
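(To give a sense of the kind of formula I mean, here's a made-up example, not my actual data: pulling the 4521 out of a cell containing "Order #4521 shipped" might look like =VALUE(MID(A1, FIND("#", A1)+1, FIND(" ", A1, FIND("#", A1))-FIND("#", A1)-1)). Once you have three or four nested FINDs like that, a single wrong offset is exactly the sort of error that's easy for an AI to introduce and hard for a human to spot.)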
That makes me worry when I read how great Grok is at writing software.
I remember when the PDR (Physicians' Desk Reference) was on nearly every desk at hospitals.
I wonder if that is still a thing, or if looking up facts about pharmaceuticals is now done mainly with AI systems on smartphones...
That paper textbook can't be changed on some anonymous stranger's whim without your knowledge. The same cannot be said of electronic resources.