Posted on 05/02/2026 6:42:33 AM PDT by Twotone
A patient shows up at the hospital with a pulmonary embolism — a blood clot that has traveled to the lungs. After initially improving, their symptoms start to worsen. The medical team suspects the medication isn't working.
In steps artificial intelligence — with its own theory.
It has scanned the medical records and suspects a history of lupus, an autoimmune condition which can lead to heart inflammation, could explain what was really ailing the patient.
Turns out, the AI model is correct.
This type of scenario could become a reality in the not-too-distant future, according to a study published Thursday in the journal Science.
Researchers based at Harvard Medical School and Beth Israel Deaconess Medical Center found that an AI reasoning model, developed by OpenAI, excelled at diagnosing patients and making decisions about managing their care. It matched and often outperformed doctors and the earlier AI model, GPT-4.
The researchers ran a series of experiments on the AI model to test its clinical acumen — including actual cases like the lupus patient who'd been previously treated at the emergency department at Beth Israel in Boston.
(Excerpt) Read more at npr.org ...
Anything that NPR is excited about is to be avoided like the Bubonic Plague.
I want a Rosey XB-500 like George Jetson had that can double as a doctor. Every house should have one and JB Pritzker should pay for it.
AI is certainly going to be a thing in medical diagnosis. It never hurts to have a second opinion.
I recently had a series of unusual symptoms: pain around my eye, pain in my teeth & a rash on my temple. I googled what the combination might mean & ended up with ‘an abscess’. I guess that was AI. I went to urgent care, & the doc decided it was likely a sinus infection. He looked at the rash & didn’t think it was shingles. A day later, that rash spread. Nope. Correct answer was shingles.
I want a Cylon robot that can make me a sammich and double as a masseuse or whatever else comes to mind...
I read your first sentence and knew it was shingles! Hope you are well.
I wouldn’t doubt that AI could outperform most of the younger doctors with less experience. Though, I can tell you that lately, my wife and I have suffered from some conditions in our old age that the younger doctors obviously didn’t know how to deal with. The younger doctors know how to handle standard diagnoses and standard protocols. What they don’t know how to do is step outside of the sandbox.
(still using tube technology in the old gal)
Grok now has knowledge that surpasses PhD level in more than 100 fields.
It is an amazing thing
I’ve gotten more help from AI than any doctor. It’s concerning that it agrees with me so much. I keep telling it not to believe everything it reads on the internet.
This was a very interesting, and VERY predictable, study.
The last battle here revolves around defining “quality”. Is quality “every box checked, every guideline followed”, or is it “get it right, rules be damned”?
In diagnosis, AI thinks like medical students do. The problem is a fact-stacking problem: if you know more facts than another student, you are “right” and you get a gold star.
Adding experience, insight, and use of the five senses to fact stacking uncovers diagnostic errors (both over- and under-diagnosis) like, every day.
What this study does not do is VALIDATE the AI diagnosis. They must have a rule: “If A, B, and C are present and D is absent, diagnosis = X”.
If only that were how to do it!
AI excels at pattern recognition
Agreed.
It will be tremendously helpful in areas where statistical models have traditionally been used for discrimination, like credit scoring. The problem will be that the current regulatory approach requires that you be able to tell a rejected applicant the top several characteristics that contributed to their being rejected based on score, and AI does not decompose its “reasoning”. So, you’ll have a better method, one that will save banks and credit unions and other lenders a lot of money, that cannot be used under current regulation. Sen. Elizabeth Warren, get the hell out of the way!
Quite.
People can still see a doctor?!
Can it wear the glove and violate you?
That has been my experience.