Posted on 02/18/2024 6:02:04 PM PST by Red Badger
"THE CART IS SO FAR AHEAD OF THE HORSE, IT’S LIKE HOW DO WE REIN IT BACK IN WITHOUT CAREENING OVER THE RAVINE?"
You may remember a series of lawyers who have attempted to use AI tools in court — and were subsequently embarrassed and sanctioned when the chatbots screwed up, sometimes even inventing plausible-sounding cases that didn't actually exist.
So consider this: how would you feel if your doctor did the same thing, feeding your symptoms into an AI system to diagnose what's wrong with you?
That's a looming question that, as Politico reports in a fascinating story, is currently stressing out regulators. And it has an alarming immediacy: according to Politico's reporting, doctors are already using unregulated and little-tested AI tools to aid in diagnosing patients. So this isn't some hypothetical conversation about a far-off future, but an already-happening-right-now phenomenon that could well be just one malpractice suit away from becoming a major medical and regulatory scandal.
"The cart is so far ahead of the horse, it’s like how do we rein it back in without careening over the ravine?" University of California public health researcher San Diego John Ayers asked Politico.
The obvious answer is that the tech needs regulation, a concept that's got nominal buy-in from every stakeholder from the White House to OpenAI.
The problem, of course, is that actually doing so is way more easily said than done. As Politico points out, one key issue is that most medical products — think pharmaceuticals, surgery equipment, or other healthcare devices — can be approved once and generally trusted to keep working the same way for an indefinite period.
Not so with AI models, which are constantly in flux as their creators tweak them and add more data, meaning that an AI that gives a perfectly fine diagnosis one day might give a poor one after routine changes. And remember that a core reality of machine learning systems is that even their creators struggle to explain exactly how they work.
Government regulators like the FDA, Politico points out, are already stretched to the breaking point. Asking them to create and maintain workflows to test medical AI systems on an ongoing basis would require politically impossible amounts of funding. So if these AI systems are already making inroads into regular medical practice, who's going to watch over them?
One idea, the outlet reports, is that medical schools and academic health centers could create labs that would constantly audit the performance of AI health care tools.
But even that idea involves a bit of hand-waving. Where would all those resources come from? And would the interactions between the patient populations at those mostly urban and affluent institutions accurately reflect the way AI would work in different and more challenged communities?
It's possible, in the long view, that AI could turn into an incredible boon for the medical system. Tech leaders certainly love to lean into that possibility; OpenAI CEO Sam Altman, for instance, has publicly mused that future AI could provide high-quality medical advice to people who can't afford doctors.
Here in the present, though, the messy forays of AI into the medical system highlight just how uncomfortable certain realities of the tech are going to be — even in settings that are literally life or death.
“Expert systems” have helped with diagnoses for many years now.
Something I have heard quite often is that people who go into medicine want to work with patients, want to heal people, want to actually do “doctoring”. However, our modern system involves huge amounts of paperwork, for insurance, for affiliated hospitals, for protection against malpractice. It’s just forms, forms, forms. So doctors don’t have time to keep up with the latest advances and sometimes don’t have a clear idea of what they’re doing, but if Big Pharma says “give them this pill”, then the doctor just “gives them the pill” and sends the patient on their way.
And now the “doctors” are a further step removed from anything like “doctoring”. Let’s have the AI diagnose things — I have these forms to fill out. The forms are more important than the patient, after all.
All of our institutions are badly broken.
This diagnosis brought to you by Burger King.
This is SOP among the DEI crowd.
After COVID this isn’t sounding so bad.
I wonder how good a liar AI is? It might go all HAL9000 on the government if they gave it the COVID job.
It is because the regulators, wholly owned and operated by monied interests intent on keeping medicine as expensive as possible, can’t control it. Maybe a diagnosis can be had in minutes rather than two dozen appointments and expensive tests.
The sad part is, used responsibly, this might be helpful for some doctors and patients - especially those with difficult or unusual symptoms and conditions. Doctors don’t know everything, after all. But we mistrust it because we know there will be charlatans out there who won’t use it responsibly.
Diagnosis of even relatively simple systems can be a challenge [Just ask my furnace guy the other day].
Getting a correct diagnosis is valuable in multiple dimensions. [The furnace guy eventually got it totally right on the first trip out to the compound and deserves every penny he earned.]
The human body is not simple, in theory or in practice.
As long as the AI diagnosis is not mandatory [CDC, I’m talkin’ ‘bout jou...], I’m open to a 2nd opinion.
Software for EKGs and CTs already spits out a dx, usually the worst-case scenario. It is up to the doctor to take the whole patient and evidence into context to make the correct diagnosis.
And one of those ‘expert systems’ damn near got me killed.
PA ran a 12 lead and used a computer to analyze the results.
“You’ve had a heart attack,” he said, “and you have...”
I asked to see the tape.
“What’s this line of steady pulses? Is it an artifact of the system?” I’d worked in an ER long enough to at least know what a 12-lead strip should look like.
Embarrassed the PA bad enough that I got sent to a cardiologist...and got a lot of chatter. The echocardiogram showed nothing, really. “You need more testing”....
Two days later in the ER, the Doc told me I was lucky - my lungs were so filled with pulmonary emboli (blood clots) that had I gone to bed that night instead of going to the ER, I would have died in my sleep.
(Yes I got the Now Banned J&J clot shot)
Another NDE thanks to a specialist w/tunnel vision and a PA that was in waaay over their head....
Fast food & pornography will sponsor medications soon enough...
I like the concept of AI as a “help only” in the diagnosis.
1. Dr. makes his best diagnosis first.
2. Dr. then runs it through AI, and if AI agrees it is “probably” right.
3. If AI disagrees, the Dr. then reevaluates and goes with the AI diagnosis or his first diagnosis, or considers more tests.
4. Dr. makes the final decision, not AI! (A rough sketch of this flow is below.)
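Purely as illustration, here is a minimal Python sketch of that workflow. Everything in it is a hypothetical placeholder: get_ai_diagnosis() stands in for whatever model a real system would call, and the diagnosis strings are made up.

def get_ai_diagnosis(symptoms: list[str]) -> str:
    # Stand-in for an AI model call; a real system would query one here.
    return "pulmonary embolism" if "shortness of breath" in symptoms else "anxiety"

def diagnose(symptoms: list[str], doctor_dx: str, order_more_tests: bool = False) -> str:
    # Step 1: the doctor's diagnosis (doctor_dx) was made before the AI is consulted.
    # Step 2: run it through the AI as a cross-check.
    ai_dx = get_ai_diagnosis(symptoms)
    if ai_dx == doctor_dx:
        return doctor_dx  # agreement: the diagnosis is "probably" right

    # Step 3: disagreement forces a human reevaluation: keep the first
    # diagnosis, adopt the AI's, or order more tests (here a simple flag).
    if order_more_tests:
        return "inconclusive: order more tests"

    # Step 4: the doctor, not the AI, makes the final call.
    return doctor_dx

print(diagnose(["shortness of breath", "chest pain"], "heart attack"))
print(diagnose(["shortness of breath"], "heart attack", order_more_tests=True))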
Having an AI make medical diagnoses creates a new dilemma.
If you have a DEI doctor and an AI, who is more likely to make the correct diagnosis?
Medical school should teach social justice and how to input data to HippocrAItes.
“The human body is not simple, in theory or in practice.”
No, but PARTS are well known.
Today’s knee replacement surgery is regularly done by robots...
We are getting close to ‘Star Trek’ medical technology...............