Posted on 07/16/2020 12:15:38 PM PDT by ransomnote
Since February of last year, tens of thousands of patients hospitalized at one of Minnesota’s largest health systems have had their discharge planning decisions informed with help from an artificial intelligence model. But few if any of those patients have any idea about the AI involved in their care.
That’s because frontline clinicians at M Health Fairview generally don’t mention the AI whirring behind the scenes in their conversations with patients.
At a growing number of prominent hospitals and clinics around the country, clinicians are turning to AI-powered decision support tools — many of them unproven — to help predict whether hospitalized patients are likely to develop complications or deteriorate, whether they’re at risk of readmission, and whether they’re likely to die soon. But these patients and their family members are often not informed about or asked to consent to the use of these tools in their care, a STAT examination has found.
The result: Machines that are completely invisible to patients are increasingly guiding decision-making in the clinic.
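For readers wondering what one of these "models" actually is: in practice such tools are often conventional statistical classifiers trained on electronic health record data rather than anything exotic. The sketch below is purely illustrative, assuming Python with scikit-learn; the features, numbers, and the 30-day readmission target are hypothetical stand-ins, not details reported in the article.

    # Purely illustrative sketch of a hospital readmission-risk model of the
    # kind described above. All feature names and data here are hypothetical;
    # real systems train on electronic health record data, not synthetic numbers.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 5000

    # Hypothetical patient features: age, prior admissions, length of stay, comorbidities
    X = np.column_stack([
        rng.normal(65, 15, n),        # age in years
        rng.poisson(1.0, n),          # admissions in the past year
        rng.exponential(4.0, n),      # length of stay in days
        rng.poisson(2.0, n),          # comorbidity count
    ])

    # Synthetic label: 30-day readmission, loosely correlated with the features
    logits = 0.02 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2] + 0.3 * X[:, 3] - 4.0
    y = rng.random(n) < 1 / (1 + np.exp(-logits))

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # The "AI" output a clinician might see: a risk score between 0 and 1
    risk = model.predict_proba(X_test)[:, 1]
    print(f"AUROC on held-out data: {roc_auc_score(y_test, risk):.2f}")

Note that the AUROC printed at the end measures only how well the score separates outcomes on held-out data; as the article points out, accuracy of that kind is not evidence that acting on the score actually improves patient outcomes.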
Hospitals and clinicians “are operating under the assumption that you do not disclose, and that’s not really something that has been defended or really thought about,” Harvard Law School professor Glenn Cohen said. Cohen is the author of one of only a few articles examining the issue, which has received surprisingly scant attention in the medical literature even as research about AI and machine learning proliferates.
In some cases, there’s little room for harm: Patients may not need to know about an AI system that’s nudging their doctor to be more thoughtful, such as with algorithms meant to encourage clinicians to broach end-of-life conversations. But in other cases, lack of disclosure means that patients may never know what happened if an AI model makes a faulty recommendation that is part of the reason they are denied needed care or undergo an unnecessary, costly, or even harmful intervention.
That’s a real risk, because some of these AI models are fraught with bias, and even those that have been demonstrated to be accurate largely haven’t yet been shown to improve patient outcomes. Some hospitals don’t share data on how well the systems work, justifying the decision on the grounds that they are not conducting research. But that means that patients are not only being denied information about whether the tools are being used in their care, but also about whether the tools are actually helping them.
MORE AT LINK
Is this really AI? Sounds like just a computer program. There is a big difference.
Fortunately, my father is the resilient, forward-looking sort and is grateful for not having terminal cancer. And he will enjoy telling the story to other doctors as a jab at the arrogance of their profession.