Posted on 01/13/2026 1:54:57 PM PST by nickcarraway
A bill in Congress confuses device clearance with medical judgment
Picture an older patient already juggling a box of daily pills: a blood thinner, a blood pressure combo, metformin, a proton pump inhibitor, and something for sleep. A new cough leads to an online symptom check. An automated artificial intelligence (AI) system flags likely bronchitis and, with a few clicks, prescribes an antibiotic. No clinician looks at the medication list. No one notices the new drug interacts with the blood thinner. The patient's next stop is the emergency department.
That scenario would come a step closer if Congress redefines who -- or what -- counts as a prescriber. The Healthy Technology Act of 2025 (H.R. 238) would amend the Federal Food, Drug, and Cosmetic Act so that AI or machine learning (ML) technology can qualify as a "practitioner licensed by law" to prescribe drugs, so long as (1) a state authorizes it and (2) the technology is approved, cleared, or authorized by FDA under device pathways -- the same ones used for X-ray software or ECG analysis apps. The bill, introduced in early 2025, is currently sitting in the House Committee on Energy and Commerce.
At first glance, that sounds modern and pragmatic: if FDA has already vetted an AI tool, and a state wants to let it work, why not let it prescribe?
Because this is a category error. FDA device authorization is an engineering judgment about a product's performance in a narrow, predefined use; a medical license is an indicator that a person can integrate context, values, and trade-offs, and be accountable for the result. Treating an algorithm as the legal equivalent of a clinician collapses the distinction between a tool and a professional -- and blurs responsibility when something goes wrong.
The technology is neither mature enough nor governed tightly enough to shoulder that role. Even the most advanced systems remain brittle under pressure. A 2025 study of "jailbreak" attacks in medical prompts found that leading large language models complied with unsafe instructions in most test cases -- up to 98% when the most effective techniques were used in simulated scenarios. Government testing bodies have reached similar conclusions: general-purpose models can be induced, with relatively simple prompts, to bypass safeguards and aid harmful tasks. If AI is itself the "prescriber," those exploits become a medication safety problem, not merely a content-moderation bug.
Even setting adversarial attacks aside, the evidence base for FDA-cleared AI is thinner than many assume. A 2025 analysis of 691 AI/ML devices cleared through 2023 found that only 1.6% reported randomized clinical trial data; most device summaries omitted basic information on study design, sample size, or patient demographics. The FDA is trying to strengthen oversight -- for example, by issuing guidance on Predetermined Change Control Plans so that algorithm updates are pre-specified, documented, and auditable across a device's life cycle. But those plans govern how a product changes, not whether a bot should be allowed to prescribe without human supervision.
Proponents will argue that autonomy would expand access, especially where clinicians are scarce. AI can indeed be a force multiplier. AI-enabled tools already help flag medication errors, detect adverse events, and monitor risk between visits. And decades of experience with clinical decision support -- the rules-based cousin of generative AI -- show it can curb dangerous prescribing. One trial in the New England Journal of Medicine found that a combined education and informatics intervention reduced high-risk antiplatelet prescribing. These are strong arguments for AI as an assistant. They are not arguments for AI as a prescriber.
In fact, the bill runs against a consensus forming among patients, clinicians, and ethicists: keep a human in the loop. The National Academy of Medicine's work on a code of conduct for health AI emphasizes oversight, transparency, equity, and responsibility -- guardrails that presume a clinician remains accountable for care decisions. The American College of Physicians has warned Congress that AI tools should "complement and not supplant" physicians, and that prescription drugs should never be ordered without physician involvement.
The stakes are not abstract. America already struggles with polypharmacy, which is associated with falls, frailty, disability, and mortality, especially in older adults. Nearly nine in 10 older Americans take at least one prescription drug, a sign of both therapeutic progress and a system primed to reach for medicines first. We have been trying, slowly, to rebalance toward deprescribing when risks outweigh benefits. Enshrining an AI "practitioner" in statute tilts in the opposite direction: it embeds the logic of doctor as prescriber while removing the doctor.
If you want to see how that could play out, look at telemedicine's hard lessons with controlled substances. During and after the COVID-19 pandemic, the Drug Enforcement Administration (DEA) and HHS repeatedly extended telemedicine flexibilities while they worked on permanent rules -- a moving target that remains complex even for licensed clinicians. Now imagine layering in a nonhuman prescriber. Who holds the DEA registration? Who signs the electronic prescription? How does a pharmacist verify identity, field clinical questions, or report suspected abuse when the "provider" is a server farm and a model card? H.R. 238 nods to state authorization and FDA clearance, but it cannot reconcile this maze of federal and state obligations; those frameworks were built for humans.
There is a better path. Instead of mistaking device clearance for a medical license, Congress should legislate how AI is used in prescribing:
Mandate human accountability. Require that AI-generated prescriptions be reviewed and co-signed by a licensed clinician responsible for the outcome. Anything less is not medical care; it is automated dispensing.

Raise the evidence bar. Condition high-risk, autonomy-adjacent functions (such as order entry) on robust evidence -- randomized trials or strong real-world studies showing improved safety or access without widening disparities.
Bake in life-cycle safeguards. Make Predetermined Change Control Plans, post-market performance monitoring, and audit logs that record prompts, model versions, and overrides table stakes for any AI used in prescribing workflows.
Hardwire polypharmacy checks and security. Require that any AI suggesting a new drug automatically checks for interactions, overall sedating or confusion-causing medication burden, kidney-adjusted dosing, and deprescribing opportunities. And because these systems can be manipulated, they should undergo independent security testing (before they ever reach patients) such as "red-team testing," where experts try to deliberately break or mislead the model. One key vulnerability is the "prompt-injection attack," in which hidden or cleverly worded instructions can trick an AI into ignoring its safety rules. Ensuring it can withstand these attacks is a matter of basic patient safety.
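To make the "hardwired checks" concrete, here is a minimal sketch of the kind of rule-based safety gate those requirements describe. All drug pairs, thresholds, and function names here are hypothetical illustrations, not a real interaction database; the point is only that such checks are deterministic and auditable, unlike a generative model's output.

```python
# Hypothetical sketch of a pre-dispense safety gate for an AI-suggested drug.
# The toy knowledge base below is illustrative only, not clinical guidance.
INTERACTIONS = {
    frozenset({"warfarin", "ciprofloxacin"}): "bleeding risk",
    frozenset({"warfarin", "aspirin"}): "bleeding risk",
}
SEDATING = {"zolpidem", "diphenhydramine", "oxycodone"}
RENALLY_CLEARED = {"metformin"}  # needs dose adjustment at low eGFR

def safety_gate(current_meds, proposed, egfr, sedation_limit=1):
    """Return a list of reasons to block the order and escalate to a human."""
    flags = []
    # 1. Drug-drug interaction check against every current medication.
    for med in current_meds:
        reason = INTERACTIONS.get(frozenset({med, proposed}))
        if reason:
            flags.append(f"interaction with {med}: {reason}")
    # 2. Cumulative sedating / confusion-causing medication burden.
    burden = len(SEDATING & (set(current_meds) | {proposed}))
    if burden > sedation_limit:
        flags.append(f"sedating burden of {burden} drugs exceeds limit")
    # 3. Kidney-adjusted dosing check.
    if proposed in RENALLY_CLEARED and egfr < 30:
        flags.append("renal dose adjustment required (eGFR < 30)")
    return flags  # non-empty -> co-signing clinician must review

# The article's opening scenario: an anticoagulated older patient
# whose AI-suggested antibiotic interacts with the blood thinner.
meds = ["warfarin", "metformin", "omeprazole", "zolpidem"]
print(safety_gate(meds, "ciprofloxacin", egfr=45))
```

A gate like this would run after the model proposes a drug and before any order is signed, with every triggered flag written to the audit log alongside the prompt and model version.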
None of this denies AI's promise. On the contrary: the most compelling use cases -- closing care gaps between visits, catching dangerous interactions, monitoring adherence, pointing out when a patient would benefit from fewer drugs -- are fully compatible with human-led practice. The FDA tracks hundreds of AI-enabled devices, and that list grows monthly; the task now is to channel that innovation toward safer prescribing, not to outsource judgment.
Congress should keep responsibility where it belongs -- with people -- and instead create legislation to demand better evidence, stronger guardrails, and fewer unnecessary pills. Anything else confuses machinery for medicine, and patients will pay for that confusion with their health.
Henry Bair, MD, MBA, is a resident physician at Wills Eye Hospital and a physician-writer focused on the intersection of health policy and patient care.
Prescription drugs have become a scam that generates thousands of dollars a year in doctor fees for patient monitoring of dubious value for most medications. AI monitoring combined with real-time physiological testing is far better than a rushed 15-minute visit with a busy doctor.
AI is not an “algorithm” like he thinks; it is machine learning that is fundamentally similar to human learning. But unlike a human doctor who might have been tired or hung over the day in med school when an obscure but deadly drug combination was discussed, AI will have perfect recall. Prescribing meds is based on pattern matching: integrating symptoms and data and weighing alternatives to select the best course of action. AI excels at pattern matching.
“Picture an older patient already juggling a box of daily pills: a blood thinner, a blood pressure combo, metformin, a proton pump inhibitor, and something for sleep.” The only thing needed is the Montelukast.
“No one notices the new drug interacts with the blood thinner.”
It might be missed by the doctor or pharmacist BUT AI WOULDN’T MISS IT!
I only saw it once and half-remember some of it, but in the movie Demon Seed (1977) with Julie Christie, from the novel by Dean Koontz, the computer changes someone's insulin dose so they will die and be out of the way -- followed by hybrid impregnation of the woman as part of a takeover plan.
As a Luddite myself, I found it a horrible scenario.
Alterations in the prescriptions in the current AI era would give opportunities for a coordinated attack.
Side note: Julie Christie was a finalist contender for the role of Honey Ryder in the James Bond film, Dr. No, “but producer Albert R. Broccoli reportedly thought her breasts were too small.”
https://en.wikipedia.org/wiki/Demon_Seed
“Inevitably, AI will prescribe most of the medicines we take.
And do it much better than humans.
Of course the pharmacists will not like it.”
The prescription would still go to the pharmacist.
“Not necessarily”
Why not? This legislation does not change who is legally allowed to dispense the drugs.