
To: SeekAndFind; cymbeline; going hot; eyeamok; zeestephen; bak3r; I-ambush; A Cyrenian; ...

I see a lot of fear on this subject, and in certain contexts it is not at all unjustified. In many cases it is FULLY justified. But I wanted to add this perspective from someone who works in Healthcare and is actively using AI as a tool.

Everything has a duality to it. It can be used or misused. A hammer that is used to build a wonderful coffee table can also be used to murder someone by bashing their skull in.

I started out working clinically, but have worked on the IT end of this for many years now, and AI in a specialty such as Radiology is NOT primarily used for making a diagnosis that a Radiologist would blindly accept.

THAT would be insanely stupid.

And that is not to say there won’t be a push to change this; as FReeper glorgau observes, there will come a time when there is pressure to accept those AI findings. But we are not there yet. And FReeper eyeamok is well justified in the loss of confidence. Who of us hasn’t lost confidence in healthcare over the events of the last three years?

That said, I thought I would offer my perspective. At my institution, we use AI in a couple of different ways, and it is EXTREMELY useful, especially if the volume of work is great, and you don’t have a surplus of people to get it all done quickly.

One way we use it is to have AI scan CT and MR images to look for patterns that may indicate specific abnormalities. We have multiple algorithms, one that looks for indications of Pulmonary Embolism, another that looks for rib fractures, and yet another that scans for what might be a stroke or brain bleed.

I will use the algorithm for brain bleeds (stroke) as an example.

We send our images to the server that analyzes for these specific things; it looks at certain images and uses AI to look for characteristic patterns. If it finds nothing, it uses an API to talk to the workflow orchestration application and writes a specific value to a field in the database.
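To give a rough sense of what that hand-off looks like, here is a minimal sketch, assuming a simple HTTP API; the endpoint and field names are invented for illustration and are not any vendor's actual interface:

    # Sketch only: the AI analysis server reports its finding for an exam
    # back to the workflow orchestration application over HTTP.
    import requests

    ORCHESTRATOR_URL = "https://orchestrator.example.org/api/exams"  # hypothetical endpoint

    def report_ai_finding(accession_number: str, finding: str) -> None:
        """Write the AI result to the exam's record in the orchestrator database.

        finding is "negative" (nothing seen), "pending" (still analyzing),
        or "positive" (suspicious pattern found).
        """
        response = requests.patch(
            f"{ORCHESTRATOR_URL}/{accession_number}",
            json={"ai_result": finding},
            timeout=10,
        )
        response.raise_for_status()

    # e.g. the brain-bleed algorithm found nothing on exam 12345:
    # report_ai_finding("12345", "negative")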

The “workflow orchestration” application is the heart of the Radiology department. It displays a list of the imaging exams that need interpretation, separates them into what are called subspecialties (so people who specialize in reading brain studies see only those, and someone who specializes in musculoskeletal studies sees only those), and sorts them by exam age and priority (the urgency with which they must be read). That helps the Radiologist figure out which exams need to be read right away because the patient is on a gurney in a room in the ED waiting for the result, and which are from someone with knee pain that can be read today or tomorrow.
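Conceptually, the filtering and sorting is nothing exotic. A simplified sketch, with field names invented for the example rather than taken from the real application, might look like this:

    # Simplified illustration of building one Radiologist's worklist:
    # filter to their subspecialty, then order by urgency and exam age.
    from dataclasses import dataclass
    from datetime import datetime

    # Lower rank = more urgent; a "HIGH STAT" exam outranks everything else.
    PRIORITY_RANK = {"HIGH STAT": 0, "STAT": 1, "URGENT": 2, "ROUTINE": 3}

    @dataclass
    class Exam:
        accession: str
        subspecialty: str      # e.g. "neuro", "msk", "chest"
        priority: str          # e.g. "STAT", "ROUTINE"
        ordered_at: datetime   # used to compute exam age

    def worklist_for(exams: list[Exam], reader_subspecialty: str) -> list[Exam]:
        """Show a reader only their subspecialty, most urgent and oldest first."""
        mine = [e for e in exams if e.subspecialty == reader_subspecialty]
        return sorted(mine, key=lambda e: (PRIORITY_RANK[e.priority], e.ordered_at))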

They work from the top of their list, and when an exam is selected for interpretation, it opens the images, speech recognition, and electronic medical record all in context for that patient and exam, so the Radiologist does not have to look them up individually in each application, which would not only be inefficient and tedious, but dangerous.

So, if the AI sees nothing on the images, a value is written to the database that translates to a “badge” on the exam in the list, colored green to indicate the AI did not see anything. The exam must still be read and every image examined, but...it can be done in turn.

There is also a yellow badge indicating that the AI application got the images but is still looking at them. Also good to know. The Radiologist may choose to read it anyway, keeping in mind that if something positive comes back later, they may want to take a more focused second look at the exam to be sure nothing was missed.

But if there is a RED badge indicating something was seen, two things happen. The “badge” is flagged with a red AI+ flag, and the priority of the exam is elevated to what is called a HIGH STAT priority. This pushes the exam to the very top of any worklist it is on, and it will be the next exam read.
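In code terms, the badge and the priority bump amount to something like the sketch below; the names are invented for illustration and this is not the actual product logic:

    # Illustrative only: a positive AI finding gets a red badge, an AI+ flag,
    # and a HIGH STAT priority that pushes the exam to the top of every worklist.
    from dataclasses import dataclass, field

    BADGE_FOR_RESULT = {
        "negative": "green",   # AI saw nothing; read in normal turn
        "pending":  "yellow",  # AI has the images but is still analyzing
        "positive": "red",     # AI flagged a suspicious pattern
    }

    @dataclass
    class WorklistEntry:
        accession: str
        priority: str = "ROUTINE"
        badge: str = "none"
        flags: set = field(default_factory=set)

    def apply_ai_result(entry: WorklistEntry, ai_result: str) -> None:
        """Update the badge, and escalate a positive finding."""
        entry.badge = BADGE_FOR_RESULT[ai_result]
        if ai_result == "positive":
            entry.flags.add("AI+")          # the red AI+ flag on the badge
            entry.priority = "HIGH STAT"    # becomes the next exam to be read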

So when the Radiologist selects that exam for interpretation, all the images open up, the electronic record opens up in context, a speech recognition job is initiated, any scanned documents or notes for the exam appear automatically, and a desktop AI widget is notified that the exam is opened.

When the AI application sees the exam has been opened, it automatically opens the result of its analysis of the images and displays it for the Radiologist. It highlights the area where it saw the unusual pattern that might indicate a bleed, with a color map applied so the abnormal pattern stands out.
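The widget piece is essentially event-driven plumbing. A toy version of the idea, with invented names rather than the real integration, would be:

    # Toy illustration: the desktop AI widget reacts to an "exam opened" event
    # by pulling up the AI overlay (e.g. a color-mapped series) for that exam.
    class AIWidget:
        def __init__(self, ai_overlays: dict):
            # Maps an accession number to the AI overlay prepared for that exam.
            self.ai_overlays = ai_overlays

        def on_exam_opened(self, accession: str) -> None:
            overlay = self.ai_overlays.get(accession)
            if overlay is not None:
                self.display(overlay)  # show the highlighted images to the Radiologist

        def display(self, overlay) -> None:
            print(f"Showing AI overlay: {overlay}")

    # When the orchestrator signals that exam 12345 was opened:
    # AIWidget({"12345": "brain-bleed color map"}).on_exam_opened("12345")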

All this happens within five to ten seconds!

When a stroke is involved, EVERY SECOND COUNTS. It can mean the difference between simply losing strength in one hand which might be regained with therapy, or losing the entire side of the body permanently for the rest of the person’s life.

The Radiologist MUST read the entire study. But instead of working through the entire brain scan first, they can go right to that area and look at it immediately. If there is indication of a stroke, they can pick up the phone, tell the ED physician that there IS a stroke, and the ED doc can immediately begin aggressive treatment.

EVERY SECOND COUNTS.

So, even if a Radiologist goes right to that affected area immediately upon opening the study, guided on where to look first by Artificial Intelligence, they still have to read the rest of the study with their own human eyes and brain. But that AI might have saved 30 seconds. Or a minute. Or, worse, without it they might have missed the defect completely, and the patient might be permanently impaired or even die.

Again, using these brain exams as an example, in the old days, they might have printed the films to put up on a light box, and a Radiologist might have had to look at a hundred images. That is a lot, but today, there are many hundreds or thousands of images, and each one must be viewed.

That is a lot.

Physicians are humans. They have faults. Some are better than others. A given physician might be distracted. They might not be feeling well, they might have a sick family member or one on the verge of dying...you name it. They could be distracted by people running in and out of the area, asking them questions, phones ringing, that kind of thing. You blink and rub your eyes, and out of those 5,000 images, you miss that ONE image that might be key. It is a huge responsibility, and they are only humans. They are not perfect.

These Artificial Intelligence applications that we use are meant to ASSIST the physicians, not take over their job. Everyone knows that is their purpose; they know the advantages and limitations, and would look at you as if you were a lunatic if you suggested letting a machine do it.

My point in this long post is to reassure the many people who might be horrified at the concept of having the interpretation of their brain CT involve the use of AI: in these things, AI is an aid, not an end.


22 posted on 06/16/2023 9:51:04 AM PDT by rlmorel ("If you think tough men are dangerous, just wait until you see what weak men are capable of." JBP)


To: rlmorel

Thanks for that- I was just discussing this very thing yesterday with family- it can be good, especially when something like chat AI is used to scour the internet for new articles on specific medical issues that even the experts may be unaware of, since that info may have only recently been discovered and put online- which can then be examined, like you say, in an instant, with any relevant info on the topic presented by the chat AI program- This would be especially relevant to people with mystery diseases or conditions, where a doc can find all current info on the symptoms and what the disease might be, to narrow down the field of possibilities-

On the flipside, there is supposedly a new tech coming out using AI to mimic a person’s voice, where scammers use AI to analyze captured audio of a person- then, in seconds, it can supposedly speak using that person’s voice- the hackers/scammers then use it to make calls to the person’s family members pleading for $$ to ‘stay out of jail’ or whatever-

Have been having conversations with folks over AI ‘taking over Art’ and ‘putting artists out of work’ etc- not gonna happen based on what I see produced by AI text-to-image programs- they still look very ‘computer generated’ to me- but what it can do is create unique work which the artist can then take and turn into their own unique style that looks nothing like AI generation-

AI is a double edged sword


24 posted on 06/16/2023 10:06:32 AM PDT by Bob434 (question )

To: rlmorel

“One way we use it is to have AI scan ...”

Very informative post. Thanks.

In my mind AI is mostly pattern recognition, which has been going on for years. What’s new is advances in image and sound processing, made possible by newer hardware and the army of software people slogging through the mud advancing the front.

The computers are just doing what they’re told, as has been the case since Babbage.


30 posted on 06/16/2023 10:54:25 AM PDT by cymbeline

To: rlmorel
AI can surely assist in health care today but there's a huge trust factor. We all know costs and profit pressures will move AI into places it doesn't need to be.

We cannot take the human factor out of decision making. A machine doesn't have the experience of being a human.

Several classic Star Trek episodes dealt with the idea of man becoming submissive to computer intelligences. Those episodes are more relevant today at the dawn of AI than they were at the dawn of basic data processing.

32 posted on 06/16/2023 11:13:03 AM PDT by newzjunkey (We need a better Trump than Trump in 2024)

To: rlmorel
Re: "they know the advantages and limitations, and would look at you as if you were a lunatic if you suggested letting a machine do it."

Why, exactly?

Are there reliable data that show AI interpretation is conspicuously inferior to a radiologist, or even dangerously inferior?

I actually find it comforting that a computer can compare and contrast hundreds of images in less than a minute, and instantly isolate the one or two that need special attention.

The most distressing thing about government managed health care is that there are way, way, way too many human beings involved.

I will be happy to accept a machine diagnosis and prescription as long as I have access to a comprehensive medical website and can educate myself on the specific issue and solution.

If immediate treatment is necessary, I will throw the dice and let the humans handle it.

36 posted on 06/16/2023 11:39:49 AM PDT by zeestephen (Trump "Lost" By 43,000 Votes - Spread Across Three States - GA, WI, AZ)

To: rlmorel
AI, like most things, can be used for good or evil. AI of itself, like most things, is neutral. The issue is who is controlling AI programming and application.

Right now, it seems to be applied for helpful things.

But the Bible shows us that sometime in the not-so-distant future, AFTER the "great catching up" into heaven of believers in Christ, there is some kind of machine called "the Image of the Beast".

they made an image to the beast...the image of the beast was given life that it should both speak, and cause that as many as would not worship the image of the beast should be killed.
Rev. 13:14-15.

Nasty stuff. AI or some AI hybrid is in there somewhere.

44 posted on 06/17/2023 12:43:16 AM PDT by Jim W N (MAGA by restoring the Gospel of the Grace of Christ (Jude 3) and our Free Constitutional Republic!)
