AMA President Talks About the Future of AI Tools in Healthcare

AI is a hot topic in the healthcare industry today, but many questions remain about the technology. Do clinicians understand and trust AI tools? Are the algorithms accurate and free of bias? What guardrails are in place to protect patient privacy?

At the 2024 SXSW conference, this technology received a lot of attention throughout the sessions. A particularly well-attended panel on the future of AI in healthcare saw panelists discuss everything from concerns to possibilities to what’s needed for safer, more integrated, and more impactful systems.

Claire Novorol, MD, co-founder and chief medical officer of Ada Health, emphasized that many general-purpose AI tools are not trained on medical data. While these have meaningful potential, they also come with more risks, she said.

Ada, a consumer-facing clinical assessment app, is more than just a management AI tool; it is designed to support clinical decision making.

“That means we need to be very focused on safety and quality,” Novorol said during the panel discussion. “It’s a big mountain to climb, and it takes years of dedicated focus.”

Dr. Jesse Ehrenfeld, president of the American Medical Association (AMA), warned that HIPAA applies only to covered entities and excludes private companies. One company, which he didn’t name, claims its app is HIPAA compliant; Ehrenfeld pointed out that this is completely misleading because the company is not bound by HIPAA in the first place.

“Companies shouldn’t do that kind of thing, but unfortunately we’re starting to see more examples of it on the consumer side,” he said during a panel discussion.

Nevertheless, patients are increasingly excited about AI, Dr. Alex Stinard, regional medical director for Envision Healthcare, said during the panel discussion. When Stinard, who specializes in computer vision and large language models, first wore Google Glass to examine a patient a few years ago, many were puzzled by it.

Now, when he tells people he’s using tools like an AI scribe, he gets a lot of excitement from patients and doctors. “With the conversation about AI permeating the community, everyone is really excited about it,” Stinard said during the panel discussion. “Now that it’s part of their lives, they understand its true value.”

After a panel discussion about benchmarking AI tools, the challenges of maintaining them, and how to be transparent with patients, Fierce Healthcare spoke with AMA’s Ehrenfeld.

This interview has been edited and condensed for clarity.


Fierce Healthcare: The Open Source Imaging Consortium’s data repository, built in collaboration with Microsoft and PwC, collects anonymized medical images to predict the progression of rare diseases. What they’ve told me in the past is that it’s difficult to get providers to share that kind of data. What do you think about the need for such collaboration?

Dr. Jesse Ehrenfeld: There’s a demand for it. One of the challenges is that there are many consortia, centers and industry players trying to do the same thing. And it becomes very difficult to actually understand the effectiveness of a particular tool.

If you had a framework that solved that problem, with a real source of truth in an unbiased, representative dataset, that would be very helpful. It doesn’t exist today. Could someone build that? Perhaps. Does the federal government have a role to play? I don’t know.

But as the FDA develops a regulatory framework for these tools, understanding what they are benchmarked against will be very important for knowing how well they work for clinicians and patients alike.

Basically, the question you asked, what these things are being tested against, is a very important one, as is how you get data into those systems. There have been concerns about how the data would be monetized, and considerable negative publicity when patient data is sold without the patient’s explicit consent. So there’s a lot of work to be done, not only to develop these tools but to build a data repository where you can actually run the benchmarks.

FH: Let’s say a healthcare organization is implementing an AI tool. How should it think about collecting data on how the tool is impacting patient safety, or simply gathering user feedback?

JE: I’ve seen this in several health systems I’ve worked in. It’s very easy to build an algorithm or a clinical decision support popup that reminds you to do something, and then it gets built and forgotten, and it just keeps running forever. We cannot allow that to happen. I’ve built software that was installed and worked, and then circumstances changed and the algorithms drifted or stopped working. Unless you are intentional about monitoring the performance of these tools over time, they will degrade.

A lab system changed, the label on one system’s glucose value changed, and everything we had built broke. For three weeks, no one noticed. Examples like this happen all the time in the mundane world of clinical decision support development, which isn’t even AI.

Imagine this problem spreading across AI tools. It takes intention and some diligence to ensure that you understand how these tools will hold up over time as they are deployed in the real world.
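To make the monitoring point concrete, here is a minimal, hypothetical sketch of how a health system’s engineering team might watch a clinical decision support feed for the kind of silent breakage Ehrenfeld describes, such as an upstream glucose label changing. This is not a tool he or the AMA describes; the field names, thresholds, and structure are illustrative assumptions only.

```python
# Minimal sketch (hypothetical): detect silent breakage in a clinical decision
# support (CDS) feed, e.g. an upstream lab renaming its glucose result field.
# Field names ("glucose_mg_dl"), thresholds, and stats are illustrative
# assumptions, not any real system's schema.

from dataclasses import dataclass


@dataclass
class FeedStats:
    total_messages: int          # lab messages seen in the monitoring window
    matched_glucose_fields: int  # messages where the expected glucose label was found
    cds_alerts_fired: int        # CDS popups triggered in the same window


EXPECTED_FIELD = "glucose_mg_dl"   # label the CDS rule was built against (assumed)
MIN_MATCH_RATE = 0.95              # below this, the upstream label has probably changed
MIN_ALERT_RATE = 0.001             # alerts per message; near zero suggests the rule died


def check_feed_health(stats: FeedStats) -> list[str]:
    """Return human-readable warnings when the CDS pipeline looks silently broken."""
    warnings = []
    if stats.total_messages == 0:
        return ["No lab messages received; the feed itself may be down."]

    match_rate = stats.matched_glucose_fields / stats.total_messages
    alert_rate = stats.cds_alerts_fired / stats.total_messages

    if match_rate < MIN_MATCH_RATE:
        warnings.append(
            f"Only {match_rate:.0%} of messages contain '{EXPECTED_FIELD}'; "
            "the upstream lab system may have renamed the field."
        )
    if alert_rate < MIN_ALERT_RATE:
        warnings.append("CDS alert rate is near zero; the rule may have stopped firing.")
    return warnings


if __name__ == "__main__":
    # Example: a window in which the lab renamed the glucose field mid-stream.
    week = FeedStats(total_messages=12_000, matched_glucose_fields=4_800, cds_alerts_fired=3)
    for w in check_feed_health(week):
        print("WARNING:", w)
```

The point of a check like this is simply that someone is looking: a scheduled job comparing expected field labels and alert rates against a baseline would have surfaced the broken glucose rule in hours rather than weeks.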

FH: Do you think that responsibility lies with the provider’s in-house technology team or with the company offering the product?

JE: I think it depends on who created the tool. The performance of most of these tools is determined by the conditions in which they are deployed, which are often outside the company’s control. It takes a lot of resources, time, money, and energy to deploy something, and doing the work to understand how it actually performs afterward is just as resource-intensive.

FH: You seem to be saying that many of the tools that end up being implemented are not tracked and may therefore be ineffective in the long run.

JE: I don’t think we know. There’s nothing worse than seeing pointless pop-ups and alerts; it drives clinicians crazy, and we already have too many of them. I remember when a facility I used to work at switched electronic medical record vendors. There were thousands of rules running in the background, and no one was cataloging them.

No one really knew what was going on until we started trying to figure out what we actually needed to rebuild and carry forward. There was a lot of junk in there, and clearly no one had ever gone through it before.

FH: As we use AI tools to support clinical decision-making, are there concerns about skill atrophy? Could physicians potentially lose some of their own skill sets?

JE: I don’t. There is always going to be a moment when the power goes out, the ventilator stops working, or the computer system goes down, so you need to consider and plan for those situations.

But I think it would be a mistake not to leverage these tools to make your systems more resilient and reliable.

FH: How do you think providers should start the conversation with patients about how they use AI in their practice and how it impacts them? When should they start bringing it up?

JE: That’s an area where we don’t really know what best practice should be. There certainly has to be some degree of transparency. If data is shared and passed to a third party, there are probably situations where we need to let patients know.

If someone is listening in on the conversation, whether it’s a virtual scribe overseas or down the hall, or an AI, the patient likely needs to be informed and needs to understand and agree to the use of those tools. But I don’t think there is a clear best practice yet. I think this is an area where more work needs to be done.