Enabling the Next Wave of Artificial Intelligence in Medical Imaging

Artificial intelligence in medical imaging is having more real-world impact on health care than AI is having in most other industries.

In the span of a decade, AI and machine-learning software have gone from back-end research capabilities to promising clinical tools. Studies show these applications can already improve physician efficiency, predict and identify strokes, and help radiologists better detect breast cancer.

The next wave of AI algorithms in medical imaging promises even more.

The adoption of AI in radiological imaging must support a Quadruple Aim: improving public health, enhancing patient experience, driving cost efficiency, and improving the experience of clinicians and health care staff. As more advanced AI solutions come to market, important topics need to be considered, including how algorithms are trained, how AI performance is validated, how patient data are secured, and what the implications are for patient access. The opportunity for AI innovation in medical imaging is great, but if AI is implemented inappropriately or inconsistently, its use may have unintended consequences for care, cost and risk.

Today, the Food and Drug Administration will host a public workshop to examine the evolving role of AI in medical imaging. Feedback gathered from workshop participants will help inform the FDA of the benefits and risks associated with use of AI in radiological imaging and the best practices for the validation of AI-automated radiological imaging software and image acquisition devices.

As the chair of an industry association whose members pioneered machine learning three decades ago, I urge the FDA to consider a simple, overriding philosophy as it works to address the wave of new AI applications: Don’t fix what isn’t broken.

To take advantage of the rapid development cycles that AI offers while supporting the Quadruple Aim, it’s important to create consistent regulatory oversight rooted in industry standards and best practices.

Today, FDA regulatory structures certify that new AI solutions are safe and effective. Current FDA regulatory pathways, such as 510(k) and De Novo classification, are already being used to clear emerging AI technologies, demonstrating the adaptability of these pathways. In fact, over just a five-year period, the number of AI algorithms the FDA has cleared in medicine has increased dramatically. Something is going right.

This doesn’t mean the FDA should not take proactive steps, when necessary and while ensuring patient safety and efficacy, to update processes or develop guidance that allows for greater flexibility and efficiency in AI technology development. Unnecessary clinical data and regulatory review burdens must be avoided, as they would restrict patient and provider access, disincentivize the creation of new AI-related technologies, and slow their deployment.

The promise of AI in medical imaging to organize and analyze data quickly is enormous, but it is not without challenges. The FDA plays a pivotal role in determining how much physicians and patients ultimately benefit from AI technology in medical imaging and the degree to which AI ultimately advances the Quadruple Aim.

I am hopeful that, building on existing regulatory pathways with clear guidance and transparent, flexible thinking, the FDA can effectively regulate next-generation AI software and ensure a smooth delivery from the lab to the patient’s bedside.

 

Dennis Durmis is senior vice president – radiology head of Americas region for Bayer and chair of the Medical Imaging & Technology Alliance board of directors.
