FDA Outlines Plans for Medical Devices with AI

We frequently hear the buzzword “AI” (artificial intelligence), usually coupled with predictions about how it is coming for our jobs and how different the world will look once artificial intelligence reaches many facets of our lives. One area where there is little disagreement that it can make a big difference, however, is health care. According to the United States Food and Drug Administration (FDA), AI has the “potential to transform health care by deriving new and important insights from the vast amount of data generated during the delivery of health care every day.” The FDA also recognizes that medical device manufacturers can use AI technology to innovate their products, better assist health care providers, and improve patient care, but that AI-driven changes to a medical device would likely be subject to premarket review and require a regulatory framework beyond what currently exists.

To that end, the FDA is considering a total product lifecycle-based regulatory framework for the technology that would permit modifications and adaptations based on real-world learning, while still ensuring that the safety and effectiveness of the software as a medical device are maintained. On April 2, 2019, the FDA published a discussion paper, “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) – Discussion Paper and Request for Feedback,” which outlines the FDA’s foundation for a possible approach to premarket review of AI software modifications.

The Discussion Paper

The discussion paper proposes a “focused review” approach to premarket assessments of SaMD technologies powered by AI/ML, one adapted to the iterative and autonomous nature of these tools and designed to better leverage their ability to continuously learn from, and improve performance based on, real-world experience.

Under the proposed framework, AI/ML-based SaMD would require a premarket submission when a software change or modification “significantly affects device performance or safety and effectiveness; the modification is to the device’s intended use; or the modification introduces a major change to the SaMD algorithm.”

The framework also introduces the concept of a “predetermined change control plan” to be included in premarket submissions. This plan would describe the types of anticipated modifications (the “Software as a Medical Device Pre-Specifications”) and the associated methodology for implementing those changes in a controlled manner that manages risks to patients (the “Algorithm Change Protocol”). A rough sketch of what such a plan might contain appears below.
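The following Python sketch is a hypothetical illustration only, intended to make the two components of the plan concrete. The class and field names are invented for this example and are not defined by the FDA; the discussion paper describes the pre-specifications and change protocol in narrative terms, not as a data format.

```python
# Hypothetical illustration of a "predetermined change control plan" with its
# two named components: the SaMD Pre-Specifications (SPS) and the Algorithm
# Change Protocol (ACP). All field names and example values are invented.

from dataclasses import dataclass, field


@dataclass
class PreSpecifications:
    """SPS: the kinds of modifications the manufacturer anticipates making."""
    anticipated_modifications: list = field(default_factory=list)


@dataclass
class AlgorithmChangeProtocol:
    """ACP: how anticipated changes would be implemented while managing risk."""
    data_management: str = ""
    retraining_method: str = ""
    performance_evaluation: str = ""
    update_procedure: str = ""


@dataclass
class PredeterminedChangeControlPlan:
    sps: PreSpecifications
    acp: AlgorithmChangeProtocol


# Example (entirely illustrative) plan a manufacturer might describe.
plan = PredeterminedChangeControlPlan(
    sps=PreSpecifications(anticipated_modifications=[
        "Improve analytical performance by retraining on new labeled data",
        "Extend input compatibility to an additional imaging device",
    ]),
    acp=AlgorithmChangeProtocol(
        data_management="Curated clinical data with defined inclusion criteria",
        retraining_method="Periodic retraining against a held-out validation set",
        performance_evaluation="Pre-specified sensitivity/specificity thresholds",
        update_procedure="Staged rollout with rollback and user notification",
    ),
)
```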

The FDA would expect a commitment from manufacturers on transparency and real-world performance monitoring for AI/ML-based SaMD, as well as periodic updates to the FDA on what changes were implemented as part of the approved pre-specifications and the algorithm change protocol.

The proposed regulatory framework could enable the FDA and manufacturers to evaluate and monitor a software product from premarket development through postmarket performance. It would allow the FDA’s regulatory oversight to embrace the iterative improvement power of AI/ML-based SaMD while continuing to ensure patient safety.

The ideas in the discussion paper draw on International Medical Device Regulators Forum (IMDRF) risk categorization principles, the FDA’s benefit-risk framework, the risk management principles in the FDA’s 2017 guidance on when to submit a new 510(k) for software changes to existing devices, the organization-based total product lifecycle (TPLC) approach envisioned in the Software Precertification (Pre-Cert) Pilot Program, and the 510(k), De Novo classification request, and premarket approval pathways.

Commissioner Gottlieb’s Remarks

On the same day the discussion paper was released, FDA Commissioner Scott Gottlieb issued a statement noting that AI is already being used in an approved device that detects diabetic retinopathy and in a second AI-based device that alerts providers to a potential stroke in patients.

Those two approved uses of AI, however, are based on “locked” algorithms: models that do not adapt with use and can be retrained on new data only at pre-approved intervals. Gottlieb and the FDA aim to one day be able to approve “adaptive” or “continuously learning” AI/ML algorithms, which do not require manual modifications to incorporate learning or make updates.
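To make the distinction concrete, here is a minimal, self-contained Python sketch contrasting the two behaviors. The model itself is a trivial weighted-sum predictor, and the class names and update policy are illustrative assumptions, not part of any FDA framework or approved device.

```python
# Minimal sketch: a "locked" algorithm vs. a "continuously learning" one.
# The predictor and update rule are deliberately trivial and illustrative.

class LockedModel:
    """Frozen after approval; retrained only during a pre-approved update window."""

    def __init__(self, weights):
        self._weights = list(weights)       # parameters fixed at approval
        self._update_window_open = False

    def predict(self, features):
        # Same inputs always yield the same output between approved updates.
        return sum(w * x for w, x in zip(self._weights, features))

    def open_update_window(self):
        # e.g., a manufacturer-initiated, validated retraining cycle
        self._update_window_open = True

    def retrain(self, new_weights):
        if not self._update_window_open:
            raise RuntimeError("Locked model: changes allowed only at pre-approved intervals")
        self._weights = list(new_weights)
        self._update_window_open = False


class ContinuouslyLearningModel:
    """Adapts its parameters as each new real-world observation arrives."""

    def __init__(self, weights, learning_rate=0.01):
        self._weights = list(weights)
        self._lr = learning_rate

    def predict(self, features):
        return sum(w * x for w, x in zip(self._weights, features))

    def update(self, features, observed_outcome):
        # Incorporates learning automatically, with no manual modification step.
        error = observed_outcome - self.predict(features)
        self._weights = [w + self._lr * error * x
                         for w, x in zip(self._weights, features)]


if __name__ == "__main__":
    locked = LockedModel([0.5, 0.5])
    adaptive = ContinuouslyLearningModel([0.5, 0.5])
    print(locked.predict([1.0, 2.0]))     # unchanged until an approved update
    adaptive.update([1.0, 2.0], 2.0)      # shifts with every new data point
    print(adaptive.predict([1.0, 2.0]))
```

The regulatory difficulty the framework targets is visible in the second class: because the adaptive model’s behavior drifts with every update, a one-time premarket snapshot cannot fully characterize it, which is why the proposal pairs pre-specified change plans with ongoing real-world performance monitoring.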

Gottlieb stated that the FDA plans to issue draft guidance that will be crafted, at least in part, based on input received in response to the discussion paper. He plans to apply current authorities granted to the Agency in new ways “to keep up with the rapid pace of innovation and ensure the safety of these devices.”

He also focused on the importance of collaboration, saying, “As with all of our efforts in digital health, collaboration will be key to developing this appropriate framework.”

Comments on the discussion paper are to be submitted no later than June 3, 2019.
