FDA is playing catch-up, releases an “action plan” for AI/ML in medical devices
Tiger’s take-home message:
The FDA is late to the software world. As Peter Verrillo has shared in previous posts, the computational power in medical devices has grown exponentially (a 100X increase in 5 years), whereas the FDA's capacity has remained flat (1X over the same 5 years). They are outmatched. You can imagine that the FDA is struggling to answer a myriad of questions, such as: “How does a medical device maintain Change Control for an algorithm that is changing constantly? What type of reference data are appropriate to utilize in measuring the performance of AI/ML software devices in the field? How much data should be provided to the Agency, and how often? How can the algorithms, models, and claims be validated and tested? How can feedback from end-users be incorporated into the training and evaluation of AI/ML-based SaMD?”
FDA has written a 5-part “action plan” for device companies with AI/ML software. It's mostly a lot of words about what the FDA “wants to do,” with no concrete guidelines or actionable guidance for medical device companies. The title really should not say “Action Plan.” The FDA wants to maintain transparency with patients. The FDA wants to ensure that bias is eliminated when device companies feed the software historical data sets of their choice. No timeline is provided by the FDA.
If you have an AI/ML component in your medical devices, good luck.
Download the FDA PDF here
Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan
Artificial intelligence (AI) and machine learning (ML) technologies have the potential to transform health care by deriving new and important insights from the vast amount of data generated during the delivery of health care every day. Medical device manufacturers are using these technologies to innovate their products to better assist health care providers and improve patient care. One of the greatest benefits of AI/ML in software resides in its ability to learn from real-world use and experience, and its capability to improve its performance. FDA's vision is that, with appropriately tailored total product lifecycle-based regulatory oversight, AI/ML-based Software as a Medical Device (SaMD) will deliver safe and effective software functionality that improves the quality of care that patients receive. Consistent with FDA's longstanding commitment to develop and apply innovative approaches to the regulation of medical device software and other digital health technologies, in April of 2019, FDA published the “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) – Discussion Paper and Request for Feedback.” This paper described the FDA's foundation for a potential approach to premarket review for artificial intelligence and machine learning-driven software modifications. The ideas delineated in the discussion paper leveraged practices from our current premarket programs and relied on the International Medical Device Regulators Forum's risk categorization principles, the FDA's benefit-risk framework, risk management principles described in the software modifications guidance, and the organization-based total product lifecycle approach also envisioned in the Digital Health Software Precertification (Pre-Cert) Pilot Program.

As part of this proposed framework, FDA described a “Predetermined Change Control Plan” in premarket submissions.
This plan would include the types of anticipated modifications—referred to as the “SaMD Pre-Specifications”—and the associated methodology being used to implement those changes in a controlled manner that manages risks to patients—referred to as the “Algorithm Change Protocol.” In this approach, FDA expressed an expectation for transparency and real-world performance monitoring by manufacturers that could enable FDA and manufacturers to evaluate and monitor a software product from its premarket development through postmarket performance. This framework would enable FDA to provide a reasonable assurance of safety and effectiveness while embracing the iterative improvement power of artificial intelligence and machine learning-based software as a medical device.

In addition to describing a proposed framework, the discussion paper asked for stakeholder feedback, both generally and specifically in response to eighteen questions it raised on this topic. The paper inspired significant discussion and other activities in this area, and generated hundreds of comments from a wide array of stakeholders through the public docket. A high-level summary of this feedback is provided below. Additionally, there have been numerous articles [1] in peer-reviewed journals that discuss or reference the framework proposed in the discussion paper. On February 25-26, 2020, FDA held a Public Workshop on the Evolving Role of Artificial Intelligence in Radiological Imaging to discuss emerging applications of artificial intelligence in radiological imaging, including AI/ML-based devices intended to automate the diagnostic radiology workflow as well as guided image acquisition.
At this workshop, the agency worked with interested stakeholders to identify the benefits and risks associated with use of AI in radiological imaging and discussed best practices for the validation of AI-automated radiological imaging software and image acquisition devices.

FDA continues to receive a high volume of marketing submissions and pre-submissions for products leveraging artificial intelligence/machine learning technologies, and we expect this to increase over time. Moreover, since the discussion paper's publication, there has been a strong interest in utilizing a Predetermined Change Control Plan for AI/ML-based medical products as it was described in the paper. On February 7, 2020, FDA announced its marketing authorization, through the De Novo pathway, of the first cardiac ultrasound software that uses artificial intelligence to guide users. This breakthrough device is notable not only for its pioneering intended use but also for the manufacturer's utilization of a Predetermined Change Control Plan to incorporate future modifications.

In response to stakeholder feedback on the discussion paper, in light of the public health need to facilitate innovation through AI/ML-based medical software while providing appropriate oversight for it, and consistent with the mission of the newly launched Digital Health Center of Excellence, the Agency is presenting this AI/ML-Based Software as a Medical Device Action Plan. In this document, we will briefly summarize the feedback we have received from stakeholders in this area, and we will briefly describe a five-part Action Plan to advance this work. Each of the five parts of this Action Plan is intended to address specific stakeholder feedback. Although this Action Plan focuses on SaMD, we expect some of this work may also be relevant to other medical device areas, including Software in a Medical Device (SiMD).
AI/ML SOFTWARE AS A MEDICAL DEVICE ACTION PLAN
This AI/ML-Based Software as a Medical Device Action Plan was developed in direct response to the stakeholder feedback described herein, and it builds on the Agency's longstanding commitment to support innovative work in the regulation of medical device software and other digital health technologies. In order to continue to advance the concepts from the AI/ML discussion paper toward a practical oversight of AI/ML-based SaMD and of the field in general, the Agency has identified the following actions:

[1] For example: Gerke S et al., “The need for a system view to regulate artificial intelligence/machine learning-based software as medical device,” NPJ Digit Med 3, 53 (2020); Harvey et al., “How the FDA Regulates AI,” Academic Radiology 27, 58-61 (2020); Subbaswamy et al., “From development to deployment: dataset shift, causality, and shift-stable models in health AI,” Biostatistics 21, 345-352 (2020).
Part 1 – Tailored Regulatory Framework for AI/ML-based SaMD
What we heard: Stakeholders provided many suggestions for further development of the proposed regulatory framework for AI/ML-based SaMD, including for the Predetermined Change Control Plan described in the discussion paper.
What we’ll do: Update the proposed framework for AI/ML-based SaMD, including through issuance of Draft Guidance on the Predetermined Change Control Plan.

The discussion paper proposed a framework for modifications to AI/ML-based SaMD that relies on the principle of a “Predetermined Change Control Plan.”
As discussed above, the SaMD Pre-Specifications (SPS) describe “what” aspects the manufacturer intends to change through learning, and the Algorithm Change Protocol (ACP) explains “how” the algorithm will learn and change while remaining safe and effective. Stakeholders supported the Agency’s facilitation of algorithms that changed and improved over time, and many developers were enthusiastic about proactively engaging with the Agency on plans for future modifications to their devices. Stakeholders provided specific feedback about the elements that might be included in the SPS/ACP to support safety and effectiveness as the SaMD and its associated algorithm(s) change over time.

In addition to comments received related to the SPS and ACP, there was also feedback in other areas. There was general agreement that the types of modifications to AI/ML software devices proposed in the discussion paper were relevant and appropriate; however, there were suggestions for additional types of modifications to be called out as types of modifications that should fall under this framework. Additionally, the agency received questions about and suggestions for the content, process, and timeframe for a “Focused Review” of a Predetermined Change Control Plan.

FDA is committed to further progress in the development of the framework proposed in the discussion paper. Based on the strong community interest in the Predetermined Change Control Plan, the Agency intends to issue a draft guidance for public comment in this area. This draft guidance will include a proposal of what should be included in an SPS and ACP to support the safety and effectiveness of AI/ML SaMD algorithms. The Agency will leverage docket input received on the AI/ML-based SaMD discussion paper as well as recent submission experience. Our goal is to publish this draft guidance in 2021.
Other areas of development will include refinement of the identification of types of modifications appropriate under the framework, and specifics on the focused review, including the process for submission/review and the content of a submission. Continued community input will be essential for the development of these updates.
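The FDA has not prescribed any format for an SPS or ACP, and the draft guidance is still forthcoming. Purely as an illustrative sketch, a manufacturer might organize the “what” (SPS) and the “how” (ACP) of anticipated modifications in a structured, reviewable record along these lines; every field name below is invented for illustration, not taken from FDA guidance:

```python
from dataclasses import dataclass, field

@dataclass
class PreSpecification:
    """SPS: *what* the manufacturer intends to change through learning."""
    description: str            # e.g. "retrain classifier on additional site data"
    performance_claim: str      # the claim the modified algorithm must still meet
    affected_inputs: list[str]  # data inputs the anticipated change may touch

@dataclass
class ChangeProtocol:
    """ACP: *how* each change is implemented and verified in a controlled way."""
    retraining_procedure: str   # documented data-management and training steps
    validation_dataset: str     # reference data used to re-verify the claim
    acceptance_threshold: float # minimum performance required to release an update
    rollback_plan: str          # action taken if the update underperforms in the field

@dataclass
class PredeterminedChangeControlPlan:
    """A premarket submission could bundle paired SPS/ACP entries."""
    pre_specs: list[PreSpecification] = field(default_factory=list)
    protocols: list[ChangeProtocol] = field(default_factory=list)
```

The point of a structure like this is that each anticipated modification is paired, in advance, with the controlled procedure that will implement and re-verify it, which is the core idea of the Predetermined Change Control Plan.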
Part 2 – Good Machine Learning Practice (GMLP)
What we heard: Stakeholders provided strong general support for the idea and importance of Good Machine Learning Practice (GMLP), and there was a call for FDA to encourage harmonization of the development of GMLP through consensus standards efforts and other community initiatives.
What we’ll do: Encourage harmonization of Good Machine Learning Practice development.

The discussion paper used the term Good Machine Learning Practice, or GMLP, to describe a set of AI/ML best practices (e.g., data management, feature extraction, training, interpretability, evaluation, and documentation) that are akin to good software engineering practices or quality system practices.
Development and adoption of these practices is important not only for guiding the industry and product development, but also for facilitating oversight of these complex products, through manufacturers’ adherence to well-established best practices and/or standards. There have been many efforts to date to describe standards and best practices that could comprise GMLP, including those mentioned below.

Stakeholders generally provided strong support for the idea and importance of GMLP. Additionally, there was a request for FDA to encourage harmonization of the numerous efforts to develop GMLP, including through consensus standards efforts, leveraging already-existing workstreams, and involvement of other communities focused on AI/ML.

Given the need for GMLP, the Agency has been an active participant in numerous efforts related to their development, including standardization efforts and collaborative communities. For example, FDA maintains liaisons to the Institute of Electrical and Electronics Engineers (IEEE) P2801 Artificial Intelligence Medical Device Working Group and the International Organization for Standardization/Joint Technical Committee 1/Subcommittee 42 (ISO/IEC JTC 1/SC 42) – Artificial Intelligence; and it participates in the Association for the Advancement of Medical Instrumentation (AAMI)/British Standards Institution (BSI) Initiative on AI in medical technology. This year, FDA officially became a member of the Xavier AI World Consortium Collaborative Community and the Pathology Innovation Collaborative Community, in addition to its participation in the Collaborative Community on Ophthalmic Imaging. The Agency is also participating in the International Medical Device Regulators Forum (IMDRF) Artificial Intelligence Medical Devices (AIMDs) Working Group.
As part of this Action Plan, FDA is committing to deepening its work in these communities in order to encourage consensus outcomes that will be most useful for the development and oversight of AI/ML based technologies. In keeping with FDA’s longstanding commitment to a robust approach to cybersecurity for medical devices, these GMLP efforts will be pursued in close collaboration with the Agency’s Medical Device Cybersecurity Program.
Part 3 – Patient-Centered Approach Incorporating Transparency to Users
What we heard: Stakeholders called for further discussion with FDA on how AI/ML-based technologies interact with people, including their transparency to users and to patients more broadly.
What we’ll do: Following up on the Agency’s recent Patient Engagement Advisory Committee meeting on AI/ML-based devices, our next step will be to hold a public workshop on how device labeling supports transparency to users and enhances trust in AI/ML-based devices.
The Agency acknowledges that AI/ML-based devices have unique considerations that necessitate a proactive patient-centered approach to their development and utilization that takes into account issues including usability, equity, trust, and accountability. One way that FDA is addressing these issues is through the promotion of the transparency of these devices to users, and to patients more broadly, about the devices’ functioning. Promoting transparency is a key aspect of a patient-centered approach, and we believe this is especially important for AI/ML-based medical devices, which may learn and change over time, and which may incorporate algorithms exhibiting a degree of opacity.

Numerous stakeholders have expressed the unique challenges of labeling for AI/ML-based devices and the need for manufacturers to clearly describe the data that were used to train the algorithm, the relevance of its inputs, the logic it employs (when possible), the role intended to be served by its output, and the evidence of the device’s performance. Stakeholders expressed interest in FDA proactively clarifying its position on transparency of AI/ML technology in medical device software.

The Agency is committed to supporting a patient-centered approach including the need for a manufacturer’s transparency to users about the functioning of AI/ML-based devices to ensure that users understand the benefits, risks, and limitations of these devices. To this end, in October 2020, the Agency held a Patient Engagement Advisory Committee (PEAC) meeting devoted to AI/ML-based devices in order to gain insight from patients into what factors impact their trust in these technologies. The Agency is currently compiling input gathered during this PEAC meeting; our proposed next step is to hold a public workshop to share learnings and to elicit input from the broader community on how device labeling supports transparency to users.
We intend to consider this input for identifying types of information that FDA would recommend a manufacturer include in the labeling of AI/ML-based medical devices to support transparency to users. These activities to support the transparency of and trust in AI/ML-based technologies will be informed by FDA’s participation in community efforts, referenced above, such as standards development and patient-focused programs. They will be part of a broader effort to promote a patient-centered approach to AI/ML-based technologies based on transparency to users.
Part 4 – Regulatory Science Methods Related to Algorithm Bias & Robustness
What we heard: Stakeholders described the need for improved methods to evaluate and address algorithmic bias and to promote algorithm robustness.
What we’ll do: Support regulatory science efforts to develop methodology for the evaluation and improvement of machine learning algorithms, including for the identification and elimination of bias, and for the evaluation and promotion of algorithm robustness.
Bias and generalizability are not issues exclusive to AI/ML-based devices. Given the opacity of the functioning of many AI/ML algorithms, as well as the outsized role we expect these devices to play in health care, it is especially important to carefully consider these issues for AI/ML-based products. Because AI/ML systems are developed and trained using data from historical datasets, they are vulnerable to bias – and prone to mirroring biases present in the data. Health care delivery is known to vary by factors such as race, ethnicity, and socio-economic status; therefore, it is possible that biases present in our health care system may be inadvertently introduced into the algorithms. The Agency recognizes the crucial importance for medical devices to be well suited for a racially and ethnically diverse intended patient population and the need for improved methodologies for the evaluation and improvement of machine learning algorithms. This includes methods for the identification and elimination of bias, and on the robustness and resilience of these algorithms to withstand changing clinical inputs and conditions.

FDA is supporting numerous regulatory science research efforts to develop these methods to evaluate AI/ML-based medical software. This work is being conducted through collaborations with leading researchers, including at our Centers for Excellence in Regulatory Science and Innovation (CERSIs) at the University of California San Francisco (UCSF), Stanford University, and Johns Hopkins University. We will continue to develop and expand these regulatory science efforts and share our learnings as we continue to collaborate on efforts to improve the evaluation and development of these novel products.
Part 5 – Real-World Performance (RWP)
What we heard: Stakeholders described the need for clarity on Real-World Performance (RWP) monitoring for AI/ML software.
What we’ll do: Work with stakeholders who are piloting the RWP process for AI/ML-based SaMD.

The discussion paper described the notion that to fully adopt a total product lifecycle (TPLC) approach to the oversight of AI/ML-based SaMD, modifications to these SaMD applications may be supported by collection and monitoring of real-world data.
Gathering performance data on the real-world use of the SaMD may allow manufacturers to understand how their products are being used, identify opportunities for improvements, and respond proactively to safety or usability concerns. Real-world data collection and monitoring is an important mechanism that manufacturers can leverage to mitigate the risk involved with AI/ML-based SaMD modifications, in support of the benefit-risk profile in the assessment of a particular marketing submission.

Stakeholders raised numerous questions, including: What type of reference data are appropriate to utilize in measuring the performance of AI/ML software devices in the field? How much of the oversight should be performed by each stakeholder? How much data should be provided to the Agency, and how often? How can the algorithms, models, and claims be validated and tested? How can feedback from end-users be incorporated into the training and evaluation of AI/ML-based SaMD? Overall, stakeholder feedback expressed the need for clarity and direction in this area.

As part of this Action Plan, the Agency will support the piloting of real-world performance monitoring by working with stakeholders on a voluntary basis. This will be accomplished in coordination with other ongoing FDA programs focused on the use of real-world data. This work aims to help FDA develop a framework that can be used for seamless gathering and validation of relevant RWP parameters and metrics for AI/ML-based SaMD in the real world. Additionally, evaluations performed as part of these efforts could be used to determine thresholds and performance evaluations for the metrics most critical to the RWP of AI/ML-based SaMD, including those that could be used to proactively respond to safety and/or usability concerns, and for eliciting feedback from end users. These efforts will include engagement with the public.
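The pilots have not yet defined specific RWP metrics or thresholds, but the kind of monitoring described above can be illustrated with a toy rule: flag a deployed device when a pre-agreed field metric stays below its acceptance threshold for several consecutive reporting windows. The function name, the threshold, and the consecutive-window logic are all hypothetical choices for this sketch:

```python
def check_real_world_performance(window_metrics, threshold, min_windows=3):
    """Return True when a rolling real-world metric (e.g. weekly sensitivity)
    stays below a pre-agreed threshold for `min_windows` consecutive
    reporting windows -- a trigger to investigate, retrain, or roll back."""
    consecutive = 0
    for value in window_metrics:
        consecutive = consecutive + 1 if value < threshold else 0
        if consecutive >= min_windows:
            return True   # sustained degradation: invoke the response plan
    return False
```

Requiring several consecutive low windows, rather than reacting to a single dip, is one simple way to distinguish sustained drift from ordinary week-to-week noise.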
FDA very much appreciates the feedback provided related to regulatory approaches to AI/ML-based medical software, including through the public docket, workshops and other community events, peer-reviewed publications, and marketing submissions. The AI/ML Software as a Medical Device Action Plan described in this document was developed in direct response to this feedback and is intended as a multi-pronged approach to further advance the Agency’s oversight of these technologies. Continued stakeholder engagement is crucial for the success of this work, which will be coordinated through the Center for Devices and Radiological Health’s newly announced Digital Health Center of Excellence.
In summary, as part of this Action Plan, the Agency is highlighting the following intended actions and goals:
• Develop an update to the proposed regulatory framework presented in the AI/ML-based SaMD discussion paper, including through the issuance of a Draft Guidance on the Predetermined Change Control Plan.
• Strengthen FDA’s encouragement of the harmonized development of Good Machine Learning Practice (GMLP) through additional FDA participation in collaborative communities and consensus standards development efforts.
• Support a patient-centered approach by continuing to host discussions on the role of transparency to users of AI/ML-based devices. Building upon the October 2020 Patient Engagement Advisory Committee (PEAC) Meeting focused on patient trust in AI/ML technologies, hold a public workshop on medical device labeling to support transparency to users of AI/ML-based devices.
• Support regulatory science efforts on the development of methodology for the evaluation and improvement of machine learning algorithms, including for the identification and elimination of bias, and on the robustness and resilience of these algorithms to withstand changing clinical inputs and conditions.
• Advance real-world performance pilots in coordination with stakeholders and other FDA programs, to provide additional clarity on what a real-world evidence generation program could look like for AI/ML-based SaMD.
We acknowledge that AI/ML-based SaMD is a rapidly progressing field, and we anticipate that this Action Plan will continue to evolve as we pursue these activities and seek to provide additional clarity in this space. We welcome your continued feedback through the public docket (FDA-2019-N-1185) at www.regulations.gov, and we look forward to engaging with you on these efforts.