The FDA released an action plan for its regulatory approach to artificial intelligence (AI), which provides a few insights into the agency’s intentions. Despite the newsworthiness of the action plan, it represents little more than a roadmap for the FDA’s ambitions in this area rather than an actionable regulatory document.
The Jan. 12 action plan was unveiled with a press release stating that the plan constitutes a “holistic approach” to product lifecycle management for AI and machine learning (ML) algorithms. The plan responds in part to feedback on the April 2019 discussion paper, which is characterized as a proposed regulatory framework. As we discussed previously, the FDA is not working entirely in isolation in its development of a regulatory framework for AI, thanks to the interest demonstrated by the International Medical Device Regulators Forum (IMDRF).
However, the 2019 FDA paper suggested that new statutory authorities may be needed to implement a framework for AI and ML, even though then-FDA commissioner Scott Gottlieb seemed to indicate in an April 2, 2019, statement that no such authorities are needed.
The action plan lists several objectives, starting with a promise to publish a draft guidance for predetermined change control plans for adaptive algorithms. This is perhaps the most critical of all regulatory considerations in the long run, given the widespread hope that these algorithms may one day automatically incorporate the latest science and thus offer diagnoses that provide maximum sensitivity and specificity. The agency states that its objective is to publish the draft at some point in 2021, without making clear whether that is intended to mean the fiscal or the calendar year.
In either case, a final guidance is unlikely to be made publicly available until calendar year (CY) 2022 unless the draft is available substantially earlier than the end of FY 2021. The question of how to manage a learning algorithm’s adaptations may strike some observers as sufficiently thorny to suggest that 90 days is the shortest feasible comment period.
Several other considerations are enumerated in the action plan, including:
- A patient-centered approach to transparency to users;
- Methods for ensuring that algorithms do not unintentionally fall prey to bias; and
- Methods of oversight and tracking of an algorithm’s real-world performance (RWP).
GMLP and the Part 820/ISO 13485 Question
Another consideration that is central to a developer’s premarket activities is the use of good machine learning practices (GMLPs). In our comments to the docket, we at Enzyme had emphasized several specific considerations for GMLPs. One of our concerns is the need to explain the interaction between the FDA’s Quality Systems Regulation (QSR) and IEC 62304:2006, which is endorsed by the International Organization for Standardization and appears on the FDA’s list of recognized standards.
One of the issues in this context is that an adaptive algorithm does not fit neatly into what we refer to as the “waterfall” product development cycle, and the FDA will therefore need some finesse if this and other standards, including IEC 82304-1:2016, are to remain relevant to the regulation of AI and ML algorithms.
We also recommended that the FDA create a public-private working group to revisit the current approach to quality systems for software as a medical device (SaMD), including GMLP, and we pointed out that software developers are at present left with the use of the QSR and ISO 13485 to incorporate GMLP into their quality practices. It may be relevant to note that the FDA’s device center is still examining a partial rewrite of the QSR to more closely align it with ISO 13485, an objective that could complicate the GMLP question.
The action plan acknowledges calls for efforts to develop consensus standards for GMLPs and notes that the FDA officially became a member of the Xavier AI World Consortium Collaborative Community just this year. The plan commits the agency to continue working with this and other groups that are developing consensus standards, and states that any FDA work in the area of GMLPs would be “pursued in close collaboration” with the agency’s cybersecurity program.
The IMDRF has a work item for artificial intelligence medical devices (AIMD), the scope of which includes machine learning. Precisely where this work item stands is difficult to determine inasmuch as there are no documents included in this work item’s webpage, and the report from the IMDRF’s most recent meeting, held in September 2020, does not mention any work done on this item.
However, the meeting did include a presentation proposing the adoption of a harmonized regulatory framework, even though five regulatory authorities other than the FDA are working on their own internal regulatory proposals. There is also the question of the AIMD acronym itself, which is also used to denote active implantable medical devices.
Our Perspectives on Change Control
An algorithm’s predetermined change control plan would consist of two principal parts, starting with a list of SaMD pre-specifications (SPS) that would be subject to adaptations as the algorithm learns. The algorithm change protocol (ACP) describes how the algorithm would change without detracting from the safety and effectiveness of its use. In our view, the SPS should indicate the ranges of possible evolution across all dimensions that would be subject to learnings.
We also noted in our response to the 2019 discussion paper that “ACP” might not be the best way to characterize the underlying regulatory consideration, because this function might more accurately be described as a verification and validation test plan that incorporates several considerations. Among these are:
- How a change incurred due to learnings would be evaluated;
- What the acceptance criteria for approving algorithm changes would be; and
- How testing would be accomplished under a validation/verification test plan.
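To make the idea concrete, the considerations above could be sketched as a simple acceptance gate that an adapted algorithm must clear before a change is approved. Everything in this sketch — the metric names, the thresholds, and the regression tolerance — is illustrative, not anything the FDA or the action plan has specified:

```python
# Hypothetical sketch of an ACP framed as a verification/validation test plan.
# Metrics, floors, and tolerances below are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class ChangeEvaluation:
    sensitivity: float  # true-positive rate on a locked validation set
    specificity: float  # true-negative rate on the same set


def meets_acceptance_criteria(
    baseline: ChangeEvaluation,
    candidate: ChangeEvaluation,
    min_sensitivity: float = 0.90,
    min_specificity: float = 0.85,
    max_regression: float = 0.02,
) -> bool:
    """Approve an algorithm change only if the adapted candidate clears
    absolute performance floors AND does not regress materially against
    the currently cleared baseline."""
    if candidate.sensitivity < min_sensitivity:
        return False
    if candidate.specificity < min_specificity:
        return False
    if baseline.sensitivity - candidate.sensitivity > max_regression:
        return False
    if baseline.specificity - candidate.specificity > max_regression:
        return False
    return True


baseline = ChangeEvaluation(sensitivity=0.93, specificity=0.90)
improved = ChangeEvaluation(sensitivity=0.95, specificity=0.91)
regressed = ChangeEvaluation(sensitivity=0.93, specificity=0.86)

print(meets_acceptance_criteria(baseline, improved))   # True
print(meets_acceptance_criteria(baseline, regressed))  # False: specificity fell by 0.04
```

The point of the sketch is the structure, not the numbers: each bullet above maps to a distinct, pre-specified check (evaluation against a locked dataset, explicit acceptance criteria, and a repeatable test procedure) that could be documented before any adaptation occurs.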
The section of the action plan dealing with RWP monitoring suggests the agency will develop a pilot program that is voluntary for participants. The emphasis will be on methods of collecting and analyzing RWP data that allow a proactive response to any safety and/or usability concerns. This would be preceded, however, by the development of a set of parameters and metrics by which an algorithm’s real-world performance would be judged. Although the FDA’s progress in this and other areas of digital regulation is sure to be slowed by the COVID-19 pandemic, the AI/ML question bears watching, and we at Enzyme will keep the reader updated as these developments unfold.