Artificial intelligence (AI) has exerted only a modest effect on clinical radiology to date, but that will change rapidly in the months and years ahead. Stakeholders have raised a number of questions regarding the FDA’s discussion paper on regulation of AI, but the American agency will not have to go it alone in this endeavor, thanks to recent developments at the International Medical Device Regulators Forum (IMDRF).
The FDA’s April 2, 2019, discussion paper states that AI and machine learning (ML) algorithms are unique among products in the software as a medical device (SaMD) category because of their ability to adapt on the basis of real-world feedback. The paper calls for an appropriately tailored mechanism of oversight and states that developments in this space prompted the FDA “to reimagine an approach to premarket review” for modifications to these algorithms.
To date, the FDA has approved multiple AI and ML algorithms, although the agency said these were locked rather than adaptive algorithms. The iterative nature of adaptive algorithms calls for a novel total product lifecycle (TPLC) approach, the paper indicates, yet the FDA nonetheless refers on more than one occasion to its guidance on when to submit a 510(k) for a software change as a possible frame of reference for managing those changes.
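The locked/adaptive distinction is easier to see in code. The sketch below is purely illustrative and is not drawn from the FDA paper; the class names, the single-weight model, and the feedback rule are inventions for this example. A locked model returns the same output for the same input for as long as it is deployed, while an adaptive model shifts its own parameters in response to post-deployment clinician feedback, which is the behavior a TPLC approach would have to oversee.

```python
from dataclasses import dataclass

@dataclass
class LockedModel:
    """Parameters are frozen at clearance; the same input always yields the same output."""
    weight: float = 0.8

    def predict(self, finding_score: float) -> bool:
        # A positive call when the weighted score clears a fixed threshold.
        return finding_score * self.weight > 0.5

@dataclass
class AdaptiveModel:
    """Adjusts its own parameters from clinician feedback after deployment."""
    weight: float = 0.8
    learning_rate: float = 0.01

    def predict(self, finding_score: float) -> bool:
        return finding_score * self.weight > 0.5

    def update(self, finding_score: float, clinician_label: bool) -> None:
        # Nudge the weight toward agreement with the clinician's label; this
        # post-deployment drift is what a TPLC approach would have to oversee.
        error = float(clinician_label) - float(self.predict(finding_score))
        self.weight += self.learning_rate * error * finding_score
```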
Despite what might seem to be a series of mixed messages about the need for a novel regulatory framework, the statement that accompanied the paper suggests the agency saw a need for a new approach to AI and ML. Scott Gottlieb, then the commissioner of the FDA, said the paper contemplates a new regulatory paradigm tailored specifically to these adaptive algorithms, one that would require the developer to submit a predetermined change control plan. Gottlieb indicated that the FDA’s current statutory authorities would suffice to support this regulatory mechanism, a position that would alleviate any need for congressional intervention.
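As the discussion paper describes it, the predetermined change control plan pairs a pre-specification of the modifications the developer anticipates with a protocol for implementing them in a controlled way. A minimal sketch of how such a gate might operate follows; the modification types and thresholds are hypothetical choices for illustration, not drawn from the paper.

```python
# Hypothetical sketch of a predetermined change control plan acting as a gate:
# modifications inside the pre-specified envelope proceed under the plan, while
# anything outside it would presumably trigger a fresh premarket review. The
# modification types and thresholds here are illustrative, not the FDA's.
ANTICIPATED_MODIFICATIONS = {
    "retrain_on_new_site_data": {"max_sensitivity_drop": 0.00},
    "decision_threshold_tuning": {"max_sensitivity_drop": 0.01},
}

def within_change_plan(modification: str, sensitivity_drop: float) -> bool:
    spec = ANTICIPATED_MODIFICATIONS.get(modification)
    if spec is None:
        return False  # an unanticipated change falls outside the plan
    return sensitivity_drop <= spec["max_sensitivity_drop"]

assert within_change_plan("decision_threshold_tuning", 0.005)
assert not within_change_plan("expanded_intended_use", 0.0)
```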
Work on Standards, Conformity Assessment Moving Forward
In September 2019, the IMDRF and the Global Diagnostic Imaging, Healthcare IT & Radiation Therapy Trade Association (DITTA) convened a meeting in Yekaterinburg, Russia, that included a presentation on standardization activities in connection with AI in imaging. A slide deck prepared for the meeting lists seven groups that are developing a variety of work products for a joint committee of the International Organization for Standardization and the International Electrotechnical Commission (the ISO/IEC committee).
One of the slides indicates that the ISO/IEC committee is “informally thinking about developing a QMS standard,” but little detail is offered. Definitions consumed some bandwidth as well; the presenter noted that bias is a feature some clinicians may be less than anxious about, so long as the algorithm is biased toward their own patient populations.
Another slide deck from the Yekaterinburg meeting details some of the differences in conformity assessment between AI and non-AI premarket applications. Among these details is that a change plan should be included in the premarket dossier, but there is also mention of a precertification program for the developer, a feature that for the FDA is unique to the SaMD Pre-Cert program. (A technology certification mechanism is under consideration in the latest version of legislation for FDA regulation of lab-developed tests; that mechanism was described as precertification in the previous iteration.) The FDA’s AI discussion paper is cited as a marker of sorts for change management, although the slide that makes this reference reiterates the need for a precertification program.
Advocates of regulatory harmonization will be encouraged by the mention of ISO 13485 as the preferred approach to quality management system (QMS) regulation. It is worth noting that health care facilities that assume the task of managing an algorithm’s adaptations might themselves be liable for compliance with a QMS, at least with regard to specification and data management.
No Room for ‘Glamour AI’
Several trade associations commented on the paper, including the Coalition for Healthcare AI, which said the FDA should be vigilant in ensuring that “glamour AI” does not reach the marketplace. The group said several collaborations are developing standards for best practices, one of which is credited with work on the ethical and legal foundations for AI.
The AI Startups in Health Coalition raised several pre-market and post-market concerns, among them that an FDA requirement for large data sets fails to recognize:
- That the required volume of data might not be available in time for urgent needs, such as pandemics;
- That the connection between data volume and data quality is tenuous; and
- That such a stipulation could needlessly limit competition when smaller data sets will suffice, a particularly salient concern when small developers lack the resources to acquire larger, more expensive data sets.
The Coalition also questioned whether the FDA has the requisite data infrastructure to process the volumes of data that would land on its doorstep. This may prove especially pertinent in the context of Medical Device Reporting (MDR) requirements, particularly given the prospect that this requirement could trigger a data dump that might serve more to obfuscate than to clarify the question of whether a safety signal exists.
There is also a concern that raw or annotated data the developer discloses to the FDA could in turn be disclosed to outside parties following a petition under the Freedom of Information Act. The Coalition said the existing MDR requirements are sufficient for handling post-market reporting for these algorithms, but recommended that the FDA not forgo in-person inspections, because those inspections would give the agency’s inspectional staff direct access to any data sets deemed relevant to MDRs.
The medical community expressed a different set of reservations about regulation. For example, the Radiological Society of North America (RSNA) made the case that for the foreseeable future, AI should be applied to diagnostic imaging only when the algorithm is intended to “supplement or support, rather than replace, the interpretive skills of physician experts.” RSNA also said that autonomous algorithms should be subjected to rigorous review under the premarket approval (PMA) program, along with stringent post-market surveillance.
The association also noted that it has conducted AI challenges over the past four years, an effort that has generated large, annotated data sets that can be used to validate detection algorithms. This type of activity will continue to grow at RSNA, which has also launched an imaging and data repository drawn from patients diagnosed with and treated for COVID-19, the disease caused by the SARS-CoV-2 virus. These are just two of the society’s activities directed toward development of a range of data libraries that can be used to validate AI algorithms, the group said.
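As a rough illustration of how annotated data sets of this kind are used to validate a detection algorithm, the generic sketch below compares an algorithm’s calls against curated labels and reports sensitivity and specificity; it is an assumption-laden example, not RSNA’s actual pipeline or metrics.

```python
# Generic sketch of validating a detection algorithm against an annotated data
# set: compare the algorithm's calls with the curated labels and report
# sensitivity and specificity. Illustrative only, not RSNA's actual tooling.
def validate(predictions: list[bool], annotations: list[bool]) -> dict[str, float]:
    tp = sum(p and a for p, a in zip(predictions, annotations))
    tn = sum(not p and not a for p, a in zip(predictions, annotations))
    fp = sum(p and not a for p, a in zip(predictions, annotations))
    fn = sum(not p and a for p, a in zip(predictions, annotations))
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }

# Example: four studies, two calls agree with the annotations and two do not.
print(validate([True, True, False, False], [True, False, False, True]))
```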