The European Commission (EC) has released a draft legislative framework for the regulation of artificial intelligence (AI), intended to span all commercial uses of the technology. The key point for developers of software and hardware medical devices is that the Artificial Intelligence Act (AIA) would create a number of new regulatory requirements that could make the path to market more difficult.
The EC said the draft AIA framework and an accompanying series of annexes are designed to position the European Union (EU) to play a leading global role in the development and use of AI across health, finance and agriculture. The draft follows the EC's Feb. 19, 2020, white paper on AI, and also responds to prodding from the European Parliament and the European Council, both of which are said to have pressed the EC repeatedly to develop legislation.
The proposal seems to suggest that AI systems used in healthcare are necessarily high-risk products, stating that the high-risk designation applies to any system intended to be used as a safety component of a product that is subject to third-party conformity assessment. However, this is not necessarily the case when other legislation applies a different risk classification to the product: the passage specifically points to the Medical Device Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR) as providing exceptions to the general principle.
There are several areas in which AI systems would be subject to requirements that may overlap with the MDR and/or the IVDR. Among these considerations are:
- Training, validation and testing data sets (Article 10);
- Technical documentation (Article 11); and
- Record keeping and maintenance of logs for traceability (Article 12).
The extent to which the specific requirements for each will be additive to or in conflict with the MDR and the IVDR is not clear, and it should be noted that this legislation is still in draft form. The features of any enacting regulation are also unknown, and device makers and developers of digital health products will have to stay abreast of these and a number of other considerations for a European market that is already undergoing a considerable overhaul.
Risk Management, QMS Provisions Included
There are several overarching areas that will affect developers of these algorithms in ways that are difficult to predict. Article 17 of the AIA draft sets out the requirements for a quality management system (QMS), but does not specify a particular QMS standard. The assumption may be that ISO 9001 will be the default QMS standard for AI systems; ISO 13485, the QMS standard for medical devices, largely mirrors ISO 9001 but is more specific in several areas, such as the requirement for a designated management representative.
The AIA draft calls for the developer’s QMS to encode a compliance strategy that provides details on conformity assessment procedures, as well as procedures for management of modification to any high-risk AI systems. Given the lack of detail as to what may be required in the associated conformity assessment, developers may find themselves with two conformity assessments rather than one. There is also no indication whether a notified body (NB) would have to be dually certified to handle a review of an algorithm for both sets of requirements.
The question of modification management – or, in the parlance of the U.S. FDA, change control – may also prove problematic, particularly given that the FDA’s approach to change control for AI has not yet been spelled out even in draft guidance. As we explained recently, the FDA’s latest moves in the area of AI are limited to an action plan that calls for development of a guidance on change control, a guidance that may emerge before an enacted version of the AIA is translated into actionable requirements. It may be noted, however, that Article 43 of the AIA states that learning algorithms undergoing pre-specified changes once on the market are not deemed to have undergone a substantial modification, and thus would face a modest regulatory requirement at worst.
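The Article 43 treatment of pre-specified changes can be pictured as a simple membership test: changes the developer declared up front do not count as substantial modifications. This is a sketch under stated assumptions only; the change-type labels below are invented for illustration and do not come from the AIA.

```python
# Hypothetical sketch of the Article 43 rule described above.
# The change-type names are illustrative assumptions, not regulatory terms.

# Changes the developer pre-specified at conformity assessment (assumed examples).
PRE_SPECIFIED_CHANGES = {"retrain_on_new_data", "threshold_tuning"}

def is_substantial_modification(change_type: str) -> bool:
    """A change that was pre-specified at conformity assessment is not
    treated as a substantial modification; any other change would be."""
    return change_type not in PRE_SPECIFIED_CHANGES
```

On this reading, the regulatory burden for a deployed learning algorithm turns on how thoroughly the developer anticipated its post-market changes at the time of the original assessment.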
Article 9 of the AIA describes risk management requirements, and is similarly limited to high-risk systems. Article 9 covers the total product life cycle (TPLC), and the associated requirements would include:
- Identification and analysis of known and foreseeable risks;
- Evaluation of risks that are suggested by data gathered in post-market monitoring; and
- Estimation and evaluation of risks that may emerge, including under conditions of “reasonably foreseeable misuse.”
Article 9 also invokes the notion of residual risk, which already has a definition in the context of medical technology: under ISO 14971, residual risk is the risk that remains after risk control measures have been implemented. As we discussed previously, however, there is a significant lack of alignment between the FDA’s approach to risk management and the approach spelled out in the ISO standard.
The draft AIA acknowledges the role of harmonized standards in demonstrating compliance, but industry already faces a fair amount of uncertainty regarding standards. The European Commission recently asked the two standards organizations responsible for the related standards to develop more than two dozen new standards and to revise more than 200 existing ones over the next three years. The Medical Device Coordination Group has also published a guidance on the use of standards, but there is no way to predict how the AI regulation might ultimately interact with these other activities.
Outside-EU Algorithms May Be Regulated
Developers of high-risk algorithms that are used outside the EU may nonetheless find themselves with a compliance liability if those algorithms process data obtained from EU member nations. The EC said it was concerned about efforts to circumvent these compliance obligations, although the requirement may be waived when the outside-EU entity is another national government or an international non-governmental organization.
This provision suggests that the developer of an algorithm that has obtained market authorization in a non-EU jurisdiction may have to obtain a CE mark even if the developer seeks to do nothing more than process clinical trial data from a multi-national study with sites located both within and outside the EU.
Chapter 4 of the AIA takes up the questions of notifying authorities and NBs, with the former tasked with setting up member state-specific requirements for certifying the NBs that will evaluate AI systems. Applicant NBs will have to go through a certification process that includes documentation of expertise, but there is little detail as to whether certification under the regulations promulgated for the AIA will affect or overlap with the requirements for NBs under the MDR or the IVDR.
Chapter 5 of the AIA provides a number of details about conformity assessment procedures, one of which is essentially advisory in nature. The EC states that it reserves the right to adjust the terms of conformity assessment requirements when such changes “become necessary in the light of technical progress.” Precisely what would constitute that degree of technical progress is not clear, and the net effect of the AIA is to introduce a high level of uncertainty for those in the med tech space, who are already dealing with the massively labor-intensive MDR and IVDR implementations.