The FDA’s regulatory approach to computer systems used in device manufacturing hasn’t drawn the same level of attention as software as a medical device, but that doesn’t mean validation of these systems hasn't wanted for controversy. The agency is working on a solution that abandons the traditional validation framework in favor of a computer software assurance (CSA) approach, which would allow device manufacturers to verify system functionality without generating reams of costly and sometimes irrelevant documentation.
Least Burdensome or Regulatory Beast of Burden?
For more than two decades, device makers have worked under an approach spelled out in the Jan. 11, 2002, FDA guidance on the general principles of software validation, the framework commonly known as computer systems validation (CSV). This guidance, a rework of a legacy document from 1997, invokes the least burdensome approach and states that any software used in the production process or as part of the establishment’s quality system is subject to the same requirements as medical device software. Among these are the design validation requirements of 21 CFR 820.30(g) and the automated process requirements of 21 CFR 820.70(i).
Interestingly, the 2002 guidance states that its provisions are harmonized with ISO 8402:1994, a now-defunct standard that the International Organization for Standardization withdrew when the ISO 9000:2000 vocabulary standard superseded it. One of the key aspects of ISO 8402 was that it defined verification and validation as separate terms, although software engineering texts of the era often treated the terms interchangeably, or joined them into a vaguely defined approach to overall system evaluation, sometimes described as verification, validation and testing.
The 2002 FDA guidance requires evaluation of software in a simulated use environment, but the agency acknowledged that software verification and validation are difficult. This is because “a developer cannot test forever, and it is hard to know how much evidence is enough,” the guidance states.
Among the agency’s concerns at the time were that:
- Software, even that developed with relatively small portions of code, can be complex and difficult to understand;
- Changes to software can unintentionally introduce new hazards in the absence of exhaustive testing; and
- Software failures can occur without warning, especially given the prospect that program branching can hide defects until the software is in widespread use.
The guidance states that these considerations suggest that the development process for software “should be even more tightly controlled than for hardware,” a statement that seems to belie the agency’s least burdensome claim. Section 4.7 of the guidance states, for instance, that any change to the software should be evaluated not just for its impact on the target software function, but also for its impact on the entire system. This would entail software regression testing and possibly evaluation by a third party, although this “independence of review” concept is also acknowledged to be impracticable at times.
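As a minimal sketch of what Section 4.7’s system-level expectation looks like in practice, the Python fragment below checks a changed helper function both in isolation and through a downstream consumer. The function names, record format, and dates are invented for illustration; they do not come from the guidance.

```python
# Hypothetical illustration: a change to one helper (parse_lot_date) is
# exercised through the full record-processing path, not just in isolation.
from datetime import date

def parse_lot_date(raw: str) -> date:
    # The "changed" function: now accepts both "YYYY-MM-DD" and "YYYYMMDD".
    raw = raw.strip()
    if "-" in raw:
        y, m, d = raw.split("-")
    else:
        y, m, d = raw[:4], raw[4:6], raw[6:8]
    return date(int(y), int(m), int(d))

def build_lot_record(lot_id: str, raw_date: str) -> dict:
    # Downstream consumer of parse_lot_date -- part of the "entire system".
    d = parse_lot_date(raw_date)
    return {"lot": lot_id, "released": d.isoformat(), "expired": d.year < 2020}

def regression_suite() -> bool:
    # Unit-level checks of the changed function...
    assert parse_lot_date("2021-05-29") == date(2021, 5, 29)
    assert parse_lot_date("20210529") == date(2021, 5, 29)
    # ...plus a system-level check confirming downstream behavior is unchanged.
    rec = build_lot_record("A-100", "2021-05-29")
    assert rec == {"lot": "A-100", "released": "2021-05-29", "expired": False}
    return True
```

The point of the sketch is the second half of the suite: the unit tests alone would satisfy a narrow reading of the change, but the downstream check is what the guidance’s whole-system regression expectation demands.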
Guidance Agenda for FY 2020 Includes CSA Draft
Among the items in the fiscal year 2020 guidance agenda for the FDA’s Center for Devices and Radiological Health is a draft guidance for CSA. Whether the agency will be able to push the draft out in the remainder of FY 2020 depends largely on the evolution of the COVID-19 pandemic, although the software assurance guidance does appear on the A list for draft guidances rather than the relatively lower priority B list.
In a May 29, 2020, series of videos, Francisco (Cisco) Vicenty, the FDA’s program manager for the Case for Quality (CfQ) program, said industry’s reluctance to move away from a documentation-driven approach was often rooted in the absence of an explicit FDA policy statement. Validation of a system can consume as much as six months, but Vicenty also pointed to several persistent issues with CSV. Among these is that CSV:
- Has become focused largely on generating evidence for auditors rather than on ensuring the reliable performance of computer systems;
- Has become functionally synonymous with documentation and delay, and consequently is often seen as a necessary evil rather than as a value-added activity; and
- Has imposed enough drag on innovation in this space that life science industries now lag behind other industries in their use of modern CSA methods.
Stephen Cook, vice president for quality and computer compliance at the Compliance Group, said the focus should be on patient safety and product quality rather than documentation of exhaustive system testing. Cook also said a 3,500-page test case is an example of overkill, adding that CSA can cut 80% of the system testing paperwork if the manufacturer emphasizes unscripted and ad hoc testing, which is said to be more reflective of real-world usage of the system.
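One way to picture the unscripted, ad hoc testing Cook describes is property-style randomized probing: rather than a scripted, step-by-step test case, randomized inputs exercise the system the way varied real-world usage would, while a few invariant properties are asserted on every run. The sketch below is a hypothetical illustration in Python; the function under test and its properties are invented for this example.

```python
# Hypothetical sketch of "unscripted" testing: randomized inputs stand in
# for varied real-world usage, and properties (not scripted steps) are checked.
import random

def normalize_device_id(raw: str) -> str:
    # System under test (invented): canonicalize a device identifier.
    return raw.strip().upper().replace(" ", "-")

def ad_hoc_probe(runs: int = 200, seed: int = 42) -> bool:
    rng = random.Random(seed)
    fragments = ["dev", "DEV", " dev ", "model 3", "a b c"]
    for _ in range(runs):
        raw = rng.choice(fragments) + str(rng.randint(0, 999))
        out = normalize_device_id(raw)
        # Properties that must hold for ANY input, scripted or not:
        assert out == out.strip()                # no stray whitespace
        assert " " not in out                    # spaces canonicalized
        assert out == normalize_device_id(out)   # normalization is idempotent
    return True
```

A scripted test case would enumerate specific inputs and expected outputs page by page; the property-based probe above covers a much wider input space with a fraction of the documentation, which is the trade-off CSA proponents point to.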
The emphasis is on critical thinking about the system rather than on the conduct of compliance-driven activities for the sake of demonstrating compliance. One of the differences between CSA and CSV is that in this new approach, a device maker can reduce its workload by citing any system testing performed by the vendor of the software. The focus for the device maker should be on the system’s intended use and on the impact of the system (or any changes to it) on the patient or user.
Cook recommended that companies steer away from conventional failure mode and effects analysis (FMEA) risk assessment models because “they don’t add value. You’re writing the requirements in the negative,” he said, adding that a test case should be drafted so that anyone unfamiliar with the system, including an auditor, could conduct it.
The FDA has teamed up with device makers to establish how CSA would work, and the industry team for the shift from validation to assurance includes a number of major players, including Medtronic plc. One of the test cases used to develop the concepts that will appear in the draft guidance is for Zoll Medical’s LifeVest wearable defibrillator, which provides some insight into the impact of a switch from validation to assurance. Zoll said the switch to CSA cut paper production from more than 38,000 pages per week to 500 pages per week.
The company also indicated that the validation team often absorbs the brunt of complaints regarding extended implementation timelines, and consequently members of a company’s IT staff may become wary of engaging in CSV. With the leaner CSA approach, process validation time was cut from 38 days on average to seven days, while service defect tracking fell from 35 days to three.
CSV is not widely seen as particularly compatible with automation, which is one of the primary reasons for steering away from traditional validation. Whether the FDA will be able to get the CSA draft guidance out for comment by the end of the current fiscal year is perhaps secondary, as the agency will surely act on the effort sometime in FY 2021 at the latest. In the meantime, device manufacturers may want to look at their IT installations and consider the cost savings that may accrue with CSA. The FDA’s participation in these pilots makes clear the agency is not interested in compliance by rote, providing an opening by which industry can jettison a large cost center that offers at best a meager return on investment.