WHO Enters AI Debate with Report on Guiding Principles

The World Health Organization (WHO) has published an extensive report on the use of artificial intelligence (AI) in health care, which sets forth a number of non-binding general principles. While the authors urge stakeholders not to get carried away with expectations for AI, they also raise several difficult questions, such as how the principle of transparency should be applied and what hazards regulation itself may pose.

As readers of our blog know, the FDA is not alone in attempting to assemble a regulatory regime tailored for AI, but some of these efforts have stalled. This is certainly due in part to the COVID-19 pandemic, but the FDA concluded earlier this year that it needed to formulate a more comprehensive approach to the subject than it had previously proposed.


The WHO paper states that approximately 100 proposals for AI principles have been published just in the past 10 years, highlighting both the widespread scrutiny of the field and the probability that differentiated regulatory requirements will emerge. At 165 pages, the paper is organized into nine major themes, such as laws and policies, ethical principles, and the elements needed to regulate AI in health care uses.


Section 9.5 states that a WHO working group (WG) has been assembled to address several regulatory considerations, including the use of AI in drug development. This WG will issue a report on these questions at an unspecified date later in 2021.


Transparency, Explainability Requirements Carry Risks


WHO acknowledged that overly burdensome regulation could act as a drag on innovation, but it also pointed to the proliferation of algorithms that offer no therapeutic or other health benefit as a serious ethical problem. One of the thornier questions, that of explainability, is complicated by the fact that some therapies are introduced into medical practice before their mode of action is fully understood.


One of the issues associated with demands for transparency and explainability is that such disclosures could encourage off-label use, potentially including inappropriate off-label use. Another possible problem is that requiring the developer to explain each recommendation an algorithm makes could hamper adoption of algorithms that outperform those already on the market.


While clinical trials might answer some of these questions – and can aid in the effort to root out any unintentional bias – such studies will not always adequately predict the performance of a learning algorithm, WHO stated. Clinical trials also may not suffice to address the increasing use of algorithms that are personalized toward smaller populations, an increasingly common practice among local hospital groups. Such geographically restricted uses of AI may complicate efforts to amass enough enrollment to provide the degree of statistical certainty ordinarily expected of a randomized, controlled trial (RCT).


WHO also stated that the demand for transparency could lead to a futile examination of the algorithm’s source code. Among the recommendations offered to address explainability and transparency are:


  • Development of transparency requirements that properly account for the developer’s intellectual property rights;
  • The use of RCTs to fully validate the algorithm’s utility; and
  • Establishment of regulatory incentives for developers to identify, monitor and remediate any safety concerns during product development, along with mandatory postmarket surveillance requirements.


While the paper is not prescriptive, WHO states that it is exploring a collaboration with the Organization for Economic Cooperation and Development (OECD) to review existing policies and laws across the globe. This kind of cooperation may be followed by other collaborations, such as with agencies of the U.N., which may in turn aid WHO’s interest in developing model legislation for use in nations with limited experience in this area of policymaking.


Section 6 of the paper offers several cautionary statements regarding the utility of AI, such as the risk that a failure to meet unrealistic expectations could trigger a backlash. Privacy is a concern as well, and the paper makes reference to inappropriate uses of data, such as in contact tracing. The limitations of patient data de-identification were highlighted by a class-action lawsuit filed in 2019 against Google of Mountain View, Calif., and the University of Chicago; the case was decided in favor of the defendants in September 2020, although the outcome did not turn on the question of data re-identification. Re-identification of de-identified data is just one of a number of potential legal hazards associated with AI.


Liability a Largely Unexplored Question


Section 8 of the paper addresses liability regimes in general and touches on five themes, including:


  • Assignment of liability for the clinical use of AI;
  • The scope of liability for developers, including when the algorithm is used off-label; and
  • Preemption of liability by regulatory law.


The paper states that the question of the liabilities of users and developers of health care algorithms is very much in flux across the globe. Section 5.4 of the paper recommends a no-fault approach to accountability for any mishaps associated with the use of AI, an approach that is assumed to encourage all actors to conduct themselves responsibly in developing and using an AI product. Section 8.3 transposes this no-fault approach to liability, calling for developers to obtain product liability insurance or to pay into a national patient compensation fund.


New Zealand is said to provide government compensation to patients who forgo the right to seek damages unless reckless conduct is alleged. However, WHO notes that not all national legal systems have applied traditional product liability theory to software and algorithms, even though strict liability is applied in a number of nations, including the U.S.


Section 8.2 addresses the intended use question, which has proven controversial with regard to the FDA, but the WHO paper also points to the predicament in which a machine learning algorithm has evolved at the site of use in a manner that does not reflect the developer’s initial design. The paper states that assigning liability to the developer in this instance might suppress innovation, although the learned intermediary doctrine could insulate the developer in some cases. WHO states that some European Union (EU) member states provide a “development risk defense” that allows developers to avert claims of strict liability in some scenarios.


Section 8.4 addresses the concept of preemption, which in the U.S. applies to class III devices routed through the premarket approval (PMA) channel, but not to class II devices reviewed under the 510(k) regime. Policies related to preemption and liability are less well developed in low- and middle-income countries (LMICs), a particularly sensitive consideration given the general lack of resources in many of these nations. The paper states that WHO should offer assistance to such nations in assessing AI technologies and in evaluating their policy options where liability is concerned. However, WHO also suggests that it should take part in the establishment of international norms and legal standards for product liability, a development that could prove useful, at least where harmonization is concerned.

