
FDA Drives ‘Stake In Ground’ With Glossary For AI, Digital Health Terms

October 31, 2024

By Shawn M. Schmitt
Communications Specialist, Enzyme

The US Food and Drug Administration (FDA) has taken a critical step toward ensuring that makers of medical devices are all on the same page when it comes to defining terms related to artificial intelligence (AI) and digital health, says an ex-agency official.

Posted online in late September, the FDA Digital Health and Artificial Intelligence Glossary is the agency’s way to “put a stake in the ground upon which the FDA can now build frameworks and industry guidance documents, among other regulatory and policy items related to AI and digital health,” says Vizma Carver, who in 2018 advised FDA staff and leaders as a Digital Health Expert. “It was vitally important for the FDA to begin, though, by establishing the AI nomenclature for medical products. This glossary is the beginning of the conversation to ask, ‘OK, what are the different ways that we use AI in medical devices at the conceptual level?’”

Carver, who has held leadership positions in Philips’ Connected Care unit and is currently Founder and CEO of her own Virginia-based consulting firm, added that establishing a “basic foundational nomenclature” was important as manufacturers work to develop novel devices and draft product submissions to the FDA. “This glossary allows companies to use the agreed-upon language to say, ‘This is how we came about creating our product,’ as they work their way through the agency’s review process.”

The FDA says it gathered glossary definitions from international consensus standards, published literature, and other “public sources.”

The MedTech industry shouldn’t be surprised that the agency didn’t publish such a glossary long ago because, as Carver pointed out, AI and digital health have changed considerably in recent years. She noted that generative AI tools like ChatGPT marked a tipping point for many who were perhaps in denial about how much of an impact AI would have on companies and products, as well as on the role of regulatory professionals in general.

“Before, it was very much an attitude of, ‘Yeah, everyone knows AI is coming, we know it’s there. We’re looking at it and evaluating it,’ but roughly three years ago AI really started to pivot, with another very strong pivot coming with both the Presidential Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence [signed by President Joe Biden in October 2023], and the release of ChatGPT,” Carver said. “This is when agencies became required to put forth policies, and in general people started to realize that the value of AI can be good. But before that, the question had been, ‘What’s the intrinsic value of AI?’ But now enough people have experienced AI to get behind it, and that’s likely a big reason why the FDA has released the glossary now.”

It also probably took a long time for experts within the agency to agree on specific definitions, as well as which terms to include – and not include – in the glossary.

“Up until recently, industry was still debating what constitutes SaMD [Software as a Medical Device], because the reality is that software has been in medical devices since the 1960s,” Carver said. “So I can only imagine how long it took the FDA to get these terms together on one page. Within the agency are really smart people who are willing to question and scrutinize everything – in a good way – asking, ‘What are we doing? Why are we doing this? How do we define it? Has everything been taken into consideration?’

“So that’s why it likely took so long for this glossary to be released,” she added. “The FDA has set the foundation, and that’s what this glossary is.”

 

‘Digital Twin’ And ‘Federated Learning’

Two FDA glossary terms in particular – “digital twin” and “federated learning” – caught Carver’s eye. She surmises that manufacturers will find the agency’s definitions of these terms to be helpful.

“When it comes to digital twin, that conversation has gone around and around for a very, very long time,” Carver said. “What’s really important to the FDA is the bidirectional interaction of the virtual model and the physical twin. So it’s bidirectional, instead of, you create it, and then that digital twin is off on its own. The FDA wants a digital twin to be bidirectional, they want it to be continuous.”
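
To make the bidirectional point concrete, here is a minimal Python sketch of the loop Carver describes, in which a physical device continuously feeds measurements to its virtual model and the model’s recommendations flow back to the device. The device, class, and method names below are illustrative assumptions, not anything taken from the FDA glossary.

```python
# Hypothetical sketch of a bidirectional digital-twin loop: the physical
# device streams measurements into its virtual model, and the model's
# output flows back to the device. All names are illustrative only.

class PhysicalPump:
    def __init__(self):
        self.flow_rate = 5.0  # mL/h, illustrative value

    def read_sensors(self) -> dict:
        return {"flow_rate": self.flow_rate, "pressure": 1.2}

    def apply_setting(self, new_flow_rate: float) -> None:
        self.flow_rate = new_flow_rate


class VirtualTwin:
    def __init__(self):
        self.state = {}

    def update(self, sensor_data: dict) -> None:
        # Physical -> virtual: keep the model in sync with the real device.
        self.state.update(sensor_data)

    def recommend_setting(self) -> float:
        # Virtual -> physical: simulate and suggest an adjustment.
        return max(1.0, self.state.get("flow_rate", 5.0) * 0.95)


def sync_cycle(device: PhysicalPump, twin: VirtualTwin) -> None:
    # One continuous cycle of the bidirectional interaction.
    twin.update(device.read_sensors())              # physical feeds virtual
    device.apply_setting(twin.recommend_setting())  # virtual feeds physical
```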

Meanwhile, federated learning “is a very important entry in the glossary because for organizations that collaborate on a problem set, it allows them to ensure data privacy,” Carver said. “There are many hospitals and universities that work together – where their researchers are working together – so ensuring data privacy is of the utmost importance. So this glossary definition basically says, ‘As long as the data is local and you bring your models together, data privacy concerns will be addressed.’”
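
The privacy property Carver highlights is easiest to see in a federated-averaging sketch: each site trains on its own data locally, and only the resulting model weights are pooled. The linear model, the simple averaging scheme, and the synthetic data below are illustrative assumptions, not the glossary’s prescription.

```python
# Minimal federated-averaging sketch: local data never leaves each site;
# only model weights are shared and averaged. Model and data are illustrative.
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.01, steps: int = 10) -> np.ndarray:
    """One site's gradient-descent update on a linear model; X and y stay local."""
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_weights: np.ndarray, sites: list) -> np.ndarray:
    """Collect each site's locally trained weights and average them."""
    local_models = [local_update(global_weights, X, y) for X, y in sites]
    return np.mean(local_models, axis=0)

# Illustrative use: three hospitals, each holding its own private dataset.
rng = np.random.default_rng(0)
sites = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
weights = np.zeros(3)
for _ in range(5):
    weights = federated_round(weights, sites)
```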

 

What Might The Future Hold?

Carver says she will be “shocked” if the FDA doesn’t eventually add to or update its glossary given that artificial intelligence and digital health are fluid topics that obviously will change over time. As for now, however, she said what’s included in the glossary is a great start.

“As the medical device industry evolves, then more terms will be put into this glossary,” she said. “But I admit, I don’t have a crystal ball. Who knows what the glossary will look like even in, say, a year’s time.”

And while she doesn’t see anything in particular that should be removed from the list, Carver noted that some definitions could stand to be fleshed out a bit more, such as “data drift,” which the FDA says in its glossary can “affect the accuracy and reliability of predictive models.”

“I really appreciated that the FDA acknowledged the need to address data drift, but I didn’t see a lot beyond data drift regarding the quality of data. So that would be one thing that I would want them to put a little bit of effort into,” she said. “It’s the quality of the curated data that really makes the difference between good models and bad models. We can take all the data we want and smoosh it together, but when great curated data is combined with not-that-great curated data, it becomes a problem.”
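
As a rough illustration of the glossary’s point that data drift can “affect the accuracy and reliability of predictive models,” the sketch below compares a feature’s distribution at training time with what a deployed model sees in production. The two-sample Kolmogorov-Smirnov test and the threshold are illustrative choices on my part, not anything prescribed by the FDA glossary.

```python
# Hedged illustration of data drift: flag when production data no longer
# looks like the data the model was trained on. Test and threshold are
# illustrative, not from the FDA glossary.
import numpy as np
from scipy.stats import ks_2samp

def check_drift(train_feature: np.ndarray, live_feature: np.ndarray,
                alpha: float = 0.05) -> bool:
    """Return True if the live data's distribution has drifted from training."""
    statistic, p_value = ks_2samp(train_feature, live_feature)
    return p_value < alpha

# Illustrative use: the production data has shifted upward.
rng = np.random.default_rng(1)
train = rng.normal(loc=0.0, size=1000)
live = rng.normal(loc=0.7, size=1000)
print(check_drift(train, live))  # True -> model accuracy may degrade
```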

Carver went on: “I think the first step for the FDA in creating this glossary was defining a lot of basic terminology and making sure there is agreement on things like ‘model weight,’ ‘model fitting,’ ‘model robustness,’ et cetera. The agency set a framework in there, but it still could use a bit of massaging. But in the end, it’s just terminology, and the AI community is still working on a lot of these things, particularly when it comes to the data quality aspect.”