AI, ML, & Cybersecurity: Here’s What FDA May Soon Be Asking – Design News

FDA has released a number of documents that could help clarify its expectations for artificial intelligence, machine learning, and cybersecurity. These include Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan, published in January 2021; Good Machine Learning Practice for Medical Device Development: Guiding Principles, published in October 2021; and the just-released draft guidance, Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions.

Michelle Jump, vice president of security services for MedSec, and Yarmela Pavlovic, vice president of regulatory strategy for Medtronic, explored these documents during the IME West 2022 session on April 12, “Product Development Innovation in an Age of Regulatory Uncertainty.”

The AI/ML action plan provides a “more tailored regulatory framework for AI/ML,” explained Pavlovic. She referred to FDA’s 2019 discussion paper, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) – Discussion Paper and Request for Feedback, which laid out a “total product lifecycle approach to AI/ML regulations with the understanding that AI/ML products can be iterated much more efficiently and quickly than a typical medical device implant product or something that isn’t software based.” This is “because there is an opportunity to add additional data to training sets on which the products were originally formulated,” she said.

Key to the total product lifecycle approach is “a predetermined change control plan” (PCCP), which describes “all the types of changes the manufacturer intends to make to the product in the foreseeable future and the protocols and success criteria and data sets they will use for evaluating the performance of these products,” she explained.

In answering a question from the audience about PCCPs, Pavlovic explained that “when you think about formulating a plan, you have to start with what the testing package you plan to give to FDA looks like. The level of detail of the PCCP will match the level of detail in the protocol and the test reports in your starting package. What you do in the future to validate a product flows from what you did in the first instance. And then the types of changes that you describe may warrant additional or different types of testing, so be thoughtful about the key scientific questions that are posed by those changes and how can you layer on the right level of evidence to be comfortable.”

The Medical Device Innovation Consortium has a digital health vertical with a workstream looking to build out PCCP examples, she added.

The AI/ML action plan also includes good machine learning practices (GMLP), such as having “good hygiene around data sets and data management practices” as well as “processes to continue to learn about the product as it is used in commercial practice to make sure you are continuing to understand the performance of your product,” explained Pavlovic. 

Also, “algorithmic bias and robustness is an area for further regulatory science development,” she said. The goal would be to ensure the “performance of the product is representative of the intended use population and that we don’t have unintended consequences from the use of our products because of bias present due to the way they were trained or choices of data sets.”

Real-world performance is the last element of the plan, she said, which is the idea that companies would “monitor performance of the product in the wild, such as in clinical use.” But “info sharing agreements with customers” are complex to establish, she said, so companies may need to find other ways to gather such data.

Answering an audience question on FDA’s stance toward AI, Pavlovic said that it is important for companies to make it clear to FDA that they are “taking a rigorous approach to evaluation of a product” and that there are “processes in place to ensure continued performance once the product is on the market.

“We have a responsibility to get these products right so we don’t undermine forward progress. All of us in this room take that responsibility seriously. A couple poorly performing products will set us back,” she added.

Jump, who has been in the digital health space for more than a decade, added that she has watched FDA become more comfortable with software. For instance, FDA wasn’t initially comfortable with remote updates, but now “they are saying, ‘Why aren’t you doing remote software updates?’” she said.

But what FDA may not be comfortable with could be the reasoning behind a product, she said. “If it’s new and innovative . . . get comfortable explaining what you are doing.” When it comes to AI, for instance, FDA is concerned “they see things going into a black box and coming out with an answer,” she explained. “They don’t understand what could change that would result in an unexpected decision that clinicians are trusting.”

Pavlovic and Jump also shared some initial feedback on the brand-new April 2022 draft guidance on cybersecurity, which is much longer than the 2014 guidance. “It may scare people . . . but it is a blessing in disguise,” said Jump. “Follow what this guidance says, because it is what they are going to ask for . . . they are already asking for [it].”

For instance, just as medical device companies must do risk management under other regulations, the agency now expects threat modeling, she explained.

“Get your comments in now,” Jump added. “I don’t think they’re going to change a lot.” Said Pavlovic: “It is very consistent with what they’ve been saying.”

Jump pointed out a few new terms in the guidance, such as secure product development framework (SPDF). “You should have a secure design process—which is a new name for an old concept. A good secure design is much more effective than constantly going in and patching things. FDA really wants to push better, secure design approaches.”

She also clarified the scope of FDA’s regulatory approach. “You do not have to be a connected product to be applicable to cybersecurity—if it has software or programmable logic, you are under scope of the guidance.” She added that even if a company is submitting a change to hardware, FDA may still ask about cybersecurity.
Source: https://www.designnews.com/artificial-intelligence/ai-ml-cybersecurity-heres-what-fda-may-soon-be-asking