Code Red: The FDA’s Artificial Intelligence/Machine Learning Action Plan Poses Potential Risks for Medical Device Makers

“The gathering and transmitting of personal data represents a major cyber threat to medical devices and must be thought through with extreme care.”
Q: The FDA’s stance on a regulatory framework for artificial intelligence and machine learning (AI/ML) software as a medical device is continuously evolving. Could you explain the history?

A: Artificial intelligence (AI) software is “adaptive,” meaning that its algorithms continuously learn from new data; for this reason, it is sometimes referred to as machine learning (ML). Newly designed medical devices that incorporate AI/ML by definition do not have a final “locked” design capable of a single FDA review. In April 2019, the FDA issued a white paper, Artificial Intelligence and Machine Learning in Software as a Medical Device, that asked for stakeholder feedback and public comment on a proposed new regulatory approach called the “Total Product Life Cycle.” This framework included four general principles to balance the benefits and risks of medical devices that continuously change.

The framework proposed by the FDA raises interesting questions about its potential impact on traditional product liability defenses that presume a fixed design, notably federal preemption and the duty to warn under the learned intermediary doctrine. For example, some medical devices found by the FDA to be “safe and effective” enjoy federal preemption, a bar against state law tort claims to the contrary. If a design is constantly changing due to AI/ML, can courts still rely on the FDA’s original determination and dismiss claims under the traditional legal rules governing preemption? Similarly, a manufacturer’s duty to warn of known risks typically can be fulfilled by providing the warning not to the patient directly, but to the physician as the “learned intermediary” between the patient and the product manufacturer. If a medical device is no longer controlled by a human “learned intermediary” physician, but instead by the AI/ML itself, does the manufacturer now owe a duty to warn the patient directly, eviscerating the traditional learned intermediary defense?

Q: In January 2021, the FDA announced its first Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. What does that entail?

A: The FDA’s Artificial Intelligence and Machine Learning (AI/ML) Software as a Medical Device Action Plan outlines the five actions the agency intends to take in response to stakeholder feedback on its April 2019 white paper. This approach includes: 1) further developing the proposed regulatory framework, including through issuance of draft guidance on a predetermined change control plan (for software’s learning over time); 2) supporting the development of good machine learning practices (GMLP); 3) fostering a patient-centered approach, including device transparency to users; 4) developing methods to evaluate and improve machine learning algorithms; and 5) advancing real-world performance monitoring pilots.

Q: Why could the FDA’s Action Plan matter to medical device makers?

A: As we previously discussed in Code Blue: Cybersecurity Vulnerabilities for Medical Device Makers Require Urgent Care, medical device makers must increasingly guard against cyberattacks. Makers of AI/ML medical devices will need to be especially careful to ensure proper security, particularly where devices share health data remotely, as may be necessary to fulfill the FDA’s fifth action: advancing real-world performance monitoring. The gathering and transmitting of personal data represents a major cyber threat to medical devices and must be thought through with extreme care.
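To make that last point concrete, the sketch below shows, in Python, one basic precaution a device maker might take when transmitting real-world performance data: sending only de-identified algorithm metrics, and only over an encrypted, certificate-verified channel. This is a minimal illustration under stated assumptions; the endpoint URL, payload fields, and function name are hypothetical and are not drawn from the FDA’s Action Plan or any particular device.

```python
# Illustrative sketch only. The endpoint URL, payload fields, and
# function name are hypothetical; nothing here comes from the FDA
# Action Plan or any specific device maker's implementation.
import json
import ssl
import urllib.request

def send_performance_report(payload: dict) -> int:
    """POST a de-identified performance report over TLS."""
    # Verify the server certificate against the system trust store
    # and refuse protocol versions older than TLS 1.2.
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2

    request = urllib.request.Request(
        "https://updates.example-devicemaker.com/rwp",  # hypothetical endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, context=context) as response:
        return response.status

# Data minimization: report algorithm metrics, not patient identifiers.
report = {"model": "X-100", "algorithm_version": "2.3.1", "sensitivity": 0.94}
```

The design point of the sketch is that real-world performance monitoring need not involve raw personal data at all; limiting transmissions to aggregate algorithm metrics narrows the very attack surface described above.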

Source: https://www.jdsupra.com/legalnews/code-red-the-fda-s-artificial-9430606/