FDA’s New Playbook for AI in Medical Devices: What Developers Need to Know
Artificial intelligence (AI) is transforming healthcare by enhancing diagnostics, personalizing treatments, and improving patient engagement. But AI’s ability to learn, adapt, and evolve over time poses unique challenges for regulators. In response, the U.S. Food and Drug Administration (FDA) is building a flexible, forward-looking framework to govern AI/ML-enabled software as a medical device (SaMD).
For device developers, this means success isn't just about building great tech; it's about ensuring patient safety and earning user confidence by designing for regulatory alignment and FDA compliance from the very start. This article breaks down the FDA's latest AI guidance for 2025 and what it means for innovators navigating the complex journey from concept to commercialization.
A Shift Toward Lifecycle Oversight: The FDA’s TPLC Approach
The FDA has moved away from regulating AI/ML devices as static tools. Instead, its Total Product Lifecycle (TPLC) framework enables more dynamic oversight—recognizing that modern algorithms evolve with new data.
Key features of the TPLC model:
Pre-Market Expectations: Clear protocols for model training, validation, and clinical context.
Post-Market Monitoring: Systems to detect and respond to real-world performance changes.
Predetermined Change Control Plans (PCCPs): Structured pathways for making post-clearance software updates without triggering a new 510(k), De Novo, or PMA.
This approach allows regulators to support innovation—without compromising on safety, transparency, or accountability.
Guidance Shaping the Future of AI-Enabled SaMD
The FDA’s regulatory roadmap has evolved over time. Here’s a look at the most relevant guidance documents to date.
1. AI/ML-Based SaMD Action Plan (Jan. 2021)
This foundational plan outlined the FDA’s five key priorities for AI regulation:
Good Machine Learning Practice (GMLP)
PCCPs for regulatory flexibility
Real-world performance monitoring
Transparency to users and clinicians
International standards collaboration
Takeaway: Developers should design AI/ML-enabled medical devices with adaptability in mind—and have governance structures in place that facilitate updates, retraining, and lifecycle change.
2. Good Machine Learning Practice for Medical Device Development: Guiding Principles (Oct. 2021)
Jointly released by the FDA, Health Canada, and the UK's Medicines and Healthcare products Regulatory Agency (MHRA), this document introduced 10 guiding principles for AI/ML-enabled SaMD. The principles promote safety and efficacy, serving as a foundation for developing Good Machine Learning Practices (GMLP).
The principles include:
Lifecycle Integration: Emphasizes the importance of multidisciplinary expertise throughout the Total Product Life Cycle (TPLC), ensuring that AI/ML models are integrated effectively into clinical workflows and address clinically meaningful needs.
Data Integrity and Security: Highlights the necessity of implementing good software engineering practices, data quality assurance, data management, and robust cybersecurity measures to maintain data authenticity and integrity.
Representative Data: Stresses the need for clinical study participants and datasets to be representative of the intended patient population, promoting generalizable performance and identifying potential underperformance scenarios.
Independent Testing: Advocates for training and test datasets to be independent of each other, ensuring unbiased evaluation of model performance.
Reference Standards: Encourages the use of best available methods for developing reference datasets, ensuring that clinically relevant and well-characterized data are collected and understood.
Tailored Model Design: Suggests that model design should be suited to the available data and support the active mitigation of known risks, such as overfitting and performance degradation.
Human-AI Collaboration: Focuses on the performance of the human-AI team, ensuring that human factors considerations and interpretability of model outputs are addressed.
Clinically Relevant Testing: Recommends that testing demonstrates device performance under clinically relevant conditions, including consideration of intended patient populations and clinical environments.
User Transparency: Advocates for providing users with clear, essential information about the product's intended use, performance, limitations, and integration into clinical workflows.
Ongoing Monitoring: Emphasizes the need for monitoring deployed models for performance and managing risks associated with re-training, such as dataset drift and unintended bias.
Takeaway: These GMLP principles form the bedrock of credible adaptive AI model development, providing a comprehensive framework guiding design, development, and deployment. Embedding these principles early in the device development process can facilitate smoother regulatory approvals and enhance credibility and trust among users.
3. Transparency for Machine Learning-Enabled Medical Devices: Guiding Principles (June 2024)
Released in collaboration with Health Canada and the UK’s MHRA, this guidance zeroes in on one of the most critical—and often most challenging—aspects of AI in healthcare: transparency throughout the product lifecycle, from development and deployment to post-market monitoring.
Highlights include:
Audience-Specific Communication: Transparency should be tailored to the knowledge and needs of different stakeholders, including clinicians, patients, and regulators.
Explainability of Outputs: Manufacturers should design AI tools that produce interpretable outputs or include mechanisms to explain decision-making, especially when outputs influence clinical care.
Clear Labeling and Instructions: Product labeling should reflect the capabilities, limitations, and appropriate use of the device, including whether and how it adapts over time.
Ongoing Transparency Post-Market: Real-world performance, changes made under a PCCP, and emerging risks should be communicated in a timely and accessible manner.
Takeaway: Transparency is no longer optional; it’s an essential design and regulatory requirement. Developers should prioritize user-focused explainability and proactive communication strategies from the outset.
4. Marketing Submission Recommendations for a Predetermined Change Control Plan – Final Guidance (Dec. 2024)
This final guidance outlines how developers should incorporate a Predetermined Change Control Plan (PCCP) into premarket submissions for AI/ML-enabled SaMD functions. A PCCP allows manufacturers to implement certain post-market software modifications without submitting a new 510(k), De Novo, or PMA as long as the changes remain within the scope of the plan.
Key components of a compliant PCCP include:
A detailed description of anticipated modifications
A Modification Protocol for validating and deploying the changes
A risk-based Assessment of Impact ensuring ongoing safety and effectiveness
Takeaway: A well-structured PCCP can accelerate your product’s iterative improvements without regulatory delays, making it a strategic imperative to support innovation.
5. Draft Guidance on AI-Enabled Device Software Functions (Jan. 2025)
This draft guidance builds on prior FDA policies, setting forth detailed recommendations on managing AI-enabled device software functions across the Total Product Life Cycle.
Areas of focus include:
Lifecycle-Based Oversight and proactive risk management
PCCP Integration in 510(k), De Novo, and PMA submissions
Design and Validation Expectations for data representativeness, model training, testing protocols, and clinical relevance
Performance Monitoring and Modification Protocols, providing additional guidance on tracking device behavior in deployment, including detecting data and performance drift
Takeaway: This draft guidance may serve as a blueprint for aligning product development, regulatory submissions, and quality systems with FDA’s evolving expectations for safe, learning-capable devices.
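To make the drift-monitoring expectation concrete, here is a minimal sketch of one common statistical check, the population stability index (PSI), applied to model output scores. This is purely illustrative: the FDA guidance does not prescribe any particular metric or threshold, and the 0.2 alarm level used below is a common industry convention, not a regulatory requirement.

```python
import math
import random

def population_stability_index(baseline, current, bins=10):
    """Compare a deployed model's score distribution against its validation
    baseline. PSI near 0 means the distributions match; values above ~0.2
    are conventionally treated as a signal that drift review is warranted."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins

    def proportions(values):
        counts = [0] * bins
        for v in values:
            if lo <= v <= hi:  # values outside the baseline range fall out of all bins
                counts[min(int((v - lo) / width), bins - 1)] += 1
        # Floor at a tiny value so the log ratio below is always defined
        return [max(c / len(values), 1e-6) for c in counts]

    b, c = proportions(baseline), proportions(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

# Simulated model confidence scores (illustrative data only)
random.seed(0)
baseline = [random.gauss(0.7, 0.1) for _ in range(5000)]   # validation-time scores
stable = [random.gauss(0.7, 0.1) for _ in range(5000)]     # deployment, same population
shifted = [random.gauss(0.55, 0.15) for _ in range(5000)]  # deployment, drifted population

print(population_stability_index(baseline, stable))   # low: no drift flagged
print(population_stability_index(baseline, shifted))  # high: triggers review
```

In a real quality system, a check like this would run on a schedule against logged production scores, with alarm thresholds, escalation paths, and any retraining response defined in advance, ideally within the scope of a PCCP.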
6. Draft Guidance on AI for Regulatory Decision-Making in Drugs/Biologics (Jan. 2025)
While directed at pharma sponsors, this guidance offers cross-cutting insights for AI-enabled device developers—especially those using AI to generate evidence in support of regulatory decisions.
Key insights include:
Model Credibility Planning that defines the purpose, limitations, and context of use for any AI model.
Robust Documentation ensuring complete documentation of data sources, model training, performance metrics, and evaluation methods.
Lifecycle Monitoring that includes post-deployment oversight.
Takeaway: Although written for pharma, this guidance offers a useful lens into how the FDA may apply similar credibility and documentation standards to AI-enabled medical devices.
Practical Steps for Device Innovators
If you’re developing an AI/ML-enabled medical device, these steps will help you align with FDA’s current expectations:
Start with a Strong PCCP: Treat your PCCP as a regulatory blueprint. Be precise, risk-aware, and conservative in scope. Vague or overly broad plans will trigger pushback from FDA reviewers.
Build for Transparency: Be prepared to explain how your algorithm works, how it was trained, and what data limitations exist. Black-box systems face heightened scrutiny.
Commit to Real-World Monitoring: Set up clear systems to detect and respond to performance degradation, safety issues, or changes in clinical populations.
Maintain Robust Documentation: Traceability from data lineage to software version control should be a core part of your quality management system. Assume every update could require validation.
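As a concrete illustration of the traceability point above, a development team might tie each model version to a cryptographic fingerprint of the exact training data it saw. The sketch below shows one way to do that with Python's standard library; the field names and schema are hypothetical, not an FDA-specified format.

```python
import hashlib
import json
from datetime import datetime, timezone

def lineage_record(model_version, dataset_bytes, training_notes):
    """Build an audit-ready record linking a model version to the exact
    dataset contents used to train it, via a SHA-256 fingerprint."""
    return {
        "model_version": model_version,
        "dataset_sha256": hashlib.sha256(dataset_bytes).hexdigest(),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "training_notes": training_notes,
    }

record = lineage_record(
    model_version="1.4.2",
    dataset_bytes=b"patient_id,feature,label\n...",  # in practice, the dataset file's contents
    training_notes="Retrained under PCCP Modification Protocol step 2",
)
print(json.dumps(record, indent=2))
```

Because the hash changes if even one byte of the dataset changes, a record like this lets reviewers confirm, years later, exactly which data produced a given model version, the kind of traceability a quality management system should make routine.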
The Big Takeaway: Innovation with Compliance
The FDA’s evolving approach is reshaping how innovation happens in healthcare. It reflects a growing recognition that thoughtful design, proactive planning, and ongoing monitoring are essential to building a safe, sustainable, and successful product.
At Nixon Law Group, we help medical device developers and health tech companies build future-ready strategies for FDA compliance. Whether you’re developing a SaMD product, preparing a PCCP, or seeking post-market compliance guidance, our team is ready to support you in aligning innovation with regulation.
Need help applying FDA’s evolving AI guidance to your product?
Contact Nixon Law Group for strategic legal and regulatory counsel now.