FDA's draft AI guidance for medical device software: what validators need to know

The FDA has published draft and final guidance on AI in medical device software faster than most quality teams can update their SOPs. The regulatory direction is set; the window to bring your QMS in line ahead of it is narrow.

If you're a Validation Engineer, the practical question isn't "what will change?" It's "what do I need to start documenting now to avoid rewriting my DHF in 18 months?"

Where the guidance currently sits

The FDA has issued several converging documents:

- The 2023 Marketing Submission Recommendations for a Predetermined Change Control Plan for AI/ML-Enabled Device Software Functions, finalized in late 2024.
- The 2025 draft guidance on Lifecycle Management of AI-Enabled Device Software Functions.
- The 2025 draft Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products.

The shared throughline is lifecycle management. The FDA does not view AI as a one-time validation event. It expects continuous monitoring, predetermined change control and ongoing performance verification.

That's a different mental model from traditional medical device software, where the validated state is the released state. AI doesn't sit still. The validation framework can't either.

The Predetermined Change Control Plan (PCCP)

The PCCP is the most consequential development for validators. It allows manufacturers to pre-specify the types of model updates they intend to make, the verification methods for those updates and the impact assessment criteria, all upfront in the original submission.

Once approved, the manufacturer can deploy changes within the PCCP scope without resubmitting to FDA. That's a meaningful shift. The traditional medical device path treated every consequential change as a new submission event.

What this means in practice for your QMS:

- A change taxonomy that distinguishes PCCP-scope model updates from out-of-scope design changes.
- Pre-approved verification protocols for each in-scope change type, executable without a new submission.
- Impact assessment criteria that decide, before deployment, whether an update stayed within the pre-specified bounds.

If your QMS treats every model update as a full design change, you're leaving the value of the PCCP on the table.
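The routing decision itself is mechanical once the scope is written down. A minimal sketch, with an entirely hypothetical PCCP scope table — the change-type names, verification methods and the AUC acceptance criterion are invented for illustration, not taken from any real submission:

```python
from dataclasses import dataclass

# Hypothetical PCCP scope: change types, verification methods and
# acceptance criteria pre-specified in the original submission.
PCCP_SCOPE = {
    "retrain_same_architecture": {"verification": "holdout_revalidation",
                                  "min_auc": 0.92},
    "threshold_tuning":          {"verification": "sensitivity_analysis",
                                  "min_auc": 0.92},
}

@dataclass
class ModelChange:
    change_type: str
    validated_auc: float

def route_change(change: ModelChange) -> str:
    """Route a model update: PCCP fast path vs. full design change."""
    spec = PCCP_SCOPE.get(change.change_type)
    if spec is None:
        # Not pre-specified: falls back to the traditional path.
        return "out_of_scope: full design change + new FDA submission"
    if change.validated_auc < spec["min_auc"]:
        return "failed_criteria: do not deploy; investigate per QMS"
    return f"in_scope: deploy after {spec['verification']} is documented"

print(route_change(ModelChange("retrain_same_architecture", 0.94)))
print(route_change(ModelChange("new_architecture", 0.95)))
```

The point of the sketch is the branch structure: a QMS that only has the `out_of_scope` branch is ignoring the PCCP it paid to get approved.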

What "validation" means for AI-enabled software

Traditional IEC 62304 validation assumes deterministic behavior. The same input produces the same output. Test once, document once, validated.

AI breaks that assumption. A trained model produces probabilistic outputs. Performance on new data drifts. Edge cases emerge in production that weren't in the validation dataset.

The FDA's expectation, made explicit in the lifecycle guidance, is that validation extends into post-market. You need:

- Real-world performance monitoring against predefined acceptance criteria, not just complaint handling.
- Drift detection that flags when production inputs diverge from the validation dataset.
- A statistical plan for ongoing verification, with sample sizes, endpoints and re-test triggers defined before release.

This is closer to a clinical trial than a traditional software validation. The skills your team needs are statistical, not just procedural. That's the part nobody's quite ready for.
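The statistical machinery here is standard, though. A minimal sketch of one common drift check, the Population Stability Index (PSI), comparing validation-time model scores against production scores — the sample data and the 0.25 threshold are illustrative rules of thumb, not regulatory values:

```python
import math
from collections import Counter

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference (validation-time)
    score distribution and a production score distribution."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def frac(xs):
        # Fraction of scores per bin, floored at 1e-6 to keep logs finite.
        counts = Counter(min(int((x - lo) / width), bins - 1) for x in xs)
        return [max(counts.get(b, 0) / len(xs), 1e-6) for b in range(bins)]
    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Illustrative scores only; real monitoring uses far larger samples.
reference = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
production = [0.15, 0.25, 0.3, 0.45, 0.5, 0.65, 0.7, 0.85]
drift = psi(reference, production)
print(f"PSI = {drift:.3f}, investigate = {drift > 0.25}")
```

Note that PSI is noisy on small samples like these; the procedural point is that the threshold and the response to crossing it are defined up front, in a controlled document.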

The bias and representativeness problem

Auditors are asking harder questions about training data. "How do you know your model performs equivalently across age groups, sex, race and clinical site?" is now a standard inspection question for AI-enabled SaMD.

The expected evidence:

- Documented demographics of the training, tuning and test datasets.
- Subgroup performance analyses across age, sex, race and clinical site, with confidence intervals.
- A rationale in the design record for any subgroup where performance differs.

Most quality teams I've seen haven't built this evidence. Their AI engineering teams have. But the evidence sits in Jupyter notebooks instead of the DHF. That's a procedural fix. Pull the analyses out of engineering and into a controlled record. Add subgroup performance to the DMR. The work is mostly already done; it's just in the wrong filing cabinet.
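Moving that evidence into a controlled record is easier when the analysis is small and reproducible. A minimal sketch of a per-subgroup sensitivity table with bootstrap 95% confidence intervals — the record shape and field names (`label`, `pred`, the grouping key) are assumptions for illustration, not a standard:

```python
import random

def sensitivity(labels, preds):
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    return tp / (tp + fn) if (tp + fn) else float("nan")

def subgroup_report(records, group_key, n_boot=500, seed=0):
    """Per-subgroup sensitivity with a bootstrap 95% CI -- the kind of
    table an auditor expects in the DHF rather than a notebook.
    Each record is a dict with 'label', 'pred' and the grouping field;
    assumes every subgroup contains at least some positive cases."""
    rng = random.Random(seed)            # fixed seed: reproducible record
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    report = {}
    for g, rows in groups.items():
        point = sensitivity([r["label"] for r in rows],
                            [r["pred"] for r in rows])
        boots = []
        for _ in range(n_boot):
            sample = [rows[rng.randrange(len(rows))] for _ in rows]
            s = sensitivity([r["label"] for r in sample],
                            [r["pred"] for r in sample])
            if s == s:                   # drop NaN resamples (no positives)
                boots.append(s)
        boots.sort()
        lo = boots[int(0.025 * len(boots))]
        hi = boots[int(0.975 * len(boots)) - 1]
        report[g] = (point, lo, hi)
    return report
```

The same dozen lines, versioned and executed under a controlled procedure, are the difference between "the evidence exists" and "the evidence is in the DHF."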

The IEC 62304 mapping question

IEC 62304 was written before modern AI was a regulatory concern. The standard is being updated. The timeline for ratification is uncertain. In the meantime, the FDA expects manufacturers to apply IEC 62304 to AI-enabled software with reasonable adaptations.

The practical mapping I've seen accepted by FDA reviewers:

- Software requirements map to model requirements, including quantitative performance targets for the intended-use population.
- Software units map to model components: architecture, training pipeline, pre- and post-processing code.
- Unit and integration verification map to model evaluation on held-out and external datasets.
- SOUP maps to pretrained models, third-party datasets and ML frameworks.
- Problem resolution maps to drift monitoring, retraining triggers and field performance review.

This mapping isn't in any standard. It's industry practice that has emerged from FDA Q-Subs and feedback letters. Your regulatory affairs team should be tracking it.

The time-bound opportunity

The PCCP framework rewards manufacturers who file early. The first manufacturers in each clinical area get to shape what the FDA considers reasonable change control scope. Later filers get compared against the early precedents.

If you have an AI-enabled device in development, getting a PCCP into your submission this cycle is materially more valuable than next cycle.

To position for that:

- Define your change taxonomy and verification methods now, while the model is still in development.
- Draft the PCCP alongside the design history file, not after it.
- Pressure-test the proposed scope against the updates engineering actually expects to ship in the first two years.

What's worth doing now

Read the current guidance documents end to end. Most quality teams have read summaries. Summaries miss what the guidance actually requires.

Audit the QMS for AI-readiness. Look for SOPs that assume deterministic software behavior. Flag every place a model would break the assumption. The list is usually longer than people expect.

Engage regulatory affairs on PCCP planning if you have AI in your pipeline. The submission strategy decisions you make now determine your validation cost for the next decade of the product.

The FDA is being relatively flexible with manufacturers who demonstrate they understand the framework. That flexibility will narrow as more devices come through. The early filers will set precedent. The late ones will inherit it.


VibeVal validates IEC 62304 software changes against the rules that apply today. Submit a code diff and a risk class, get a CSA-aligned verdict, a written rationale and a SHA-256 attestation hash for the DHF. AI/ML-aware checks for PCCP-scope changes are on the roadmap; tell us what you'd want them to do.

Ship validated changes faster.
Submit a code diff, get a CSA-aligned compliance verdict in seconds. Pay per check.
Get started
