AI is already writing code, generating requirements, and suggesting test cases—but how do RA/QA teams maintain oversight when these tools operate inside regulated workflows? This session explores how to ensure traceability, enforce QMS requirements, and validate AI-generated outputs using a risk-based approach. We’ll cover common mistakes teams make when introducing AI into their development lifecycle, share best practices for maintaining control, and show how to embed compliance directly into your tooling. Finally, we’ll look at how frameworks like Predetermined Change Control Plans (PCCPs) can support change management in AI-assisted environments, and how the right infrastructure can help you automate documentation and stay submission-ready.
Learning Objectives:
Put new FDA guidances such as PCCPs into practice to reduce manual validation effort
Take a risk-based approach to using AI for quality activities by giving AI agents context from your QMS
Spend more time ensuring product quality by reducing the documentation burden created by manual, disparate systems
Overcome challenges of validating non-deterministic AI in GxP applications