Description: AI adoption in regulatory affairs is as much cultural as technical. What does rigorous, regulatorily defensible AI operation actually require, and what will regulators and notified bodies expect to find?
Drawing directly on the pre-conference workshop Driving Change: How to Successfully Implement AI and Digital Transformation in Regulatory Affairs, this session examines the engineering and governance layers that separate a compliant AI deployment from a liability. Participants will hear a concise synthesis of the workshop's outputs through a medical device lens, followed by a discussion of continuous assurance frameworks for AI tooling in regulated environments.
The session then opens into a panel bringing together an AI implementation and validation specialist, a clinical research and regulatory oversight expert, and a notified body assessor to address questions manufacturers consistently raise but rarely hear answered directly: what does a notified body actually expect to find in a submission involving AI components, where does current industry practice fall short, and how will AI itself be used to validate AI-generated content?
Learning Objectives:
Recognise what continuous assurance for AI systems requires in medical device development and post-market surveillance contexts
Identify the key risk dimensions regulators and notified bodies consider when AI is embedded in regulatory processes or submissions
Assess the gap between current industry practice and regulatory expectations when submitting AI-assisted or AI-generated documentation