Utilizing AI in Compliance-Bound Industries: Strategies and Techniques
In the rapidly evolving world of artificial intelligence (AI), the Food and Drug Administration (FDA) is playing a pivotal role in shaping its use within the pharmaceutical industry. The focus is on ensuring AI tools are compliant, secure, and effective, particularly in GxP-regulated environments.
The FDA's guidance is centered on a risk-based credibility assessment framework, primarily applicable to the pre-commercial regulatory decision-making phase. The framework requires defining the AI model's question of interest and context of use, assessing the model's risk, establishing and executing a credibility assessment plan, documenting the results, and determining whether the model is adequate for its intended use.
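To make these steps concrete, here is a minimal sketch of how a team might track them as a checklist. The step wording paraphrases the framework described above, and the `CredibilityAssessment` class and its fields are hypothetical illustrations, not an FDA artifact.

```python
from dataclasses import dataclass, field

# Steps paraphrased from the risk-based credibility assessment framework;
# the class and field names below are illustrative only.
STEPS = [
    "Define the question of interest",
    "Define the context of use (COU)",
    "Assess AI model risk",
    "Develop a credibility assessment plan",
    "Execute the plan",
    "Document results and deviations",
    "Determine model adequacy for the COU",
]

@dataclass
class CredibilityAssessment:
    model_name: str
    completed: dict = field(default_factory=dict)  # step -> evidence reference

    def record(self, step: str, evidence: str) -> None:
        if step not in STEPS:
            raise ValueError(f"Unknown step: {step}")
        self.completed[step] = evidence

    def is_adequate(self) -> bool:
        # Here, adequacy simply means every step has documented evidence.
        return all(step in self.completed for step in STEPS)
```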
AI tools can offer numerous benefits to commercial pharmaceutical organizations, from performing repetitive tasks to optimizing manufacturing operations. For instance, AI can notify manufacturers when attributes fall out of specification, prompting a Corrective Action or Preventive Action (CAPA) workflow. It can also create trending reports by aggregating manufacturing data across multiple batches, aiding data traceability.
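As a rough illustration of that out-of-specification check, the sketch below compares batch readings against specification limits and hands off to a CAPA workflow. The `SPEC_LIMITS` values and the `open_capa` stub are hypothetical placeholders for an organization's own quality system.

```python
# Hypothetical specification limits: attribute -> (low, high)
SPEC_LIMITS = {"assay_pct": (95.0, 105.0), "moisture_pct": (0.0, 2.0)}

def open_capa(batch_id: str, attribute: str, value: float) -> None:
    # Placeholder: a real system would create a CAPA record in the QMS.
    print(f"CAPA opened for batch {batch_id}: {attribute}={value} out of spec")

def check_batch(batch_id: str, readings: dict) -> list:
    """Flag out-of-specification attributes and trigger a CAPA workflow."""
    out_of_spec = []
    for attribute, value in readings.items():
        low, high = SPEC_LIMITS.get(attribute, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            out_of_spec.append(attribute)
            open_capa(batch_id, attribute, value)
    return out_of_spec

# Example: a batch whose moisture reading exceeds the upper limit
check_batch("B-001", {"assay_pct": 99.2, "moisture_pct": 2.4})
```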
Moreover, AI can review business documentation against FDA regulations, flagging missing items, and even draft a Certificate of Analysis (CoA) for production batches. AI's potential to transform the drug approval process is also evident in the FDA's own use of the technology to optimize its operations.
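For the CoA idea, a hedged sketch follows that assembles a minimal certificate from batch test results; the field names, limits format, and pass/fail disposition logic are assumptions, not a regulatory template.

```python
from datetime import date

def build_coa(batch_id: str, product: str, results: dict, limits: dict) -> dict:
    """Assemble a minimal Certificate of Analysis record (illustrative only)."""
    tests = []
    for attribute, value in results.items():
        low, high = limits[attribute]
        tests.append({
            "attribute": attribute,
            "result": value,
            "specification": f"{low}-{high}",
            "pass": low <= value <= high,
        })
    return {
        "batch_id": batch_id,
        "product": product,
        "issue_date": date.today().isoformat(),
        "tests": tests,
        "disposition": "Pass" if all(t["pass"] for t in tests) else "Fail",
    }

# Example usage with hypothetical results and limits
coa = build_coa("B-001", "Product X",
                {"assay_pct": 99.2}, {"assay_pct": (95.0, 105.0)})
```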
However, AI's use in highly regulated environments requires human oversight. The FDA's own AI tool, Elsa, summarizes adverse event data, but scientists must review the output to properly evaluate and verify a product's safety. AI tools must also be audited periodically to ensure consistent quality.
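One way to enforce that review step in software is a simple approval gate, sketched below under the assumption that AI output carries reviewer metadata; the `AISummary` fields are illustrative and not drawn from the FDA's Elsa tool.

```python
from dataclasses import dataclass

@dataclass
class AISummary:
    source: str            # e.g., an adverse-event dataset identifier
    text: str              # AI-generated summary
    reviewed_by: str = ""  # scientist who verified the output
    approved: bool = False

def release(summary: AISummary) -> str:
    """Only reviewed and approved AI output may be used downstream."""
    if not (summary.reviewed_by and summary.approved):
        raise PermissionError("AI output requires documented human review before use")
    return summary.text
```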
The FDA's current guidance on AI-enabled tools is complemented by new industry frameworks such as ISPE's GAMP guidance for AI and emerging regulations such as “Annex 22 – Artificial Intelligence” for GMP contexts. Together, these set formal expectations for AI design, validation, oversight, data governance, human review of AI outputs that influence critical decisions, change control for adaptive models, and risk-based validation.
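Change control for adaptive models could be tracked with a record like the hypothetical one below; the risk tiers and required-evidence lists are placeholders rather than language from GAMP or Annex 22.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Hypothetical mapping of risk tier to the validation evidence expected
# before a retrained model version is deployed.
REQUIRED_EVIDENCE = {
    "high": ["performance report", "data-governance review", "human-oversight SOP"],
    "medium": ["performance report", "data-governance review"],
    "low": ["performance report"],
}

@dataclass
class ModelChangeRecord:
    model_id: str
    new_version: str
    risk_tier: str                       # "high", "medium", or "low"
    evidence: list = field(default_factory=list)
    approved_on: Optional[date] = None

    def ready_for_approval(self) -> bool:
        # Risk-based check: all evidence required for the tier must be attached.
        return set(REQUIRED_EVIDENCE[self.risk_tier]) <= set(self.evidence)
```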
While the FDA continues to develop guidance for AI in regulatory review, broader formal guidelines are still taking shape. At the federal level, the July 2025 White House AI Action Plan calls for FDA participation in regulatory sandboxes and AI Centers of Excellence to foster innovation under oversight, along with improvements to AI data standards and evaluation guidelines.
In light of these developments, organizations using AI in GxP settings should implement strong governance, validation, human oversight, and compliance tracking, aligned with FDA and industry frameworks. For expert guidance in developing a GxP-compliant AI strategy, reach out to Clarkston's quality and compliance experts.
- In the life sciences sector, strategic implementation of technology, such as AI, is crucial for compliant and secure operations, particularly in GxP-regulated environments.
- The FDA's guidance centers on a risk-based credibility assessment framework for the pre-commercial regulatory decision-making phase: define the AI model's question of interest and context of use, assess its risk, and establish and execute a credibility plan.
- AI can optimize pharmaceutical manufacturing operations, identify out-of-specification attributes, and initiate Corrective Action or Preventive Action (CAPA) workflows.
- AI can also create trending reports for manufacturing data, aid in data traceability, and review business documentation against FDA regulations.
- However, human oversight is essential in highly regulated environments to properly evaluate and verify product safety, as seen with the FDA's AI tool, Elsa, whose adverse event summaries must be reviewed by scientists.
- In addition to the FDA's guidance, new industry frameworks like ISPE's GAMP Guide for AI mandate formal requirements for AI design, validation, oversight, data governance, and human review of AI outputs.
- Emerging regulations like “Annex 22 – Artificial Intelligence” for GMP contexts also dictate adaptive model change control and risk-based validation.
- Federal AI policy, via the July 2025 White House AI Action Plan, includes FDA participation in regulatory sandboxes and AI Centers of Excellence, aiming to foster innovation under oversight.
- As AI use expands, organizations must embrace strong governance, validation, human oversight, and compliance tracking, aligning with FDA and industry frameworks.
- For expert assistance in developing a GxP-compliant AI strategy in the pharmaceutical and broader life sciences sectors, consider a consultation with quality and compliance experts.