This is the problem we set out to solve at UBC: how do we harness AI’s efficiency while ensuring we never compromise patient safety?
Balancing Innovation with Safety in Pharmacovigilance
Pharmacovigilance has reached a tipping point. The sheer volume of individual case safety reports, scientific literature, social media chatter, and real-world evidence threatens to overwhelm even the most capable teams. Artificial intelligence promises a way forward—faster case processing, smarter signal detection, and relief from repetitive, often time-consuming manual work. But here’s the challenge: in drug safety, the stakes are too high for AI to operate without guardrails. A missed adverse event or a false signal could delay identification of real drug risks, putting patients in harm’s way.
That’s why at UBC, we’ve embraced a risk-based approach to AI integration — essentially putting a seatbelt on innovation. This framework ensures AI helps us work faster and smarter while built-in safeguards prevent mistakes from jeopardizing patient safety or regulatory compliance.
The Challenge: When Data Overwhelms Vigilance
Traditional pharmacovigilance wasn’t designed for the data deluge we face today. Every year, millions of case reports flow in from healthcare professionals, patients, clinical trials, and now social media. Meanwhile, regulatory expectations continue to tighten, and the pressure to detect safety signals faster has never been greater.
AI offers tantalizing solutions: algorithms that can triage thousands of cases in seconds, natural language processing that extracts adverse events from unstructured narratives, and machine learning models that spot patterns humans might miss. But there’s a catch. AI systems, like humans, are imperfect. They can misclassify events, miss critical seriousness criteria, generate false positives, or worse — produce “hallucinations” that sound plausible but are entirely fabricated. For instance, an AI summarizing a case report might confidently add clinical details or severity classifications that never appeared in the original narrative, simply because they’re statistically common patterns in its training data. Such fabrications are dangerous because they sound medically credible and could trigger false signals or obscure real safety concerns.
In pharmacovigilance, these imperfections aren’t just technical glitches. An unreliable algorithm that fails to flag a serious cardiac event could contribute to delayed signal detection. A biased training dataset that under-represents certain patient populations could perpetuate health inequities. Without a robust risk management framework, AI becomes a liability rather than an asset.
Our Approach: Risk-Based AI Integration
Drawing inspiration from regulatory frameworks like the EMA’s Reflection Paper on AI in the medicinal product lifecycle and FDA guidance on AI credibility assessment, UBC has built a comprehensive risk-based framework for evaluating and deploying AI in pharmacovigilance.
Assessing Risk Before Deployment
Every AI use case we consider undergoes rigorous risk assessment before a single line of code is written. We evaluate four critical dimensions:
- The AI Technology Itself: What type of model are we using? Is it a static algorithm with explainable outputs, or a generative AI model prone to non-deterministic behavior? Newer, less mature technologies naturally carry higher risk and require more extensive validation.
- Context of Use (COU): Where will this AI tool sit within our pharmacovigilance workflow? Is it performing a preliminary triage step with full human oversight downstream, or is it directly supporting a critical PV process? The more influence an AI system has on final decisions, the higher the risk.
- Data Sensitivity and Access Controls: The type and sensitivity of data that LLMs will access fundamentally shapes our risk mitigation strategy. Where AI systems require access to identifiable patient information, confidential case narratives, or proprietary safety data, we would implement a hybrid human-AI environment with heightened controls, where geographic regulations allow. This means more stringent quality checks, enhanced human oversight, managed data access permissions, and greater transparency requirements for AI-generated outputs. As emphasized in the CIOMS Working Group XIV guidance, data privacy principles must be embedded throughout the AI lifecycle, with particular attention to protecting sensitive health information while maintaining the utility needed for pharmacovigilance objectives.
- Impact and Likelihood: Not all AI failures are created equal. A duplicate detection tool that misses a few records in a massive database poses minimal patient safety risk. But an AI system that fails to detect a serious safety signal in the context of widespread patient exposure? That’s a potential black swan event with dramatic public health consequences.
This multidimensional assessment helps us categorize each AI application and tailor our oversight, validation requirements, and documentation accordingly. Low-risk applications move forward faster with lighter controls. For instance, using AI to generate work instructions and process documentation, or to perform initial literature screening where human experts review all flagged articles, poses minimal direct impact on patient safety decisions. These tools improve efficiency without directly influencing critical pharmacovigilance outcomes.
High-risk applications require extensive validation, continuous monitoring, and significant human-in-the-loop involvement. Examples include AI systems that autonomously assess case seriousness, perform causality evaluations, or detect safety signals without human oversight—where errors could result in missed adverse events, delayed regulatory actions, or false safety conclusions. In these scenarios, we would implement robust validation protocols, continuous performance monitoring, and maintain substantial human review before any AI output influences safety decisions or regulatory filings.
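To make this concrete, here is a minimal sketch of how such a multidimensional assessment could be encoded. The dimension names, 1-to-3 scoring scale, and tier thresholds are illustrative assumptions for this post, not our production logic.

```python
from dataclasses import dataclass

@dataclass
class AIUseCaseAssessment:
    """Illustrative risk scorecard for a proposed AI use case (1 = low risk, 3 = high risk)."""
    technology_risk: int         # e.g., static, explainable algorithm = 1; generative LLM = 3
    context_of_use_risk: int     # e.g., triage with full human review downstream = 1; direct PV decision support = 3
    data_sensitivity_risk: int   # e.g., public literature = 1; identifiable patient information = 3
    impact_likelihood_risk: int  # e.g., missed duplicate record = 1; missed serious safety signal at scale = 3

    def oversight_tier(self) -> str:
        """Map the combined score to an oversight tier (thresholds are hypothetical)."""
        score = (self.technology_risk + self.context_of_use_risk
                 + self.data_sensitivity_risk + self.impact_likelihood_risk)
        if score <= 6:
            return "low risk: lighter controls, periodic QC sampling"
        if score <= 9:
            return "medium risk: pilot with defined KPIs and routine human review"
        return "high risk: 100% human review, extensive validation, continuous monitoring"

# Example: a generative model drafting case narratives from sensitive case data
narrative_drafting = AIUseCaseAssessment(
    technology_risk=3, context_of_use_risk=2,
    data_sensitivity_risk=3, impact_likelihood_risk=2,
)
print(narrative_drafting.oversight_tier())  # -> high risk tier
```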
Building in Safeguards
Risk mitigation starts at the design phase and continues throughout the AI lifecycle. For every use case, we define key performance indicators, acceptance criteria, and clear thresholds for when human intervention is required.
During initial deployment, we take a deliberately conservative approach. High-risk AI applications begin with extensive human review — sometimes 100% quality control — until we’ve built confidence in the system’s real-world performance. As evidence accumulates and performance remains within acceptable bounds, we gradually reduce the intensity of human oversight based on pre-defined criteria.
When issues arise, our risk-based approach guides our response. Mitigation measures might include increasing human review, retraining models on better data, implementing hallucination prevention strategies for generative AI, or in extreme cases, decommissioning a tool entirely.
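As a simplified illustration of how pre-defined criteria might govern this graduated oversight, consider the sketch below. The concordance threshold, evidence requirement, and sampling floor are hypothetical values, not UBC acceptance criteria.

```python
def next_review_rate(current_rate: float,
                     concordance: float,
                     cases_reviewed: int,
                     min_rate: float = 0.10) -> float:
    """Step QC sampling down (or back up) based on pre-defined, hypothetical criteria.

    current_rate   -- fraction of AI outputs currently receiving human review (1.0 = 100% QC)
    concordance    -- observed agreement between AI outputs and human reviewers
    cases_reviewed -- evidence base accumulated at the current review rate
    """
    if concordance < 0.95:
        return 1.0                      # issue detected: revert to full human review
    if cases_reviewed < 500:
        return current_rate             # not enough evidence yet to relax oversight
    return max(min_rate, current_rate * 0.5)  # relax gradually, never below a floor

# Example: a tool at 100% QC showing 98% concordance over 800 reviewed cases
print(next_review_rate(1.0, 0.98, 800))  # -> 0.5
```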
Governance and Accountability
To formalize this approach, UBC is establishing an AI Governance Committee. This cross-functional group brings together representatives from PV, Clinical, IT Security, Data Privacy, Quality Assurance, and Legal to oversee risk assessments, validation standards, and compliance alignment. The committee serves as a central accountability hub, ensuring that every AI use case is reviewed not just for efficiency gains, but also for safety, fairness, transparency, and regulatory readiness.
The committee also monitors AI performance in production, updates risk classifications as technologies evolve, and ensures that our preventive measures—staff training, clear documentation, and transparent communication about AI limitations—remain effective.
Inside Our Initiative: From Vision to Validation
At UBC, our AI journey began with a simple principle: start small, validate thoroughly, scale carefully. We focus first on low-risk applications where AI can deliver clear value without introducing unacceptable safety risks.
Low-Risk Wins
One early success has been deploying Scribe for work instructions and documentation (more to come about this great use case!). Classified as low-risk because it doesn’t directly influence safety decisions, this tool allowed us to quickly improve documentation consistency and efficiency without regulatory concerns. It also gave our teams valuable experience working with AI in a controlled environment, building the organizational literacy and confidence we’ll need for more complex applications.
Medium-Risk Pilots
We’re now piloting AI in areas like narrative generation, literature screening, and case processing support. These applications leverage large language models (LLMs) orchestrated as AI Agents, drawing on curated safety data sources and embedded within our risk-based governance framework.
For narrative generation, AI drafts case narratives that human medical reviewers then validate and finalize. For literature screening, AI triages thousands of scientific articles, flagging potentially relevant safety information for expert review. In both cases, the AI accelerates the process while humans retain final authority over critical decisions.
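A minimal sketch of what such a human-in-the-loop triage step might look like is shown below. The classify wrapper stands in for whichever LLM is used, and the confidence floor and routing rule are illustrative assumptions rather than our production pipeline.

```python
from typing import Callable, Iterable

def triage_literature(abstracts: Iterable[str],
                      classify: Callable[[str], tuple[bool, float]],
                      confidence_floor: float = 0.8) -> list[dict]:
    """Hypothetical triage step: the AI only prioritises; humans keep final authority.

    classify -- a wrapper around the chosen LLM that returns
                (potentially_relevant, confidence) for one abstract.
    """
    queue = []
    for text in abstracts:
        relevant, confidence = classify(text)
        queue.append({
            "abstract": text,
            "ai_flag": relevant,
            # Anything flagged, or anything the model is unsure about, goes to an expert.
            "route_to_expert": relevant or confidence < confidence_floor,
        })
    return queue

# Example with a trivial stand-in classifier (keyword match instead of an LLM call)
demo = triage_literature(
    ["Case of hepatotoxicity after drug X", "Stability study of tablet coating"],
    classify=lambda text: ("hepatotoxicity" in text.lower(), 0.9),
)
print([item["route_to_expert"] for item in demo])  # -> [True, False]
```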
Technology Stack
Our AI infrastructure is built primarily on Microsoft technologies—including Copilot, Azure AI, and Copilot Studio—which provide flexible, scalable workflows tailored for pharmacovigilance. Key advantages include enterprise-grade security and compliance; data residency and geographic compliance, for example storing and processing data within specific jurisdictions to meet local data protection laws like GDPR; and responsible AI with explainability, meaning interpretable outputs that show which factors influenced a prediction. These capabilities are essential for compliance with regulators such as the FDA and EMA. We integrate these tools with Scribe for documentation and transparency, ensuring every AI-assisted step is traceable and well-documented.
Critically, all our AI models operate within a risk-based governance framework aligned with regulatory expectations from the EMA, FDA, and other authorities. We verify each tool thoroughly before engagement, and we deploy only those that have passed rigorous validation against pre-defined performance criteria.
Under the Hood: Technical Rigor Meets Domain Expertise
Building trustworthy AI for pharmacovigilance requires more than just technical prowess. It demands a deep understanding of the domain, the regulatory environment, and the real-world consequences of getting things wrong.
Validation and Performance Monitoring
Every AI system we deploy undergoes extensive validation during development. We establish key performance indicators—case processing time, narrative accuracy, literature screening recall—and benchmark AI performance against human baselines. We also test for edge cases, adversarial inputs, and scenarios where the AI might fail.
But validation doesn’t end at deployment. We continuously monitor AI performance in production, tracking concordance with human reviewers, audit outcomes, and documentation completeness. If performance drifts below acceptable thresholds, automated alerts trigger investigation and remediation.
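The sketch below illustrates one way such drift monitoring could work in principle: track rolling concordance between AI outputs and human reviewers and raise an alert when it falls below a pre-defined threshold. The window size and threshold are hypothetical.

```python
from collections import deque

class ConcordanceMonitor:
    """Track rolling AI-vs-human concordance and raise an alert on drift (illustrative)."""

    def __init__(self, window: int = 200, alert_threshold: float = 0.95):
        self.outcomes = deque(maxlen=window)   # True = AI output matched the human reviewer
        self.alert_threshold = alert_threshold

    def record(self, ai_matches_human: bool) -> None:
        self.outcomes.append(ai_matches_human)

    def check(self) -> str:
        if len(self.outcomes) < self.outcomes.maxlen:
            return "collecting evidence"
        concordance = sum(self.outcomes) / len(self.outcomes)
        if concordance < self.alert_threshold:
            return f"ALERT: concordance {concordance:.1%} below threshold; trigger investigation"
        return f"within bounds ({concordance:.1%})"

# Example with a small window for demonstration
monitor = ConcordanceMonitor(window=5, alert_threshold=0.95)
for match in [True, True, False, True, True]:
    monitor.record(match)
print(monitor.check())  # -> ALERT: concordance 80.0% below threshold; ...
```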
Human-in-the-Loop by Design
Even our most reliable AI systems include human oversight checkpoints. The level and frequency of human review depend on the risk assessment—high-risk applications maintain extensive human involvement, while low-risk tools operate with lighter monitoring.
Importantly, we train our pharmacovigilance professionals not just to use AI tools, but to critically evaluate their outputs. This means recognizing when an AI-generated narrative sounds plausible but contains subtle errors, understanding the limitations of generative models, and knowing when to escalate concerns to the AI Governance Committee.
Preventive Culture
Perhaps our most important technical safeguard isn’t technical at all — it’s cultural. We’ve invested heavily in staff training, clear usage guidelines, and transparent communication about when and how to use AI responsibly. Regular talks and workshops help employees understand not just what AI can do, but what it shouldn’t be used for. This proactive culture of awareness ensures that AI adoption at UBC is both innovative and safe.
Real-World Impact: What This Means for Pharmacovigilance
Most of our AI technologies are still in development and validation phases, reflecting our deliberately cautious approach. But the early results are promising.
Externally, our clients stand to benefit from faster case processing, more consistent documentation quality, and ultimately more timely safety insights. By automating low-value tasks and augmenting human expertise with AI, we can scale our pharmacovigilance operations without sacrificing quality or compliance. Internally, these changes have the potential to translate into quality-of-life improvements, freeing professionals to focus on intellectually stimulating work rather than administrative drudgery.
Importantly, our risk-based approach means clients can trust that our AI solutions are compliant, validated, and built to last. When regulators ask questions—and they will—we have the documentation, validation data, and governance structures to demonstrate that patient safety was never compromised.
Measuring Success: KPIs That Matter
To prove value, we track metrics across four dimensions:
- Performance: Based on pilot tests, these could include reductions in case processing time, narrative drafting accuracy, and literature review turnaround, all benchmarked against pre-AI baselines.
- Safety: Concordance with human reviewers, zero tolerance for missed critical safety signals, and continuous monitoring for model drift.
- Compliance: On-time submission rates, audit and inspection outcomes, documentation completeness — all essential in a regulated environment.
- User Adoption: Surveys of PV staff and customers confirming that AI increases productivity without introducing workflow friction or new risks.
These KPIs aren’t just about proving ROI. They’re about ensuring our risk controls are working as intended, and that AI truly augments and streamlines our pharmacovigilance capabilities rather than creating new headaches.
Next Horizons: The Future of Risk-Based PV
As AI performance improves and our organizational confidence grows, the possibilities expand. We envision a balanced pharmacovigilance ecosystem where humans and AI complement each other: humans bring medical judgment, scientific expertise, and contextual understanding, while AI handles repetitive load with speed and consistency. This vision aligns with what Ethan Mollick calls the “centaur model” in his book Co-Intelligence—a synergistic partnership where humans and AI each contribute their unique strengths rather than competing for the same tasks. Instead of replacing humans, AI extends their capabilities, enabling faster, more accurate safety decisions with fewer blind spots.
Tasks that demand nuanced clinical reasoning—complex causality assessments, signal prioritization, regulatory strategy—will remain firmly in human hands. Meanwhile, routine documentation, structured data entry, preliminary literature screening, and case narrative drafting will increasingly shift to AI agents operating under strict oversight.
Our next moves are clear: expanding low-risk deployments where we’ve proven value, validating medium-risk use cases step by step, and inviting customers to partner with us on where AI can safely add value next. We’re also watching regulatory developments closely, engaging proactively with authorities, and contributing to industry discussions about best practices for AI in pharmacovigilance.
Meet the Innovator: Marios Abatis
Marios Abatis combines neuroscience expertise in learning and memory mechanisms (PMID: 38871992) with self-taught machine learning and coding skills. His research background helps him understand how AI systems process and recall information like biological neural networks—including their failure modes. This dual perspective enables hands-on assessment of AI reliability and technical risk mitigation in drug safety contexts.
Why Now? A Conversation between Christopher Henry and Marios Abatis
Christopher Henry (CH): Marios, why is UBC prioritizing a risk-based AI approach right now? Why not wait until the technology matures further?
Marios Abatis (MA): “The rationale isn’t just theoretical—it’s a regulatory and competitive imperative. We’re at an inflection point where waiting could actually put us behind.”
CH: What do you mean by regulatory imperative?
MA: “Look at what’s happening globally. The EMA’s Reflection Paper on AI, FDA guidance on AI credibility, and the EU AI Act all explicitly call for risk-based approaches to AI in healthcare and pharmaceutical settings. These aren’t suggestions—they’re signposts of where regulation is heading. Organizations that lead with cautious, risk-aware frameworks will be better positioned when these requirements solidify. We’d rather be ahead of the curve than attempting to catch up later.”
CH: What about the business case? Are your clients actually asking for this?
MA: “Absolutely. Pharmacovigilance teams everywhere are drowning in data. Case volumes are rising, regulatory timelines are tightening, and compliance costs are escalating. Our clients need relief, but they’re also sophisticated enough to know that poorly deployed AI could create more problems than it solves. AI offers tremendous potential, but only if it’s deployed responsibly. That’s what makes this urgent—the issues are real, and the solution needs to be trustworthy.”
CH: How does this approach give UBC a competitive edge?
MA: “Here’s the thing—some competitors are racing to deploy AI without thoroughly proving its safety or establishing proper governance. That might look impressive in a sales deck, but in a regulated industry, it’s a house of cards. Our risk-based approach reassures both clients and regulators that our solutions are innovative, compliant, and trustworthy. In pharmacovigilance, trust is everything. When a regulator asks about our AI validation, we have answers. When a client asks how we handle edge cases or prevent bias, we have documented processes. That trust is our competitive advantage, and it’s becoming more valuable as the market matures.”
The Takeaway: Innovation with Guardrails
As the recent CIOMS Working Group XIV draft report on AI in pharmacovigilance reminds us, the future of drug safety depends on balancing innovation with responsibility. A risk-based approach isn’t just regulatory language — it’s the key to ensuring AI improves pharmacovigilance without compromising patient safety, trust, or compliance.
At UBC, we’re proving that this balance is achievable through cautious, stepwise deployment, rigorous validation, continuous monitoring, and a governance structure that holds us accountable at every stage. We’re not rushing AI into production. We’re building it right.
The result? AI that truly augments our pharmacovigilance teams instead of creating new risks. AI that helps us work smarter while ensuring vigilance never falters. AI with a seatbelt.
Ready to explore what risk-based AI can do for your pharmacovigilance operations? Contact UBC today to discuss how our cautious, validated, and compliance-ready AI solutions can support your drug safety needs. As we integrate your needs into our AI research & development, let’s build the future of pharmacovigilance—responsibly.
References
Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products. U.S. Food and Drug Administration (FDA).
CIOMS Working Group XIV. Draft Report for Public Consultation on Artificial Intelligence in Pharmacovigilance, 1 May 2025.
Mollick, E. (2024). Co-Intelligence. Penguin.
About UBC
United BioSource LLC (UBC) is the leading provider of evidence development solutions with expertise in uniting evidence and access. UBC helps biopharma mitigate risk, address product hurdles, and demonstrate safety, efficacy, and value under real-world conditions. UBC leads the market in providing integrated, comprehensive clinical, safety, and commercialization services and is uniquely positioned to seamlessly integrate best-in-class services throughout the lifecycle of a product.
About the Authors

Marios Abatis, PhD, Safety Scientist, Global Case Processing Team, Pharmacovigilance
Marios Abatis holds a PhD in Neuroscience from the University of Lausanne, where his research focused on the mechanisms of learning and memory. Since joining UBC in 2022, he has been contributing to Pharmacovigilance operations with a focus on case processing, global literature surveillance, and signal detection. Dr. Abatis is also pioneering the integration of automation and AI-driven solutions to enhance efficiency and quality across pharmacovigilance workflows.

