
Redefining PV Skillsets in an AI-Driven World: Training the Next Generation of Safety Scientists

AI is rapidly reshaping pharmacovigilance, shifting the balance between automated support and human judgment. This article explores the skills safety scientists will need as AI becomes embedded in case processing, signal management, and decision-making workflows. It introduces the concept of the hybrid safety scientist—professionals who blend medical expertise with AI literacy, data fluency, and ethical awareness—and outlines practical steps organizations can take to build these competencies and prepare their teams for the future of PV.

Christopher Henry

Safety Scientist, Global Safety Writing

This blog is part of a series on AI in pharmacovigilance (PV). Each post takes a PV-centric perspective, looking at how AI’s diverse capabilities could inspire innovation and reshape the way we work. These are conceptual explorations; they do not draw on UBC clients’ use cases or deliverables. Instead, the use cases are meant to spark discussion, share insights, and open possibilities for the future of PV.

Why AI Reshapes PV Skill Requirements

A few years ago, AI was a novelty; it has since become a practical tool across many industries. A major focus of the pharma industry, including pharmacovigilance, is now to understand where AI and humans should meet. Regulatory bodies around the globe, including the European Medicines Agency (EMA) and the Food and Drug Administration (FDA), are converging on keeping humans in the loop, given the industry's significant impact on human life.

At present, the PV industry is implementing case-processing automation with AI components that read and translate source documents or prepare case narratives. We are also starting to see AI support signal management by analyzing a wide range of data sources, and even generate aggregate reports, illustrating that many aspects of PV are being touched by the AI transformation.

It becomes central to understand how we should work together with AI. What do we need to learn to make this transition, going from doing most, if not all, of the work ourselves to effectively delegating some activities to AI? How should new safety scientists be prepared to remain the backbone of an industry that is increasingly reliant on AI support?

In today’s blog, I would like to invite you all to reflect with me on which skills are needed in this new era and how we can build this new generation of safety scientists.

Traditional PV Skillsets and Core Competencies

Traditionally, clinical or pharmaceutical expertise has been the most valued skill in PV, from the clinical to the post-marketing stage. With that grounding, scientists can cover case-processing activities from data entry to causality assessment and case-narrative preparation. Being able to understand adverse events and patient narratives, and how all of this fits into a medical context, provides the core of what PV stands for: Patient Safety First.

For many scientists, analytic skills have played an increasingly important role over the last few decades. Knowing how to use Excel and a few key formulas can significantly sharpen the aggregate analyses seen in signal detection, development safety update report (DSUR), or periodic benefit-risk evaluation report (PBRER) preparation.
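To make this concrete, here is a minimal sketch of the kind of aggregate count a scientist might build with Excel formulas such as COUNTIF, expressed in Python instead. The case data and column names are purely hypothetical, for illustration only:

```python
# Hypothetical adverse-event line listing; field names are illustrative.
from collections import Counter

cases = [
    {"case_id": "C001", "preferred_term": "Headache", "serious": False},
    {"case_id": "C002", "preferred_term": "Nausea", "serious": True},
    {"case_id": "C003", "preferred_term": "Headache", "serious": False},
    {"case_id": "C004", "preferred_term": "Rash", "serious": True},
]

# Equivalent of a COUNTIF over the preferred-term column.
term_counts = Counter(c["preferred_term"] for c in cases)

# Equivalent of a COUNTIF on the seriousness flag.
serious_count = sum(1 for c in cases if c["serious"])

print(term_counts)    # Counter({'Headache': 2, 'Nausea': 1, 'Rash': 1})
print(serious_count)  # 2
```

The same tabulation underlies the event tables in a DSUR or PBRER; only the scale and the source system change.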

The third group of essential skills is regulatory knowledge. PV is such a highly regulated space that navigating it can give you a headache, particularly if your activities span clinical, post-marketing, and numerous countries, where a misstep can place your company at risk of non-compliance. Working through ICH, EMA, and FDA requirements and guidelines demands critical thinking to interpret regulatory language correctly.

Lastly, I want to highlight how essential good communication skills are. This is not specific to PV or to pharma, but it makes a big difference when teams need to understand each other, particularly in an environment where cross-functional and global teams must collaborate. I personally can't stress this enough, considering that I have weekly discussions with colleagues from clinical, IT, regulatory, case processing, and safety writing.

New Competencies for Tomorrow

Most AI systems in PV will involve a human in or on the loop. "In" the loop means that a human is needed for the AI system to complete its task, for example to approve or reject a causality assessment made by an AI. "On" the loop refers to an oversight role, for example in duplicate detection, where the human monitors the AI system to ensure it performs as expected.
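The distinction can be sketched in code. This is a conceptual toy, not a real PV system: every function name, label, and threshold below is an assumption made purely to illustrate the two oversight patterns.

```python
# Toy contrast between "human in the loop" and "human on the loop".

def ai_causality_assessment(case: dict) -> str:
    # Stand-in for an AI model's suggested assessment.
    return "possibly related"

def human_in_the_loop(case: dict, approve) -> str:
    """The workflow pauses: the AI output stays a proposal until a
    human reviewer (modeled here as the `approve` callback) accepts it."""
    proposal = ai_causality_assessment(case)
    return proposal if approve(proposal) else "returned for human reassessment"

def human_on_the_loop(cases: list) -> list:
    """The AI completes its task autonomously; the human monitors
    aggregate behavior and is alerted only when it looks off."""
    results = [ai_causality_assessment(c) for c in cases]
    flagged = sum(r == "possibly related" for r in results)
    if results and flagged / len(results) > 0.9:  # illustrative drift check
        print("Alert: unusually high 'possibly related' rate -- review AI")
    return results

# A human gate on a single case ("in the loop") ...
print(human_in_the_loop({"case_id": "C001"}, approve=lambda p: True))
# ... versus autonomous batch processing under monitoring ("on the loop").
print(human_on_the_loop([{"case_id": "C001"}, {"case_id": "C002"}]))
```

The design difference is where the human sits: in the loop, the process blocks on a person; on the loop, the process runs and the person watches its behavior.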

AI literacy takes many shapes, from staying critical of what an AI says to technical understanding. Ultimately, it should not be expected that everyone knows how to code an AI, nor what its architecture should look like for a given task. It is more about understanding that AI might be best suited to support assessments based on large data sets but might not be ideal when nuanced medical judgment is required for complex causality assessments. It is about understanding that AI is not a default solution, and knowing when traditional approaches are better suited than training an AI, for example, simple automation achievable with an Excel formula. Finally, I would say that an important part of AI literacy is not accepting everything an AI produces as true or false, but staying critical of its answers.

From my point of view, many scientists will have to allow themselves to get more familiar with data science basics, including understanding structured versus unstructured data and which works best for the AI system they are using. Understanding the data world also helps collaboration with IT teams: when you can explain what type of data you are dealing with and how variable it is, AI system development and implementation will be smoother.
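As a minimal illustration of that structured/unstructured distinction, consider the same adverse-event fact in both forms. The field names, product name, and the naive keyword check below are all hypothetical stand-ins, not a real extraction pipeline:

```python
# Structured: fixed, typed fields, directly queryable; well suited to
# rule-based checks and classical statistics.
structured_case = {
    "patient_age": 54,
    "drug": "DrugX",           # hypothetical product name
    "event_term": "Dizziness",
    "onset_day": 3,
}

# Unstructured: free text, where NLP/LLM-style AI typically adds
# the most value.
unstructured_narrative = (
    "A 54-year-old patient started DrugX and reported feeling "
    "dizzy about three days later; symptoms resolved on withdrawal."
)

# A structured field can be checked with a plain condition...
is_early_onset = structured_case["onset_day"] <= 7

# ...whereas recovering the same fact from the narrative needs text
# processing; a crude keyword search stands in for real NLP here.
mentions_dizziness = "dizz" in unstructured_narrative.lower()

print(is_early_onset, mentions_dizziness)  # True True
```

Being able to tell an IT partner "this source is free-text narratives, that one is a coded database extract" is often enough to steer the choice of tooling.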

With every technological revolution come new ethical questions, and AI is no exception. When I first worked on AI in the PV world, I realized that data privacy and transparency were essential if we wanted to rely on it to support the community. Everyone will have to understand what is acceptable to do with AI and what is not; at the current state of the technology, many of those questions rest with the users, the safety scientists, and not solely with the development team. Is it acceptable to send private company information to ChatGPT running on OpenAI's servers, or should it be shared only with an internally deployed instance, with no data transferred to OpenAI?

The Hybrid Scientist

As you have probably gathered, I strongly believe we are trending toward a "Hybrid Scientist" role; but what does it mean to be "hybrid" in an AI world?

The general idea is that safety scientists will blend medical judgment with digital fluency, particularly around AI systems. Today, our safety scientists rely on their medical judgment to perform their activities and ensure that PV runs smoothly. Tomorrow, we need them to also use more advanced technologies. Understanding AI is not knowing how every AI-powered system works, but understanding its key elements and specificities, including the biases that come with the technology:

  • Automation bias: The tendency for users to trust automated results excessively and to ignore their own judgment even when it contradicts the machine – “the machine can’t make a mistake; I must be wrong”.
    • Mitigation: As much as possible, rely on AI systems that are transparent about how they reach their conclusions, and build in pauses that allow users to stay critical and apply their own judgment.
  • Expediency bias: Under time pressure, users may accept “good enough” AI answers without checking carefully – “the machine is very likely right, let’s move faster”.
    • Mitigation: Build slowdowns into the workflow, perhaps a decision checklist, to support taking enough time to verify the reliability of the output.
  • Endowment effect (IKEA effect): Users may overvalue AI-generated content simply because they helped build the system and designed it to get that specific result – “the model can’t be off; I must be overthinking it.”
    • Mitigation: The easiest approach is to have someone else review the outputs, or to critique the AI's output as if it had been generated by a stranger.

Ultimately, many safety scientists will act as "interpreters" between AI systems and regulatory or clinical decision-making, which means speaking the PV language while being fluent enough in the technological lexicon to serve as that middleman between AI and decision-making.

AI literacy is a must-have, but I feel an important intermediate step is general technological literacy: ensuring that safety scientists are comfortable using formulas in Excel and googling their issues; basically, using the available technologies to be better at their tasks.

To complete the picture of the hybrid scientist, I would like to discuss why humans must stay in the loop for pharma-related processes. We, as humans, are currently the only ones able to ensure that decisions follow our values. Even though values vary from one person to another, general values are shared by most of us; patient first, for example. The future safety scientist will be the keeper of the values we, as humans, hold dear. Would you decide to give a patient with a mild chronic condition a miraculous drug if it carried a 49% chance of complete paralysis? Maybe the AI has a threshold at 50% and will not think twice before making that decision.
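That 49%-versus-50% point can be made painfully concrete. The toy rule below, with an invented function name and cutoff, shows why a bare threshold is not a substitute for judgment: it treats risks just below and just above the cutoff as categorically different, which no human reviewer would accept for a mild condition and a catastrophic outcome.

```python
# Toy threshold rule; names and numbers are hypothetical.
RISK_THRESHOLD = 0.50

def ai_recommend(paralysis_risk: float) -> str:
    """A purely mechanical decision: no weighing of how mild the
    condition is against how severe the potential harm would be."""
    return "do not treat" if paralysis_risk >= RISK_THRESHOLD else "treat"

print(ai_recommend(0.49))  # 'treat'        -- the AI does not think twice
print(ai_recommend(0.51))  # 'do not treat' -- a 2-point swing flips everything
```

A human in the loop would ask what the 2% difference actually buys the patient; the threshold never will.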

Training Pathways and Organizational Strategies

Now that we have a clearer idea of what the future hybrid safety scientist looks like, the question is, how do we get there?

Ideally, exposure to AI systems is a must, but through which path remains an open question. My personal view is that all current academic tracks for pharma should integrate some level of AI fluency. But let's be fair, we might have limited influence here; that said, if you are a lecturer or have a say in the university program of those future scientists, please make a move! If students are reading this, play with AI in your spare time, learn which AI systems support current organizations, and use AI itself to learn more about it.

Beyond academic learning, all current employees should receive some basic AI awareness training. I strongly recommend that every executive get something moving in this direction: set up a few basic training sessions to help everyone understand what AI is, how to use it and how not to use it, and to teach them about the biases and how to counter them. I know that without a clear ROI it can be hard to justify such investments; yet my vision is a PV field better prepared for the coming AI transition.

Today, we can also support our teams by involving them in AI transitions: they can provide critical insights, help implement the technology, and be the human in the loop. Practice will be key to making safety scientists more comfortable with AI systems, supplemented by workshops and online training courses from relevant trainers.

In the end, all of this will fail without a culture shift. To many scientists, the changes above will feel like a burden, so it is important to spend the time to develop and deploy them in a smart way: if the technology is bad, it will not work out anyway; but even if the solution is great, many will not want to change the status quo. I am a big fan of Simon Sinek, an inspiring person who understands how to lead global culture change in companies. Using the law of diffusion, he illustrates the need to target the "early adopters", your AI champions. They are the ones who will help you change the culture in your company, welcome the technological change, and spread interest in this new "thing". I invite you to listen to this 7-minute video from Simon; trust me, it is worth every minute and will help you understand how best to support this change: How to Create Change | Simon Sinek – YouTube

In summary, every foundational technology such as AI challenges existing norms, making it essential to anticipate change and adapt accordingly. Executives play a key role in guiding these transformations by embracing new technologies and supporting shifts in company culture. While it is impossible to predict precisely how AI will shape PV, it is clear that individuals must develop new skills, and leaders should encourage them by highlighting the benefits of adding new abilities to their skillset.


About UBC
United BioSource LLC (UBC) is the leading provider of evidence development solutions with expertise in uniting evidence and access. UBC helps biopharma mitigate risk, address product hurdles, and demonstrate safety, efficacy, and value under real-world conditions. UBC leads the market in providing integrated, comprehensive clinical, safety, and commercialization services and is uniquely positioned to seamlessly integrate best-in-class services throughout the lifecycle of a product.

About the Author

Christopher Henry, PhD, serves as a Safety Scientist on UBC’s Global Safety Writing Team. With a strong biomedical and AI background, Mr. Henry has brought his unique skillset to UBC’s pharmacovigilance team. He is responsible for authoring periodic safety reports such as DSURs, PBRERs, and PADERs, as well as conducting other signal management activities for pharmaceutical products that are still in development and products that are already marketed. Mr. Henry has been leading the development and implementation of artificial intelligence across UBC’s comprehensive pharmacovigilance services. He holds a PhD in Cell Physiology as well as a Master’s in Health Biology, Genetics, Epigenetics, & Cell Fate Control. He has worked at UBC for the past 3 years.
