

External Controls in Research

UBC, a leading provider of pharmaceutical support services, convened a virtual gathering of experts for an invigorating discussion on the use of external comparators in clinical research and post-marketing programs. This discussion led to the creation of an External Controls in Research Series, released in segments, in which our experts respond to key questions relating to the what, why, and how of the use of external comparators.

The discussion brought together a group of experienced biopharmaceutical development professionals, including epidemiologists, statisticians, and data scientists, to explore essential considerations in generating evidence on the safety and efficacy/effectiveness of treatments using external comparator groups in the study design.

In this External Controls in Research Series, we will be exploring various topics such as the historical decisions of regulators in accepting results from studies using external comparators; what factors should be considered when selecting optimal data source(s) to identify patients for the comparator group; and the future of external comparators in drug development. Comparing outcomes of patients in an appropriate external comparator cohort with outcomes of the study population allows researchers to accelerate evidence generation and achieve a range of clinical development objectives.

What is an external comparator and how are these being used in research?


Aaron Berger, Executive Director, Late Stage & Real World Evidence

Let us first start with a review of definitions to establish context for the rest of our conversation.

For our discussion today we are going to use the term “external comparator”; however, you will also hear this concept referred to as an “external control” or “synthetic control.” An external comparator is a cohort of patients, often assembled from real-world data sources outside of a prospective investigational study (such as a randomized, controlled trial), that is used as a comparison for a cohort of patients who participated in an investigational study. The external comparator cohort is designed to mimic as closely as possible the characteristics of the patient cohort from the investigational study to which it is being compared. Furthermore, because these external patients may come from real-world data, they typically are treated with standard-of-care therapies in usual clinical practice. The term “study population” will be used to refer to patients who have received the therapy under investigation.

In understanding why an external comparator might be selected instead of a traditional comparator such as a placebo control or standard of care within a clinical trial, it is important to consider the limitations and challenges that can be associated with traditional comparators. For example, in life-threatening illnesses with limited or no treatment options, it may be considered unethical to conduct a study with a placebo control arm. In rare diseases, small patient populations and reluctance to enroll in randomized studies can make it difficult to achieve adequate enrollment and sufficient statistical power in studies designed with traditional controls. For some rare diseases, patients may be eager to enroll in a study of a new investigational product, thereby limiting the number of patients available for a comparator arm. In this situation, the patients remaining in a comparator arm may differ substantially in demographic characteristics and disease presentation, thereby introducing potential bias.

Furthermore, the digitization of healthcare data has created a rich landscape of high-quality healthcare databases. Taken in combination with recent technological advancements in data interoperability, we can now assemble highly customized external comparator data sets that contain the healthcare experience of patients with characteristics that closely replicate those of the study populations. These customized comparator datasets provide an efficient approach in clinical research settings that would have previously only used a traditional comparator.

In this second segment of our External Controls in Research Series, a virtual gathering of experts on the what, why, and how of external comparators in clinical research and post-marketing, we explore some examples of how external controls have been used successfully for regulatory submissions.

What are some examples of regulators accepting data generated from studies using external comparators?


Erika Kirsch, MLIS, Research Services Librarian

According to Hatswell et al. (2016), between 1999 and 2014, the EMA approved 44 indications without randomized controlled trial results and the FDA approved 60 indications. Single-arm trials incorporating external control arms as the comparison group are becoming more commonly accepted evidence in both regulatory approvals and health technology assessments (HTA).

An illustrative example is a recently conducted single-arm trial of Blincyto® (blinatumomab, Amgen Inc.) that used an external comparator to demonstrate standard-of-care outcomes in patients with Philadelphia chromosome (Ph)-positive leukemia (Rambaldi 2020). While the originally approved indication in 2014 was for Ph-negative precursor B-cell relapsed or refractory acute lymphoblastic leukemia (ALL), the 2018 accelerated FDA approval and 2019 EMA approval expanded the label to include the Ph-positive variant as well (Pulte 2018). The external historical control group was derived from medical chart reviews of patients who received standard of care, and a weighted analysis of patient-level data was used to establish effectiveness.

Bavencio® (avelumab, Merck KGaA and Pfizer Inc), an IgG1 anti-PD-L1 antibody, received accelerated FDA and conditional EMA approval after comparison to historical controls for second-line treatment of platinum-refractory locally advanced or metastatic urothelial carcinoma (mUC) (Baumfeld Andre 2019). In this example, the external controls were taken from a combination of an electronic healthcare database and a German cancer registry to establish the natural history of mUC.

We see drug developers using these approaches to accelerate research as detailed in the interview question above. Below are some select examples of recent FDA/EMA approvals which used external controls in the design of the studies provided as evidence:

| Drug | Regulatory Agency | Approval Date | Indication | Data Source(s) Used |
|---|---|---|---|---|
| BAVENCIO (avelumab) | EMA, FDA | 2017 | Merkel Cell Carcinoma (MCC); Urothelial Carcinoma (UC); Renal Cell Carcinoma (RCC) | McKesson iKnowMed |
| BLINCYTO (blinatumomab) | EMA, FDA | Accelerated approval: 2015 (EMA), 2014 (FDA) | Acute Lymphoblastic Leukemia (ALL) | Historical control composed from large US and European cohorts |
| KYMRIAH (tisagenlecleucel) | EMA | Accelerated approval: 2018 | Acute Lymphoblastic Leukemia (ALL); relapsed or refractory (r/r) large B-cell lymphoma | Historical control datasets (blinatumomab and salvage chemotherapy) |
| OPDIVO (nivolumab) | FDA | 2015 | Melanoma; Non-Small Cell Lung Cancer (NSCLC); Small Cell Lung Cancer (SCLC); Renal Cell Carcinoma (RCC); Classical Hodgkin Lymphoma (cHL); Squamous Cell Carcinoma of the Head and Neck (SCCHN); Urothelial Carcinoma; Colorectal Cancer; Hepatocellular Carcinoma (HCC); Esophageal Squamous Cell Carcinoma (ESCC) | Flatiron Health’s oncology database |
| VORAXAZE (glucarpidase) | FDA | 2012 | Reduction of toxic plasma methotrexate concentration in adult and pediatric patients with delayed methotrexate clearance | Open-label compassionate-use study |
| YESCARTA (axicabtagene ciloleucel) | EMA | Accelerated approval: 2018 | Relapsed or refractory (r/r) large B-cell lymphoma | Historical controls from SCHOLAR-1 |

Some common themes emerge from a review of this table:

  • Regulators are more likely to accept use of external control data when the disease is rare and/or the indication is for a disease with high unmet medical need, so it is not surprising that the majority of these approvals/label expansions are for oncology indications.
  • The external controls came from a wide variety of sources: open-label studies, historical controls from other randomized trials, and other secondary healthcare databases and chart reviews. This corroborates the general tenet of RWE described in a recent white paper from Duke-Margolis (Mahendraratnam Lederer et al, 2019) – that the data must be ‘fit for purpose’ and that there is no ‘perfect’ database to meet all external comparator needs; the right data must be chosen to address the particular research question at hand.

What are the critical factors that should be considered when selecting the optimal data source(s) for use as the external comparator?


Irene Cosmatos, MPH, Sr. Research Specialist, Database Analytics Automation

Certain criteria for evaluating external comparison groups are more critical than others, depending on the data under consideration. Available data include (1) control or placebo arms from historical or ongoing clinical trials and (2) RWD that capture the healthcare experience of a defined population. RWD includes disease and product registries (both established and prospective) and routinely collected healthcare data, such as EMR and insurance claims. Additionally, for populations that require specialized clinical data for identification (e.g., evidence of a genetic mutation), chart reviews may be needed to provide the appropriate data elements for an external control arm (ECA). An overview of critical factors to consider when selecting appropriate data source(s) to create a scientifically valid ECA is provided below.

Considerations when using historical controls from randomized trials

Ideally, external control populations would come from the control arm of randomized clinical trials (RCTs) that meet the following criteria: (1) they address a similar research question, (2) they were conducted within the past several years, and (3) they have a similar study design and data capture process. Similarities and differences between the historical controls and the clinical trial target population should be evaluated to determine if the historical controls are appropriate for the research question. Relevant factors include:

  • Disease criteria – changes in diagnostic criteria over time need to be considered.
  • Study inclusion and exclusion criteria – determine if important subgroups were excluded from the potential historical control group. Were different thresholds or cutoffs used?
  • Data collection process – frequency and timing of the capture of important variables such as exposure classification and outcome need to be considered. Sufficient length of follow-up may also be a concern.
  • Clinical outcome definitions – determine if measurements of critical clinical endpoints are adequate to use as comparisons with clinical trial endpoints – e.g., sufficient level of detail, similar coding for diagnoses. If patient reported outcomes (PROs) are important to evaluate, were these captured in a similar manner?
  • Covariate definitions – the capture of important variables required for control of confounding is important to review. Does the potential control dataset capture important covariates with the same level of granularity (e.g., concomitant medication use)?

Additionally, other factors that impact the appropriateness of a historical control database as a source for a comparator arm in a clinical trial are:

  • Clinical trial design – An ECA is most commonly needed for single-arm trials (SATs), where it may be unethical (e.g., life-threatening situations without alternative treatments) or unrealistic (e.g., rare diseases) to enroll patients into the standard of care (SOC) arm. Choosing an ECA for a SAT generally requires more rigor than when the ECA will serve as an additional comparator group for a traditional trial with existing controls.
  • Size of final population – after imposing the exclusion criteria that mimic the clinical trial protocol, how many patients still qualify for the proposed ECA? Is this a sufficient sample size to test the study question, based on the study design?
  • Standard of care – biases in the types and patterns of treatment for the study indication in the historical control population need to be investigated, e.g., do the treatments represent the real-world clinical care experience of patients similar to those in the clinical trial, including dose and frequency of exposure?
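
As a rough illustration of the sample-size check above, the power of a comparison between a trial arm and a proposed ECA can be sketched with a normal-approximation two-proportion test (the 40% vs. 20% response rates and 80 patients per arm below are hypothetical values, not figures from this article):

```python
import math

def two_proportion_power(p1, p2, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test
    (normal approximation; illustrative only, not a substitute
    for a protocol-grade power analysis)."""
    z_alpha = 1.959963984540054            # z for alpha/2 = 0.025
    pbar = (p1 + p2) / 2
    se0 = math.sqrt(2 * pbar * (1 - pbar) / n_per_arm)                  # SE under H0
    se1 = math.sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
    z = (abs(p1 - p2) - z_alpha * se0) / se1
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))   # Phi(z) via erf

# Hypothetical: 40% vs. 20% response, 80 qualifying ECA patients per arm
power = two_proportion_power(0.40, 0.20, 80)
print(round(power, 2))
```

If too few external patients survive the trial's inclusion/exclusion criteria, the computed power drops quickly, which is exactly the "size of final population" concern raised above.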

Considerations when designing ECA using real-world observational data sources

  • Incomplete data ascertainment – Information in observational data sources does not originate from “protocol-driven” care. This differs from data capture in historical control databases, where patients have pre-specified points of encounter with physicians or other medical staff to provide regular follow-up. Therefore, the potential exists for incomplete capture of critical study information in observational data, including change in exposure (e.g., an SOC patient switches to or adds the target drug) or outcome identification (e.g., occurrence of adverse events), leading to misclassification during the analysis.
  • Data Quality – The quality assurance/control process of data used for RCTs, which often become part of regulatory submissions, has a very high degree of stringency. Regular oversight is maintained to ensure protocol specifications and visit schedules are met, and quality control checks are built into the data capture process. This high level of scrutiny is generally not associated with non-interventional data and could produce information bias when conducting analyses.

Finally, although patient characteristics may differ between the external data that are being evaluated as an ECA and the target population in the clinical trial, this does not necessarily eliminate the data as a source for an appropriate external control group. Through analytic adjustment methods (e.g., propensity score matching on key variables, subgroup, or sensitivity analyses) the external dataset may be adjusted to reduce or even eliminate the effect of confounding on the results. Researchers must recognize, however, that the number of covariates available for analytical adjustment will be limited by the number of relevant variables available both in the clinical trial data and in the external control database.
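
To make the matching idea concrete, here is a minimal sketch of greedy 1:1 nearest-neighbor matching on precomputed propensity scores (the patient IDs, scores, and 0.05 caliper are invented for illustration; a production analysis would estimate the scores from covariates and assess covariate balance after matching):

```python
def greedy_nn_match(trial_ps, external_ps, caliper=0.05):
    """Greedy 1:1 nearest-neighbor matching on propensity scores.
    Each external patient is used at most once; trial patients with
    no external score within the caliper remain unmatched."""
    available = dict(external_ps)          # id -> propensity score
    matches = {}
    for trial_id, score in trial_ps.items():
        if not available:
            break
        nearest = min(available, key=lambda k: abs(available[k] - score))
        if abs(available[nearest] - score) <= caliper:
            matches[trial_id] = nearest
            del available[nearest]
    return matches

# Hypothetical scores for three trial patients and four external patients
trial = {"T1": 0.31, "T2": 0.62, "T3": 0.45}
external = {"E1": 0.30, "E2": 0.60, "E3": 0.90, "E4": 0.44}
matches = greedy_nn_match(trial, external)
print(matches)
```

Note that E3 (score 0.90) is never matched: external patients too dissimilar to any trial patient are simply excluded, which is how the caliper limits residual confounding.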

What are the essential methods and technologies needed to execute a study design involving an external control?


Aaron Berger, Executive Director, Late Stage & Real World Evidence

In the previous segment of our series we considered the important factors for selecting the optimal data source(s) to identify one or more external comparator cohorts. In this segment, we will explore the methods and technologies needed for assembling fit-for-purpose external comparator datasets.

External comparator cohorts are often assembled by combining a range of disparate data sources including secondary data collected through routine healthcare delivery and billing by payers (e.g., administrative pharmacy and medical claims databases, electronic medical records (EMR), laboratory data). Additionally, pooled data from historical control arms of completed randomized clinical trials (RCTs) and registries provide a valuable source for developing an appropriate comparator cohort.  These data exist in non-uniform formats with varying degrees of completeness and quality, particularly in the case of EMR data. This dynamic presents methodological challenges that must be overcome through a deep understanding of the dataset under consideration for developing an external control arm (ECA), and may require a robust gap analysis that will serve as the foundation for a common data model (CDM) from which regulatory grade outputs can be generated.

UBC’s advanced methods specialists and technology architectures are key to enabling data integration and harmonization.  The resulting CDM forges interoperability across disparate data sets and leverages best-in-class HIPAA compliant de-identification and matching to support longitudinal patient record linkage.  These are essential ingredients for enriching the completeness and quality of external comparators.

Standardizing Data into the Common Data Model 

Assembling disparate data into a CDM to enable analysis as an external comparator requires the combined expertise of data scientists, epidemiologists, and data warehouse specialists. Incoming data are normalized and mapped to the CDM based on industry-standard models such as the Study Data Tabulation Model (SDTM) or the Observational Medical Outcomes Partnership (OMOP). The process for standardizing data sources into the CDM is depicted in the figure below.
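
A drastically simplified sketch of this normalization step is shown below, mapping hypothetical source field names to OMOP-style target columns (real SDTM/OMOP mappings involve standardized vocabularies, domains, and many more fields; the names here are illustrative only):

```python
# Hypothetical claims-file fields mapped to simplified OMOP-style columns
OMOP_LIKE_MAP = {
    "pt_id": "person_id",
    "dx_code": "condition_source_value",
    "dx_date": "condition_start_date",
}

def to_cdm(raw_record, field_map=OMOP_LIKE_MAP):
    """Normalize one raw source record into a simplified CDM row,
    flagging any source fields with no CDM target (the 'gap analysis'
    input described in the text)."""
    row, unmapped = {}, []
    for field, value in raw_record.items():
        if field in field_map:
            row[field_map[field]] = value
        else:
            unmapped.append(field)
    return row, unmapped

claims_row = {"pt_id": "A-102", "dx_code": "C91.00",
              "dx_date": "2019-04-02", "plan": "PPO"}
cdm_row, gaps = to_cdm(claims_row)
print(cdm_row, gaps)
```

The `unmapped` list is the interesting output: fields that cannot be placed in the CDM are exactly what the gap analysis must resolve before the harmonized dataset can support regulatory-grade outputs.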

Data Linkage 

The construction of an external comparator may entail the linkage of two or more datasets. This complex process requires technical expertise to facilitate highly reliable patient identity mastering across multiple data sources in a HIPAA-compliant manner that does not require the exchange of PHI. This process, often referred to as “tokenization and linkage,” requires expertise in complex domains such as Bayesian probabilistic matching, Bloom filter hashing, and compliant use of hashed PHI fields.
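
The core idea of tokenization can be sketched with a keyed hash over normalized identifiers, so that two data holders can each generate tokens locally and exchange only the tokens, never the underlying PHI. (The field choices, normalization rules, and shared key below are illustrative assumptions; production tokenization engines add per-field salting, Bloom-filter encodings for fuzzy matching, and certified de-identification.)

```python
import hashlib
import hmac

def tokenize(first, last, dob, key=b"example-shared-secret"):
    """Keyed SHA-256 hash of normalized identifiers. Both data holders
    run this locally with the same key; matching tokens imply the same
    patient without any raw PHI leaving either site."""
    normalized = "|".join(s.strip().lower() for s in (first, last, dob))
    return hmac.new(key, normalized.encode("utf-8"), hashlib.sha256).hexdigest()

# The same patient in a claims file and an EMR file yields the same token,
# despite differences in casing and stray whitespace
token_claims = tokenize("Maria", "Lopez ", "1970-01-15")
token_emr = tokenize(" maria", "lopez", "1970-01-15")
print(token_claims == token_emr)
```

Exact-match hashing like this fails on typos or name variants, which is precisely why the probabilistic and Bloom-filter techniques mentioned above are needed in practice.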

Once the optimal data are identified for the creation of the external control, researchers must apply advanced methods underpinned by a modern technology platform that is equipped to deliver fit-for-purpose datasets.  Please join us for the next segment where we will explore the important statistical analysis considerations for studies involving external control arms.

UBC epidemiologists, clinicians, and biostatisticians support the design and execution of modernized solutions that generate evidence on the safety and effectiveness of biopharmaceutical products. For more information get in touch with our team here.


REFERENCES

Baumfeld Andre E, Reynolds R, Caubel P, Azoulay L, Dreyer NA. Trial designs using real-world data: The changing landscape of the regulatory approval process. Pharmacoepidemiol Drug Saf. 2019. doi:10.1002/pds.4932

Hatswell AJ, Baio G, Berlin JA, Irs A, Freemantle N. Regulatory approval of pharmaceuticals without a randomised controlled study: analysis of EMA and FDA approvals 1999-2014. BMJ Open. 2016;6(6):e011666. doi:10.1136/bmjopen-2016-011666

Mahendraratnam Lederer N, et al. Determining Real World Data’s Fitness for Use and the Role of Reliability. Duke-Margolis Center for Health Policy. September 26, 2019. https://healthpolicy.duke.edu/publications/determining-real-world-datas-fitness-use-and-role-reliability. Accessed November 3, 2020.

Pulte ED, Vallejo J, Przepiorka D, et al. FDA Supplemental Approval: Blinatumomab for Treatment of Relapsed and Refractory Precursor B-Cell Acute Lymphoblastic Leukemia. Oncologist. 2018;23(11):1366-1371. doi:10.1634/theoncologist.2018-0179

Rambaldi A, Ribera JM, Kantarjian HM, et al. Blinatumomab compared with standard of care for the treatment of adult patients with relapsed/refractory Philadelphia chromosome-positive B-precursor acute lymphoblastic leukemia. Cancer. 2020;126(2):304-310. doi:10.1002/cncr.32558

Seeger JD, Kourney JD, Iannacone MR, et al. Methods for External Control Groups for Single Arm Trials or Long-Term Uncontrolled Extensions to Randomized Clinical Trials. doi:10.1002/pds.5141

Thorlund K, Dron L, Park JH, Mills JM, et al. Synthetic and External Controls in Clinical Trials – A Primer for Researchers. Clin Epidemiol. 2020;12:457-467.

Burcu M, Dreyer NA, Franklin JM, et al. Real-world evidence to support regulatory decision-making for medicines: Considerations for external control arms. doi:10.1002/pds.4975