CBER BEST Seminar Series

The CBER BEST Initiative Seminar Series is designed to share and discuss recent research relevant to ongoing and future surveillance activities for CBER-regulated products, namely biologics. The series focuses on the safety and effectiveness of biologics, including vaccines, blood components, blood-derived products, tissues, and advanced therapies. The seminars will provide information on the characteristics of biologics, required infrastructure, study designs, and analytic methods used for pharmacovigilance and pharmacoepidemiologic studies of biologics. They will also cover potential data sources, informatics challenges and requirements, the use of real-world data and evidence, and risk-benefit analysis for biologic products. The length of each session may vary, and presenters will be invited from outside FDA.

Below you will find details of upcoming CBER BEST seminars, including virtual links that will be open to anyone who wishes to attend. Speakers who consent to being recorded will also have their presentations included on this page; you can find those sessions below the list of upcoming speakers.

Upcoming Seminars

More information will be posted when available.

Previous Seminars

Topic: Statistical methods for improving post-licensure vaccine safety surveillance

Presenter: Jennifer Clark Nelson, PhD, Director of Biostatistics & Senior Investigator, Biostatistics Division, Kaiser Permanente Washington Health Research Institute.

Slide Deck / Video Presentation Below

Abstract: Improving statistical methods for post-licensure vaccine safety surveillance is critical for safeguarding public health and maintaining public trust in vaccination programs. This is especially important during pandemics like COVID-19 when vaccines are administered on a global scale at unprecedented speed. Many national vaccine safety surveillance efforts use electronic health records and insurance claims data from large multi-site health care data networks.  I will summarize challenges that can arise when using these secondary data sources to conduct safety studies. I will also discuss statistical approaches designed to better detect rare adverse events in these settings. These include 1) adapting sequential methods from clinical trials to this observational database setting in order to ensure more rapid detection and 2) using natural language processing of clinical notes in combination with machine learning methods to improve the accuracy with which vaccine safety outcomes are identified. I will illustrate methods using example safety questions that have arisen within FDA’s Sentinel Initiative and the CDC’s Vaccine Safety Datalink monitoring systems.
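The abstract does not specify which sequential test is used; one method commonly applied in this setting is the Poisson maximized sequential probability ratio test (maxSPRT). Below is a minimal sketch, assuming hypothetical weekly counts and a placeholder critical value (in practice the critical value is derived from the planned surveillance length and desired type 1 error); the function name and data are illustrative only.

```python
import math

def poisson_maxsprt_llr(observed: int, expected: float) -> float:
    """Log-likelihood ratio statistic for the Poisson maxSPRT.

    `observed` is the cumulative number of adverse events seen so far;
    `expected` is the cumulative count expected under the null (e.g. from
    historical background rates). The LLR is zero unless the observed
    count exceeds expectation, because the alternative is one-sided (RR > 1).
    """
    if expected <= 0 or observed <= expected:
        return 0.0
    return expected - observed + observed * math.log(observed / expected)

# Toy sequential monitoring loop: signal when the LLR crosses a critical value.
CRITICAL_VALUE = 3.0  # placeholder only, not an actual maxSPRT threshold
cum_obs, cum_exp = 0, 0.0
for obs, exp in [(1, 0.4), (2, 0.5), (4, 0.6)]:  # hypothetical weekly data
    cum_obs += obs
    cum_exp += exp
    llr = poisson_maxsprt_llr(cum_obs, cum_exp)
    print(f"obs={cum_obs}, exp={cum_exp:.1f}, LLR={llr:.2f}",
          "SIGNAL" if llr >= CRITICAL_VALUE else "")
```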

Bio: Jennifer Nelson is Director of the Biostatistics Division at Kaiser Permanente Washington Health Research Institute and an Affiliate Professor of Biostatistics at the University of Washington. Her research focuses on methods to quantify post-market safety and effectiveness for vaccines and drugs, with an emphasis on addressing statistical challenges of using electronic health record data from large health care systems. Since 2009, she has provided strategic direction for the U.S. Food and Drug Administration’s (FDA’s) Sentinel Network, which facilitates rapid safety surveillance for FDA-regulated medical products. Over the past two decades, she has also led statistical advancements for the Centers for Disease Control and Prevention (CDC)-sponsored Vaccine Safety Datalink (VSD) project, a national collaboration that has monitored vaccine safety in the U.S. since 1990. From 2021 to 2023 she was an invited member of the CDC’s Advisory Committee on Immunization Practices (ACIP) COVID-19 Vaccine Safety Technical Work Group, composed of national medical and public health experts who informed recommendations on the use of COVID-19 vaccines in the U.S. She has also served as an Associate Editor for the journal Vaccine since 2018. Her 2013 paper that adapted group sequential monitoring methods to a real-world observational vaccine safety data setting was one of the American Journal of Epidemiology’s Articles of the Year. She received her PhD in Biostatistics at the University of Washington in 1999, earned the VSD’s Margarette Kolczak Award for outstanding biostatistical contributions in the field of vaccine safety in 2009, and is an elected Fellow of the American Statistical Association.

Topic: Observational methods for COVID-19 vaccine effectiveness research: an empirical evaluation and target trial emulation

Presenter: Martí Català Sabaté, Medical Statistician and Data Scientist, University of Oxford 

Slide Deck/Full presentation is below

Abstract: There are scarce data on best practices to control for confounding in observational studies assessing vaccine effectiveness in preventing COVID-19. We compared the performance of three well-established methods [overlap weighting, inverse probability of treatment weighting, and propensity score (PS) matching] to minimize confounding when comparing vaccinated and unvaccinated people. Subsequently, we conducted a target trial emulation to study the ability of these methods to replicate COVID-19 vaccine trials.
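The three adjustment approaches named in the abstract are all built from an estimated propensity score. A minimal sketch of the standard weight constructions is shown below; the simulated scores, the greedy caliper-matching routine, and all function names are illustrative assumptions, not the implementation used in the study.

```python
import numpy as np

def iptw_weights(ps, treated):
    """Inverse probability of treatment weights: 1/e for treated, 1/(1-e) for controls."""
    return np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))

def overlap_weights(ps, treated):
    """Overlap weights: 1-e for treated, e for controls; down-weights extreme scores."""
    return np.where(treated == 1, 1.0 - ps, ps)

def greedy_ps_match(ps, treated, caliper=0.05):
    """Greedy 1:1 nearest-neighbor propensity score matching within a caliper.
    Returns (treated_index, control_index) pairs. Illustrative only."""
    treated_idx = np.where(treated == 1)[0]
    control_idx = list(np.where(treated == 0)[0])
    pairs = []
    for t in treated_idx:
        if not control_idx:
            break
        dists = np.abs(ps[control_idx] - ps[t])
        j = int(np.argmin(dists))
        if dists[j] <= caliper:
            pairs.append((t, control_idx.pop(j)))
    return pairs

# Hypothetical propensity scores (e.g. from a logistic model of vaccination)
rng = np.random.default_rng(0)
treated = rng.integers(0, 2, size=200)
ps = np.clip(rng.beta(2, 2, size=200) + 0.1 * treated, 0.01, 0.99)
print(iptw_weights(ps, treated)[:5])
print(overlap_weights(ps, treated)[:5])
print("matched pairs:", len(greedy_ps_match(ps, treated)))
```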

Bio: Dr. Martí Català Sabaté is a senior data scientist using routinely collected health data to generate reliable evidence for improved patient care. He co-leads the Oxinfer group in the Health Data Sciences (HDS) section of the University of Oxford. His research focuses on generating evidence on the safety, effectiveness, and cost-effectiveness of medicines and procedures, and he develops R packages and code to curate and analyze data from millions of routinely recorded health care interactions. Dr. Català Sabaté and his team use a common data model developed by the OHDSI community to transform disparate sources of healthcare data into a standard format. Data partners worldwide transform their data into this format, enabling cross-site analysis; only analytic code and aggregated results are shared between sites, which allows data to be combined safely and produces answers to research questions that are more generalizable than those from single datasets.

Dr. Català Sabaté also contributes to the DARWIN EU project as a programmer and study leader. This European Medicines Agency project will deliver real-world evidence from across Europe on diseases, populations, and the use and performance of medicines. Before joining HDS, he completed his PhD in Computational and Applied Physics at the Universitat Politècnica de Catalunya in Barcelona, where he built computational models to understand the natural history of tuberculosis and to predict the evolution of the COVID-19 pandemic; the latter were used by the Catalan authorities and the European Commission (DG-Connect).

Topic: A modified self-controlled case series method for event-dependent exposures and high event-related mortality, with application to COVID-19 vaccine safety

Presenter: Yonas Ghebremichael-Weldeselassie, Lecturer in Statistics, School of Mathematics and Statistics, The Open University, UK

Slide Deck/Full presentation is below

Abstract: We propose a modified self-controlled case series (SCCS) method to handle both event-dependent exposures and high event-related mortality. This development is motivated by an epidemiological study undertaken in France to quantify potential risks of cardiovascular events associated with COVID-19 vaccines. Event-dependence of vaccinations, and high event-related mortality, are likely to arise in other SCCS studies of COVID-19 vaccine safety. Using this case study and simulations to broaden its scope, we explore these features and the biases they may generate, implement the modified SCCS model, illustrate some of the properties of this model, and develop a new test for the presence of a dose effect. The model we propose has wider application, notably when the event of interest is death.
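For orientation, here is a minimal sketch of the standard (unmodified) SCCS conditional likelihood under simplifying assumptions: one risk window per case, a constant baseline incidence, and simulated data. The modifications for event-dependent exposures and event-related mortality described in the abstract are not implemented here; the function names and parameters are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def sccs_neg_loglik(log_ri, exposed_time, unexposed_time, event_in_exposed):
    """Negative conditional log-likelihood of the standard SCCS model with a
    single risk window per case and a constant baseline incidence.
    `event_in_exposed` is 1 if the case's event fell inside the risk window."""
    ri = np.exp(log_ri)
    denom = ri * exposed_time + unexposed_time
    num = np.where(event_in_exposed == 1, ri * exposed_time, unexposed_time)
    return -np.sum(np.log(num / denom))

# Hypothetical data: 500 cases, 28-day risk window within a 365-day observation period
rng = np.random.default_rng(1)
exposed_time = np.full(500, 28.0)
unexposed_time = np.full(500, 365.0 - 28.0)
true_ri = 2.0
p_exposed = true_ri * exposed_time / (true_ri * exposed_time + unexposed_time)
event_in_exposed = rng.binomial(1, p_exposed)

fit = minimize_scalar(sccs_neg_loglik,
                      args=(exposed_time, unexposed_time, event_in_exposed))
print("Estimated relative incidence:", np.exp(fit.x))
```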

Bio: Yonas Weldeselassie is a Lecturer in Statistics at the School of Mathematics and Statistics, The Open University, UK. He graduated in statistics and demography from the University of Asmara, Eritrea, and went on to become an assistant lecturer at Mekelle University, Ethiopia, and then a Senior Research Fellow in Medical Statistics at Warwick Medical School, Division of Population Evidence and Technologies. He earned an MSc in Biostatistics from Hasselt University, Belgium, and a PhD in Statistics from The Open University, UK. After working since 2014 as a research associate on the MRC project ‘Software tools and online resources for the self-controlled case series method and its extensions’ in the Department of Mathematics and Statistics at The Open University, he joined Warwick Medical School in June 2017. His main research interest is in medical statistics, especially the methodological development and application of the self-controlled case series (SCCS) method. He published a book on the SCCS method with Paddy Farrington and Heather Whitaker, and he is currently working on early prediction of gestational diabetes mellitus.

Topic: Applying Machine Learning in Distributed Networks to Support Activities for Post-Market Surveillance of Medical Products: Opportunities, Challenges, and Considerations 

Presenter: Jenna Wong, Assistant Professor in the Department of Population Medicine at Harvard Medical School and Harvard Pilgrim Health Care Institute 

Slide Deck / Full presentation is posted below

Abstract: Access to larger amounts and more complex types of electronic health data in distributed data networks like the US Food and Drug Administration’s Sentinel System and the Observational Health Data Sciences and Informatics (OHDSI) Program has created growing interest in using more flexible machine learning techniques to enhance their various activities. However, the siloed storage and diverse nature of the databases in these distributed networks create unique challenges. This presentation will discuss various opportunities for using flexible machine learning techniques to enhance the activities of distributed networks that monitor the post-market safety and effectiveness of medical products. It will also describe unique challenges that these networks face when applying such methods, discuss approaches and considerations for addressing these challenges, and provide examples of projects and efforts that the Sentinel System and OHDSI Program have undertaken in these areas. These efforts underscore the important role that machine learning will likely play in advancing the capabilities of distributed networks for post-market surveillance in the years to come.

Bio: Dr. Jenna Wong, PhD, is an Assistant Professor in the Department of Population Medicine at Harvard Medical School and Harvard Pilgrim Health Care Institute. She received her MSc in Epidemiology from the University of Ottawa and her PhD in Epidemiology from McGill University. She also previously worked at the Institute for Clinical Evaluative Sciences in Ontario, Canada, conducting population-based studies using data from linked provincial administrative healthcare databases. Her research focuses on generating evidence to inform the appropriate and safe use of marketed medications in real-world settings, particularly when they are used beyond their labeled indications in the absence of supporting data. She has extensive experience working with large electronic healthcare databases, including administrative claims and electronic health record data. Another major focus of her research is leveraging machine learning to improve tasks such as risk adjustment, computable phenotyping, imputation of missing data, and extraction of information from more complex data types, to enhance the utility of real-world data in pharmacoepidemiologic research.

Topic: Reliability in Observational Research: Assessing Covariate Imbalance in Small Studies 

Presenter: George Hripcsak, Vivian Beaumont Allen Professor of Biomedical Informatics, Columbia University

Slide Deck / Full presentation is posted below

Abstract: One of the major challenges facing observational research is the risk of producing a biased estimate due to confounding. Propensity score adjustment, invented 40 years ago, addresses confounding by balancing covariates across treatment groups through matching, stratification, inverse probability weighting, etc. Diagnostics ensure that the adjustment has been effective. A common technique is to check whether the standardized mean difference for each relevant covariate is less than a threshold such as 0.1. For small sample sizes, the probability of falsely rejecting a study because of chance imbalance, when no underlying imbalance exists, approaches 1. We propose an alternative diagnostic that checks whether the standardized mean difference statistically significantly exceeds the threshold. With simulation and real-world data, we find that this diagnostic achieves a better trade-off of type 1 error rate and power than either nominal threshold tests or not testing, for sample sizes of 250 to 4,000 and for 20 to 100,000 covariates. In network studies, meta-analysis of effect estimates must be accompanied by meta-analysis of the diagnostics, or else systematic confounding may overwhelm the estimated effect. Our procedure for statistically testing balance at both the database level and the meta-analysis level achieves the best balance of type 1 error rate and power. Our approach supports the review of large numbers of covariates, enabling a more rigorous diagnostic process.
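A minimal sketch of the two diagnostics contrasted in the abstract: the conventional check against a nominal 0.1 threshold, and a significance-based check of whether the standardized mean difference (SMD) exceeds that threshold. The large-sample standard-error approximation and the one-sided test form used here are illustrative assumptions, not the authors’ exact procedure.

```python
import numpy as np
from scipy.stats import norm

def standardized_mean_difference(x_treated, x_control):
    """Absolute standardized mean difference with the pooled-variance denominator."""
    s_pooled = np.sqrt((np.var(x_treated, ddof=1) + np.var(x_control, ddof=1)) / 2.0)
    return abs(np.mean(x_treated) - np.mean(x_control)) / s_pooled

def smd_significantly_exceeds(x_treated, x_control, threshold=0.1, alpha=0.05):
    """Flag a covariate only if its SMD *significantly* exceeds the threshold,
    using a common large-sample approximation to the SMD standard error.
    One way to operationalize the idea in the abstract, for illustration only."""
    n1, n0 = len(x_treated), len(x_control)
    d = standardized_mean_difference(x_treated, x_control)
    se = np.sqrt((n1 + n0) / (n1 * n0) + d**2 / (2.0 * (n1 + n0)))
    return (d - threshold) / se > norm.ppf(1.0 - alpha)

# Hypothetical small-sample covariate with no true imbalance: the nominal rule
# (SMD > 0.1) may flag it by chance, while the significance-based rule requires
# evidence that the imbalance is real.
rng = np.random.default_rng(2)
x_t, x_c = rng.normal(0, 1, 150), rng.normal(0, 1, 150)
print("nominal rule flags:", standardized_mean_difference(x_t, x_c) > 0.1)
print("significance rule flags:", smd_significantly_exceeds(x_t, x_c))
```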

Bio: Dr. George Hripcsak, MD, MS, is Vivian Beaumont Allen Professor at Columbia University’s Department of Biomedical Informatics. He is a board-certified internist with degrees in chemistry, medicine, and biostatistics. Dr. Hripcsak is interested in the clinical information stored in electronic health records and its use in improving health care. Health record data are sparse, irregularly sampled, complex, and biased. Using causal inference, nonlinear time series analysis, machine learning, knowledge engineering, and natural language processing, he is developing the methods necessary to produce reliable medical evidence from the data. Dr. Hripcsak leads the Observational Health Data Sciences and Informatics (OHDSI) coordinating center; OHDSI is an international network with thousands of collaborators from 83 countries and health records on almost one billion unique patients. He co-chaired the Meaningful Use Workgroup of the U.S. Department of Health and Human Services’ Office of the National Coordinator for Health Information Technology. Dr. Hripcsak is a member of the National Academy of Medicine, the American College of Medical Informatics, the International Academy of Health Sciences Informatics, and the New York Academy of Medicine. He was awarded the highest honor of the American College of Medical Informatics, the 2022 Morris F. Collen Award of Excellence. He has over 500 publications.

Topic: Real-World Effectiveness of BNT162b2 Against Infection and Severe Diseases in Children and Adolescents: causal inference under misclassification in treatment status

Presenter: Dr. Yong Chen, Professor & Director of the Center for Health AI and Synthesis of Evidence (CHASE) at the University of Pennsylvania

Abstract: The current understanding of the long-term effectiveness of the BNT162b2 vaccine across diverse U.S. pediatric populations is limited. We assessed the effectiveness of BNT162b2 against various strains of the SARS-CoV-2 virus using data from a national collaboration of pediatric health systems (PEDSnet). We emulated three target trials to assess the real-world effectiveness of BNT162b2 during the Delta and Omicron variant periods. In the U.S., immunization records are often captured and stored across multiple disconnected sources, resulting in incomplete vaccination records in patients’ electronic health records (EHRs). We implemented a novel trial emulation pipeline accounting for possible misclassification bias in vaccine documentation in EHRs. The effectiveness of the BNT162b2 vaccine was estimated from a Poisson regression model with confounders balanced via propensity score stratification. This study suggests that BNT162b2 was effective among children and adolescents during the Delta and Omicron periods for a range of COVID-19-related outcomes and was associated with a lower risk of cardiac complications.
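A minimal sketch of the general estimation approach named above: propensity score quintile stratification with stratum indicators in a Poisson rate model, from which vaccine effectiveness is read off as one minus the rate ratio. The simulated data, stratification choices, and variable names are illustrative assumptions, not those of the PEDSnet study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical cohort: vaccination indicator, follow-up time, outcome count,
# and an estimated propensity score (e.g. from a logistic model of vaccination).
rng = np.random.default_rng(3)
n = 5000
ps = rng.uniform(0.1, 0.9, n)
vaccinated = rng.binomial(1, ps)
followup_days = rng.integers(30, 180, n).astype(float)
base_rate = 0.002 * np.exp(1.0 * (ps - 0.5))  # confounding operating through the PS
events = rng.poisson(base_rate * followup_days * np.where(vaccinated == 1, 0.4, 1.0))

# Propensity score stratification: PS quintiles entered as stratum fixed effects.
strata = pd.qcut(ps, 5, labels=False)
X = pd.get_dummies(pd.DataFrame({"vaccinated": vaccinated, "stratum": strata}),
                   columns=["stratum"], drop_first=True).astype(float)
X = sm.add_constant(X)

# Poisson rate model with log person-time offset.
fit = sm.GLM(events, X, family=sm.families.Poisson(),
             offset=np.log(followup_days)).fit()
rate_ratio = np.exp(fit.params["vaccinated"])
print("Estimated rate ratio:", rate_ratio, " VE approx:", 1 - rate_ratio)
```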

Bio: Dr. Yong Chen is a tenured Professor of Biostatistics and the Founding Director of the Center for Health AI and Synthesis of Evidence (CHASE) at the University of Pennsylvania. He is an elected fellow of the American Statistical Association, the International Statistical Institute, the Society for Research Synthesis Methodology, the American College of Medical Informatics, and the American Medical Informatics Association. He founded the Penn Computing, Inference and Learning (PennCIL) lab at the University of Pennsylvania, focusing on clinical evidence generation and evidence synthesis using clinical and real-world data. During the COVID-19 pandemic, Dr. Chen has served as biostatistics core director for a national multi-center study on Post-Acute Sequelae of SARS-CoV-2 infection (PASC), involving more than 9 million pediatric patients across 40 health systems.

Video

Topic: KEEPER: Standardized structured data from electronic health records as an alternative to chart review for case adjudication and phenotype evaluation

Presenter: Anna Ostropolets, Director, Head of Innovation Lab, Odysseus Data Services

Topic: Use of Linked Databases in Pharmacoepidemiology: Considerations for Potential Selection Bias

Presenter: Jenny Sun, Pfizer

Topic: Avoidable and bias-inflicting methodological pitfalls in real-world studies of medication safety and effectiveness 

Presenter: Katsiaryna Bykov, Harvard Medical School

Topic: Leveraging real-world data for better health in Europe through collaborations between regulators & academia 

Presenters: Xintong Li and Daniel Prieto-Alhambra, University of Oxford, NDORMS

Topic: Quantifying bias due to disease and exposure misclassification in studies of vaccine effectiveness

Presenter: Kaatje Bollaerts, P-95

Topic: Negative outcome controls and p-value calibration in RWE generation

Presenter: Martijn Schuemie, Janssen R&D

Topic: Bayesian Safety Surveillance with Adaptive Bias Correction

Presenter: Fan Bu, UCLA

Topic: Bayesian Adaptive Validation Design for Vaccine Surveillance

Presenters: Timothy Lash and Lindsay Collin, Emory


Topic: Everything keeps changing: What COVID-19 taught us about surveillance

Presenter: Marc Lipsitch, Harvard

Topic: Quantitative Bias Analysis Methods to Improve Inferences

Presenter: Matthew Fox, BUSPH

Topic: Addressing Selection and Confounding Bias in Test-Negative Study Designs for Flu and COVID-19 Monitoring 

Presenter: Eric Tchetgen Tchetgen, University of Pennsylvania

Topic: Evaluating Use of Methods for Adverse Events Under Surveillance For Vaccines

Presenter: Nicole Pratt, University of South Australia

Topic: Vaccine safety evaluation using the self-controlled case series method 

Presenter: Heather Whitaker, Open University

Topic: Exploring Vaccine Safety Datalink COVID vaccine rapid cycle analysis (RCA) methods 

Presenter: Nicola Klein, Kaiser Permanente

Topic: COVID-19 pharmacoepidemiology in Canada 

Presenter: Robert Platt, McGill University

Topic: Statistical learning with electronic health records data

Presenter: Jessica Gronsbell, University of Toronto

Topic: Methods for Monitoring the Safety and Effectiveness of COVID-19 vaccines 

Presenter: Bruce Fireman, Kaiser Permanente

At the presenter’s request, this session was not recorded.

Topic: Understanding Informed Presence Bias in EHR Data 

Presenter: Ben Goldstein, Duke

Topic: Vaccine safety surveillance systems for routine and pandemic immunization programs

Presenter: Daniel Salmon, Johns Hopkins
