Correct inference from systematic reviews of RCTs

By Jess Williams | March 9, 2017

To gauge the effects of medical interventions, we often use meta-analysis to combine the results of randomized controlled trials (RCTs). RCTs commonly use odds ratios (ORs) to measure the effect of a given intervention on the frequency of events. Conventional methods of estimating overall ORs suffer from a number of problems. Drs. Chang and Hoaglin describe these failings and suggest alternative estimation methods in a new Medical Care article. Making things even easier for researchers, their handy online appendix [Word file] provides code for Stata, SAS, and R, along with a description of the options needed to obtain similar results across those programs (some of the default settings differ).

For those unfamiliar with odds (a staple of gambling), an OR is the ratio of the odds of an outcome given a certain exposure to the odds of that outcome without that exposure. Ready for the Triple Crown yet? In clinical RCTs, the outcome is usually health-related, such as death, and the exposures are the interventions being tested.
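
As a quick numeric sketch (the counts here are invented for illustration), suppose 10 of 100 treated patients and 20 of 100 controls die:

```r
# Hypothetical counts, invented for illustration
odds_treated <- 10 / (100 - 10)   # odds of death with the intervention: 10/90
odds_control <- 20 / (100 - 20)   # odds of death without it: 20/80
odds_treated / odds_control       # OR ~ 0.44: lower odds of death under treatment
```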

One of the central features of meta-analysis of RCTs is the use of statistical techniques to combine the results of multiple studies into a point estimate of the overall effect and, usually, a 95% confidence interval. Conventional methods of estimating the overall effect, such as the DerSimonian-Laird and Hartung-Knapp-Sidik-Jonkman (HKSJ) methods, rest on assumptions that are not always true. The most common is treating each study's estimated within-study variance as if it were known exactly.
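
To make the conventional approach concrete, here is a minimal sketch in R using the metafor package; the trial counts are made up, and this is not the code from the article's appendix:

```r
# Conventional two-step meta-analysis: compute each study's log(OR) and
# its estimated variance, then pool with DerSimonian-Laird, which treats
# those variances as known.
library(metafor)

dat <- data.frame(ai = c(12, 8, 30), n1i = c(120, 90, 300),   # events / n, treatment
                  ci = c(20, 15, 45), n2i = c(115, 95, 290))  # events / n, control

dat <- escalc(measure = "OR", ai = ai, n1i = n1i, ci = ci, n2i = n2i, data = dat)
res <- rma(yi, vi, data = dat, method = "DL")
summary(res)   # pooled log(OR), 95% CI, and between-study variance tau^2
```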

The alternative methods use the number of events and the number of people in each group of every study, avoiding the problems that come with using each study's log-odds-ratio, log(OR), and its estimated variance. In the fixed-effects case, the alternative uses a logit transformation to connect the binomial probabilities of the events to the study-specific log-odds in the control group and the overall log(OR). For the random-effects case, the authors suggest a multilevel model, based on a paper by Dr. Turner and colleagues, with within-study variability and between-study variability as the two levels. When the effect is measured as the log(OR), this model is often called a mixed-effects logistic regression model. Another alternative is a Bayesian analysis of essentially the same model, but that requires greater statistical knowledge and specialized software.
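
Here is a minimal sketch of the random-effects alternative, again with made-up counts. metafor's rma.glmm() fits a mixed-effects logistic model to the binomial counts directly, so within-study variances are not treated as known; the "UM.RS" specification (random study effects) is one plausible choice, not necessarily the authors' exact model:

```r
# Random-effects alternative: model the binomial event counts directly
# via a logit link instead of pooling estimated log(OR)s.
library(metafor)

dat <- data.frame(ai = c(12, 8, 30), n1i = c(120, 90, 300),   # events / n, treatment
                  ci = c(20, 15, 45), n2i = c(115, 95, 290))  # events / n, control

res <- rma.glmm(measure = "OR", ai = ai, n1i = n1i, ci = ci, n2i = n2i,
                data = dat, model = "UM.RS")
summary(res)    # overall log(OR) and between-study variance tau^2
exp(coef(res))  # back-transform to the odds-ratio scale
```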

Not surprisingly, the results of the example meta-analysis differed depending on whether a conventional or an alternative method was used. For the alternative methods, the software packages agreed closely, with differences only in the third decimal place.

So, armed with this knowledge and the alternative estimation methods, what should you do when planning your meta-analysis? Chang and Hoaglin describe several steps, which I've paraphrased below:

  1. Think! Software is never a good substitute for thought. There are many subtle complications in meta-analysis, and consulting an expert may be advisable.
  2. If you have the number of events and the group sample sizes for each study in the meta-analysis, use the alternative approaches described in the article.
  3. If you have only study-level summaries, such as ORs, and therefore must use conventional meta-analysis techniques, try to account for sampling variation in your estimate of the between-study variance (see the sketch after this list).
  4. Heterogeneity across studies related to design and other characteristics is common; use random-effects models unless you have a good reason to use a fixed-effects model (for example, you are confident that the studies' results differ only because of sampling variation around a single overall effect).
  5. Document your methods so other researchers can reproduce your results. Can’t stress this one enough.
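
For step 3, one common option is the Knapp-Hartung adjustment, which metafor exposes as test = "knha"; here is a sketch with the same made-up counts as above:

```r
# Knapp-Hartung adjustment: widens the confidence interval to reflect
# uncertainty in the estimated between-study variance.
library(metafor)

dat <- data.frame(ai = c(12, 8, 30), n1i = c(120, 90, 300),
                  ci = c(20, 15, 45), n2i = c(115, 95, 290))
dat <- escalc(measure = "OR", ai = ai, n1i = n1i, ci = ci, n2i = n2i, data = dat)

res <- rma(yi, vi, data = dat, method = "REML", test = "knha")
summary(res)   # CI now based on a t-distribution rather than the normal
```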

Happy meta-analyzing!


Category: All, Clinical trials, Methods

About Jess Williams

Jessica A. Williams, PhD, MA is an Associate Professor of Health Policy and Administration at The Pennsylvania State University. Dr. Williams has been a member of the editorial board since 2013. Her research examines how workplace psychosocial factors affect the health and well-being of employees. Specifically, she investigates the role of pain in work disability and well-being. In addition, she researches the utilization of preventive medical services. She holds a Doctorate in Health Policy and Management from the UCLA Fielding School of Public Health, a Master's in Economics from the University of Michigan, Ann Arbor, and a BA in economics from Stanford University.