Original Article
A framework for best evidence approaches can improve the transparency of systematic reviews

https://doi.org/10.1016/j.jclinepi.2012.06.001

Abstract

Objective

Systematic reviewers often use a “best evidence” approach to address the key questions, but what is meant by “best” is often unclear. The goal of this project was to create a decision framework for “best evidence” approaches to increase transparency in systematic reviews.

Study Design and Setting

The project was separated into three tasks: 1) inclusion criteria, 2) evidence prioritization strategies, and 3) evaluative approaches. This commentary focuses only on the second task. The full report is available on the Effective Healthcare Web site of the Agency for Healthcare Research and Quality.

Results

The four identified strategies were as follows: 1) Use only the single best study; 2) Use the best set of studies; 3) Same as 2, but also consider whether the evidence permits a conclusion; and 4) Same as 3, but also consider the overall strength of the evidence. Simpler strategies (such as #1) are less likely to produce false conclusions, but are also more likely to yield insufficient evidence (possibly because of imprecise data).

Conclusion

Systematic reviewers routinely prioritize evidence in numerous ways. This document provides a conceptual construct to enhance the transparency of systematic reviewers' decisions.

Introduction

What is new?

  • Systematic reviewers often use “best evidence” approaches, but reviews vary greatly in what this means.

  • We created a framework of several “best evidence” approaches.

  • This commentary discusses four strategies for prioritizing evidence.

  • The strategies vary in the risk of reaching inappropriate conclusions, the risk of inappropriately failing to reach conclusions, and their feasibility.

  • Reviewers can use this framework to maximize transparency.

Systematic reviewers often use a “best evidence” approach to address the key questions in the reviews. What is meant by “best,” however, is often unclear. The phrase “best evidence” was used by Slavin in a 1995 article as an “intelligent alternative” to a meta-analysis of all available evidence on a given clinical question [1]. This approach was designed to allow exclusion of lower-quality studies (based on a priori criteria) if enough higher-quality studies are available. The underlying concept is evidence prioritization (i.e., prioritizing some studies over others), which is used by all systematic reviews.

In this commentary, “best evidence” refers to any strategy for prioritizing evidence. It can help ensure (but cannot guarantee) that the review's conclusions will stand the test of time. However, reviewers face a variety of dilemmas regarding how to prioritize the evidence. Components such as risk of bias and applicability are themselves multifaceted, and the resulting complexity has spawned innumerable approaches for prioritizing evidence, with no organizing framework [2].

We recently authored a report that provides such a framework for defining the “best evidence”; the full report appears on the Effective Healthcare Web site of the Agency for Healthcare Research and Quality (AHRQ) [3]. Essentially, the report addresses a reviewer's decisions about lowering the evidence threshold. Why might reviewers do this? How can it be done? The report, which is not intended to be prescriptive, can help reviewers improve the transparency of decisions made during the process of performing a systematic review. Such transparency serves the important function of enabling end users to assess a review's methodology and applicability [4].

During a review, evidence can be prioritized at several stages, such as the search strategy, the inclusion criteria, the outcomes analyzed, and which studies will be pooled in a meta-analysis. Our report was organized around three tasks: 1) create a list of possible inclusion criteria, and for each criterion, create a list of factors that might affect a reviewer's decision to use it; 2) create a list of evidence prioritization strategies; and 3) list the ways in which evidence prioritization strategies might be formally evaluated. This commentary focuses only on the second task, evidence prioritization strategies.


Evidence prioritization strategies

After the set of included studies for a key question is determined, a reviewer must decide which studies constitute the “best evidence” set. We define this as the set of studies that will be assessed and/or analyzed in an attempt to answer the key question. Answering the question may or may not involve meta-analysis.

Studies not considered as part of the “best evidence” set, but still included, would be tabled but not used to inform conclusions. Some reviewers may choose to use all included studies …
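
To make the four strategies listed in the Results concrete, the following minimal sketch encodes one possible reading of them in Python. It is illustrative only and is not taken from the report: the Study fields, the risk-of-bias scale, the sample-size threshold in permits_conclusion, and the toy strength-of-evidence rule are all assumptions introduced here for this example.

from dataclasses import dataclass
from typing import List

@dataclass
class Study:
    name: str
    risk_of_bias: int     # assumed scale: 1 = low, 2 = moderate, 3 = high
    n_participants: int

def best_set(studies: List[Study]) -> List[Study]:
    """Strategy 2: keep only the studies at the lowest risk-of-bias level present."""
    best_level = min(s.risk_of_bias for s in studies)
    return [s for s in studies if s.risk_of_bias == best_level]

def single_best(studies: List[Study]) -> List[Study]:
    """Strategy 1: use only the single best study (here: lowest risk of bias, then largest)."""
    return [max(best_set(studies), key=lambda s: s.n_participants)]

def permits_conclusion(selected: List[Study], min_total_n: int = 500) -> bool:
    """Strategy 3 adds a check of whether the selected evidence permits a conclusion,
    illustrated crudely here as a minimum pooled sample size."""
    return sum(s.n_participants for s in selected) >= min_total_n

def strength_of_evidence(selected: List[Study]) -> str:
    """Strategy 4 additionally grades the overall strength of evidence
    (a toy rule standing in for a formal system such as GRADE)."""
    if len(selected) >= 3 and all(s.risk_of_bias == 1 for s in selected):
        return "high"
    if len(selected) >= 2:
        return "moderate"
    return "low"

if __name__ == "__main__":
    studies = [
        Study("A", risk_of_bias=1, n_participants=400),
        Study("B", risk_of_bias=1, n_participants=250),
        Study("C", risk_of_bias=3, n_participants=900),
    ]
    selected = best_set(studies)                                          # strategy 2
    print("Single best study:", [s.name for s in single_best(studies)])  # strategy 1
    print("Best set:", [s.name for s in selected])
    print("Permits a conclusion:", permits_conclusion(selected))          # strategy 3
    print("Strength of evidence:", strength_of_evidence(selected))        # strategy 4

In this toy example, strategy 1 would select study A alone, strategy 2 the set {A, B}, strategy 3 would additionally judge that the selected set permits a conclusion, and strategy 4 would grade it as moderate-strength evidence. As noted in the Results, simpler strategies discard more data and are therefore more likely to yield an "insufficient evidence" verdict.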

Conclusions

Systematic reviewers routinely prioritize evidence in numerous ways. Our goal was to provide a framework for understanding the possibilities, considering influential factors, and choosing among the myriad of options. These decisions should be explicitly described in methods sections of systematic reviews. This will help enhance the transparency of review processes, which in turn may help users determine how different reviews of the same topic can reach different conclusions.
