How important is publication bias? A synthesis of available data

AIDS Educ Prev. 1997 Feb;9(1 Suppl):15-21.

Abstract

It has long been recognized that investigators frequently fail to report their research findings (Dickersin, 1990). Chalmers (1990) has suggested that this failure represents scientific misconduct, since volunteers who consent to participate in research, and agencies that provide funding for investigations, do so with the understanding that the work will make a contribution to knowledge. Clearly, knowledge that is not disseminated makes no "contribution." This failure to publish is not only inappropriate scientific conduct; it also distorts the information available for interpretation by the scientific community. If research were left unpublished at random, less information would be available, but what remained would be unbiased. We now have solid evidence that failure to publish is not a random event; rather, publication is dramatically influenced by the direction and strength of research findings (Dickersin et al., 1987, 1992; Dickersin & Min, 1993; Easterbrook et al., 1991; Simes, 1986). This tendency of investigators, reviewers, and editors to submit or accept manuscripts for publication based on the direction or strength of the study findings is termed "publication bias." The problem has been under discussion for many years and has recently been studied directly in medicine and public health. This article will review the major evidence available regarding publication bias and will suggest measures for overcoming the problem.
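The distinction between random and result-dependent non-publication can be made concrete with a small simulation. The Python sketch below is purely illustrative and not drawn from the article: the trial sizes, the 10% publication rate for non-significant results, and the rule that only positive, statistically significant results are reliably published are all assumptions chosen to make the distortion visible.

```python
import math
import random

random.seed(0)

TRUE_EFFECT = 0.0   # assumed true mean difference: the treatment does nothing
N_STUDIES = 2000    # number of hypothetical two-arm trials
N_PER_ARM = 50      # participants per arm in each trial
SD = 1.0            # assumed outcome standard deviation in each arm

def simulate_trial():
    """Simulate one trial; return its effect estimate and two-sided p-value."""
    se = SD * math.sqrt(2.0 / N_PER_ARM)   # standard error of the mean difference
    est = random.gauss(TRUE_EFFECT, se)    # estimate = truth + sampling error
    z = abs(est) / se
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))  # normal approx.
    return est, p

trials = [simulate_trial() for _ in range(N_STUDIES)]

# Scenario 1: non-publication is random -- half the trials vanish by chance.
random_literature = random.sample(trials, N_STUDIES // 2)

# Scenario 2: non-publication depends on the results -- positive, statistically
# significant trials are always published; all others only 10% of the time
# (an assumed selection rule, standing in for the bias the abstract describes).
selective_literature = [
    (est, p) for est, p in trials
    if (p < 0.05 and est > 0) or random.random() < 0.10
]

def pooled_estimate(literature):
    """Unweighted mean effect across published trials (all have equal n here)."""
    return sum(est for est, _ in literature) / len(literature)

print(f"true effect:               {TRUE_EFFECT:+.3f}")
print(f"random non-publication:    {pooled_estimate(random_literature):+.3f}")
print(f"selective non-publication: {pooled_estimate(selective_literature):+.3f}")
```

Under these assumptions, the literature censored at random pools to an estimate near the true value of zero, while the literature censored by direction and significance pools to a spuriously positive effect. This is precisely the distortion that any meta-analysis of published results inherits.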

Publication types

  • Review

MeSH terms

  • Clinical Trials as Topic
  • Confidence Intervals
  • Humans
  • Meta-Analysis as Topic
  • Odds Ratio
  • Publication Bias*
  • Treatment Failure