10 Search

A comprehensive search is at the core of every systematic review and is essential to ensure that all relevant trials have been included. A search that is not sufficiently thorough is more vulnerable to publication bias. In the case of publication bias, “an ounce of prevention is worth a pound of cure”, since the tools available to identify and adjust for publication bias are insensitive and cannot discriminate between publication bias and alternative causes of small-study effects. Other considerations (such as when the search was last conducted) are also necessary to ensure the search is sufficiently complete.
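To make this limitation concrete, below is a minimal sketch of one widely used detection tool, Egger’s regression test for funnel-plot asymmetry (the paragraph above does not name specific tools; Egger’s test is chosen here purely as an illustration). The eight trial results are simulated assumptions, not data from any real review.

```python
# Minimal sketch of Egger's regression test for small-study effects.
# The trial data below are simulated assumptions for illustration only.
import numpy as np
import statsmodels.api as sm

# Hypothetical log odds ratios and standard errors from 8 trials;
# the smaller trials (larger SE) happen to show larger benefits.
log_or = np.array([-0.90, -0.70, -0.55, -0.45, -0.30, -0.20, -0.12, -0.05])
se     = np.array([ 0.45,  0.40,  0.35,  0.30,  0.22,  0.16,  0.12,  0.10])

# Egger's test: regress the standardized effect (effect / SE) on
# precision (1 / SE). An intercept far from zero signals funnel-plot
# asymmetry, i.e. small-study effects.
snd = log_or / se
X = sm.add_constant(1.0 / se)
fit = sm.OLS(snd, X).fit()

print(f"Egger intercept: {fit.params[0]:.2f} (p = {fit.pvalues[0]:.3f})")
```

Note that a significant intercept only flags small-study effects: the test cannot say whether publication bias, genuine differences in smaller trials, or chance is responsible, and with the handful of trials typical of many meta-analyses it is badly underpowered. That is the sense in which these tools are “insensitive”.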

Checklist Questions

Were a reasonable number of relevant databases searched?
When was the search conducted? Is it likely there have been subsequent publications that may alter the results?
Was a sufficient effort made to find unpublished studies (or unreported results of published studies)?
Were sources of additional published/unpublished data sought out?

Databases of published literature: Were a reasonable number of relevant databases searched?

It is important to search multiple databases to maximize the identification of all relevant studies, as no single database includes all studies. One study (Royle P et al.) compared three major databases to a set of relevant studies established by searching twenty-six additional databases:

Table 11. Proportion of relevant trials identified by different databases.
Database               Proportion of relevant trials identified
MEDLINE                69%
EMBASE                 65%
CENTRAL                79%
Combining all three    97%

The optimal selection of which (and how many) databases to search depends on the discipline, topic area, and type of intervention. For example, searches for studies evaluating nursing and physiotherapy interventions should, at a minimum, include CINAHL and PEDro, respectively. A good rule of thumb is to search MEDLINE plus at least one to two other topic-specific databases (e.g. EMBASE and CENTRAL for pharmacotherapy studies).

Timeframe: When was the search conducted? Is it likely there have been subsequent publications that may alter the results?

There are no strict rules on how long is too long before a review becomes outdated, as this largely depends on the rate of evidence generation in a given field or topic area. It is important to consider the rate at which new publications are being added to the literature (i.e. whether it is a “hot” topic) and whether the results would likely be sensitive to new publications (i.e. a meta-analysis with low-to-moderate certainty). If there are already several large, high-quality trials showing consistent results, it is less likely that new literature would substantially change the results.

“Hot” topic: A living meta-analysis (i.e. one that is actively updated with new evidence) (Siemieniuk RA et al.) of drug treatments for COVID-19 illustrates rapidly changing evidence. The first version, published in July 2020, included 32 RCTs evaluating 17 therapies. The fourth version, published in March 2021, included 196 trials evaluating 27 therapies. Under such circumstances, meta-analyses quickly become outdated.
“Cold” topic: The evidence surrounding the cardiovascular risk of rosiglitazone has changed minimally in over a decade. Meta-analyses from 2007 and 2010 (Nissen SE et al. 2007, 2010) demonstrated an increased risk of myocardial infarction with rosiglitazone. Both reviews had large patient sample sizes (27,847 and 35,531, respectively), a factor that weighs in favor of their persisting relevance. As such, the evidence on this topic has remained largely unchanged since those reviews.

Grey literature: Was a sufficient effort made to find unpublished studies (or unreported results of published studies)?

A thorough search of unpublished literature aims to minimize the effects of publication bias.

E.g. In a meta-analysis (Siu JT et al.) of N-acetylcysteine for non-acetaminophen-related acute liver failure, the authors searched all of the following:

  • The following databases (without language restrictions): Cochrane Hepato‐Biliary Group Controlled Trials Register, the Cochrane Central Register of Controlled Trials, MEDLINE Ovid, Embase Ovid, LILACS, Science Citation Index Expanded, and Conference Proceedings Citation Index – Science
  • The reference lists of all included studies and relevant papers
  • The following online clinical trial registries and regulatory sources: ClinicalTrials.gov, the European Medicines Agency, the World Health Organization International Clinical Trials Registry Platform, the Food and Drug Administration, and pharmaceutical company sources for ongoing or unpublished trials

The authors of relevant papers were also contacted to inquire about any further published or unpublished work.

Why is publication bias so concerning?

Studies with statistically significant results (“positive” studies) are twice as likely to be published, and are typically published faster (by a median of 1.3 years in one study), than trials with statistically non-significant results (“neutral” studies) (Hopewell S et al., Ioannidis JP).

  • Published trials report effect estimates approximately 15% larger than those of unpublished trials (McAuley L et al.)
  • Although non-publication is more common with industry-funded trials, government-funded studies are still prone to publication bias (32% vs. 18% of trials unpublished 5 years after completion, respectively) (Jones CW et al.)
  • In one study, the very large effects observed in early trials shrank substantially in 90-98% of meta-analyses once subsequent studies became available (e.g. the median odds ratio decreased from ~11 to ~4 after later trials were added to the first trial) (Pereira TV et al.)
  • In one study of 42 meta-analyses, adding unpublished FDA outcome data changed the efficacy summary estimate (increasing it in some cases, decreasing it in others) in 93% of cases compared with meta-analyses based purely on published outcome data (Hart B et al.)

Bottom line: Meta-analyses of only published trials will overestimate the effects of drugs and other interventions, especially when the meta-analysis is conducted early (before the neutral trials get published). Consequently, the risk of publication bias is likely greater in meta-analyses based on a few small studies.

E.g. A review (Turner EH et al.) of antidepressant trials found that 94% of published trials demonstrated a statistically significant difference in the primary outcome. When combined with unpublished FDA review data, however, only 51% of all trials did so. Including only published studies inflated the relative effect size by 32%.
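As a rough numerical illustration of how this inflation arises, the simulation sketch below pools simulated trials with an inverse-variance (fixed-effect) average, once using all trials and once using only the “positive” (statistically significant) ones. All numbers are simulated assumptions; this is not Turner et al.’s data or method.

```python
# Simulation sketch: how publishing only "positive" trials inflates a
# fixed-effect (inverse-variance) pooled estimate. All numbers are
# simulated assumptions, not data from Turner EH et al.
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.25                       # true standardized mean difference
n_trials = 200
se = rng.uniform(0.1, 0.4, n_trials)     # trial standard errors
est = rng.normal(true_effect, se)        # observed trial effects

def pooled(effects, ses):
    """Inverse-variance weighted (fixed-effect) pooled estimate."""
    w = 1.0 / ses**2
    return np.sum(w * effects) / np.sum(w)

# "Published" trials: only those statistically significant at p < 0.05
# (z > 1.96), mimicking publication bias against neutral results.
published = (est / se) > 1.96

print(f"True effect:                  {true_effect:.2f}")
print(f"Pooled, all trials:           {pooled(est, se):.2f}")
print(f"Pooled, published trials only: {pooled(est[published], se[published]):.2f}")
```

Under these assumptions, the published-only pooled estimate lands well above the true effect, for the same reason Turner et al. observed a 32% inflation: the neutral trials that would pull the weighted average back down are missing.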


Gif 1. Publication bias among antidepressant trials as reported by Turner EH et al. GIF created by Turner EH.

