10 Search
A comprehensive search is at the core of every systematic review and is essential to ensure that all relevant trials are included. A search that is not sufficiently thorough is more vulnerable to publication bias. In the case of publication bias, “an ounce of prevention is worth a pound of cure,” since the tools available to identify and adjust for publication bias are insensitive and cannot discriminate between publication bias and alternative causes of small-study effects. Other considerations (such as when the search was last completed) are also necessary to ensure the search is sufficiently complete.
Checklist Questions
- Were a reasonable number of relevant databases searched?
- When was the search conducted? Is it likely there have been subsequent publications that may alter the results?
- Was a sufficient effort made to find unpublished studies (or unreported results of published studies)?
- Were sources of additional published/unpublished data sought out?
Databases of published literature: Were a reasonable number of relevant databases searched?
It is important to search multiple databases to maximize the identification of all relevant studies, as no single database includes all studies. One study (Royle P et al.) compared three major databases to a set of relevant studies established by searching twenty-six additional databases:
| Database | Proportion of relevant trials identified |
| --- | --- |
| MEDLINE | 69% |
| EMBASE | 65% |
| CENTRAL | 79% |
| Combining all three | 97% |
The optimal selection of which (and how many) databases to search will depend on the discipline, topic area, and type of intervention. For example, studies evaluating nursing and physiotherapy interventions should at minimum include CINAHL and PEDro, respectively. A good rule-of-thumb is to search MEDLINE and at least 1-2 other topic-specific databases (e.g. EMBASE and CENTRAL for pharmacotherapy studies).
Timeframe: When was the search conducted? Is it likely there have been subsequent publications that may alter the results?
There are no strict rules as to how long is too long before a review becomes outdated, as this largely depends on the rate of evidence generation in a given field or topic area. It is important to consider the rate at which new publications are being added to the literature (i.e. whether it is a “hot” topic) and whether the results would likely be sensitive to new publications (e.g. a meta-analysis with low-to-moderate certainty). If there are already several large, high-quality trials showing consistent results, it is less likely that any new literature would substantially change the results.
Grey literature: Was a sufficient effort made to find unpublished studies (or unreported results of published studies)?
A thorough search of unpublished literature aims to minimize the effects of publication bias.
E.g. in a meta-analysis (Siu JT et al.) of N-acetylcysteine for non-acetaminophen-related acute liver failure, the authors searched all of:
- The following databases (without language restrictions): Cochrane Hepato‐Biliary Group Controlled Trials Register, the Cochrane Central Register of Controlled Trials, MEDLINE Ovid, Embase Ovid, LILACS, Science Citation Index Expanded, and Conference Proceedings Citation Index – Science
- The reference lists of all included studies and relevant papers
- The following online clinical trial registries: ClinicalTrials.gov, European Medicines Agency, World Health Organization International Clinical Trial Registry Platform, the Food and Drug Administration, and pharmaceutical company sources for ongoing or unpublished trials
The authors of relevant papers were also contacted to inquire regarding any further published or unpublished work.
Why is publication bias so concerning?
Studies with statistically significant results (“positive” studies) are twice as likely to get published, and will typically get published faster (by a median of 1.3 years in one study) compared to trials with statistically non-significant results (“neutral” studies) (Hopewell S et al., Ioannidis JP).
- Published trials have a 15% larger estimate of effect compared to unpublished trials (McAuley L et al.)
- Although publication bias is more common with industry-funded trials, government-funded studies are still prone to it (32% vs. 18% unpublished 5 years after completion) (Jones CW et al.)
- In one study, in 90-98% of meta-analyses the very large effects observed in early trials became substantially smaller once subsequent studies became available (e.g. the median odds ratio decreased from ~11 to ~4 as more trials were added) (Pereira TV et al.)
- In one study of 42 meta-analyses, in 93% of cases the addition of unpublished FDA outcome data changed the efficacy summary estimate (either increased or decreased) compared to the meta-analysis based purely on published outcome data (Hart B et al.)
Bottom line: Meta-analyses of only published trials will overestimate the effects of drugs and other interventions, especially when meta-analyses are conducted “earlier on” (before the neutral trials get published). Consequently, there is likely a greater risk of publication bias in meta-analyses based on a few small studies.
E.g. A review (Turner EH et al.) of antidepressants found that 94% of published trials demonstrated a statistically significant difference with respect to the primary outcome. However, when combined with unpublished FDA review data, only 51% of total trials demonstrated a statistically significant difference with respect to the primary outcome. Including only published studies increased the relative effect size by 32%.
Gif 1. Publication bias among antidepressant trials as reported by Turner EH et al. GIF created by Turner EH.
Systematic review: A review that systematically identifies all potentially relevant studies on a research question. The aggregate of studies is then evaluated with respect to factors such as risk of bias of individual studies or heterogeneity among results. The qualitative synthesis of these results constitutes the systematic review; the quantitative combination is a meta-analysis.
Publication bias: A systematic tendency for results to be published based upon the direction or statistical significance of the results. This biases aggregated evidence when methods are more likely to include published literature than unpublished literature.
Small-study effects: A tendency for smaller published studies to demonstrate a larger effect size than larger published studies. One possible cause is publication bias. However, other possible causes include systematic differences between smaller and larger studies (e.g. stricter enrolment criteria, adherence, and/or follow-up in smaller studies; more pragmatic design in larger studies).
Meta-analysis: A quantitative combination of the data obtained in a systematic review.
Randomized controlled trial: A trial in which participants are randomly allocated to two or more groups, which are given different treatments.
Odds ratio: The ratio of the odds (events divided by non-events) in the intervention group to the odds in the comparator group. For example, if the odds of an event in the treatment group are 0.2 and the odds in the comparator group are 0.1, then the OR is 2 (0.2/0.1). See here for a more detailed discussion.
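To make the arithmetic concrete, here is a minimal Python sketch; the 2×2 counts are hypothetical, chosen so the odds match the example above:

```python
# Hypothetical 2x2 counts (not from any real trial), chosen so that
# the odds match the worked example: 0.2 vs. 0.1.
events_treatment, nonevents_treatment = 20, 100    # odds = 20/100 = 0.2
events_comparator, nonevents_comparator = 10, 100  # odds = 10/100 = 0.1

# Odds = events divided by non-events within each group.
odds_treatment = events_treatment / nonevents_treatment
odds_comparator = events_comparator / nonevents_comparator

# Odds ratio = odds in the intervention group / odds in the comparator group.
odds_ratio = odds_treatment / odds_comparator
print(odds_ratio)  # 2.0
```

Note that the odds ratio is computed from non-events in the denominator of each odds, not from group totals; using totals instead would give a risk ratio, a distinct measure.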
Primary outcome: An outcome on which trial design choices (e.g. sample size calculations) are based. Primary outcomes are not necessarily the most important outcomes.
Relative measure of effect: Expresses the effect of an intervention via a fractional comparison with the comparator group (i.e. intervention group measure ÷ comparator group measure). Used for binary outcomes. Relative risk, odds ratio, and hazard ratio are all expressions of relative effect. For example, if the risk of developing neuropathy was 1% in the treatment group and 2% in the comparator group, then the relative risk is 0.5 (1% ÷ 2%). See the Absolute Risk Differences and Relative Measures of Effect discussion here for more information.
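The neuropathy example can be sketched in Python, contrasting the relative measure with the absolute risk difference computed from the same risks:

```python
# Risks from the neuropathy example above.
risk_treatment = 0.01   # 1% developed neuropathy in the treatment group
risk_comparator = 0.02  # 2% developed neuropathy in the comparator group

# Relative measure: intervention group measure / comparator group measure.
relative_risk = risk_treatment / risk_comparator

# Absolute measure: the difference in risks (here, 1 percentage point).
absolute_risk_difference = risk_comparator - risk_treatment

print(relative_risk)             # 0.5
print(absolute_risk_difference)  # 0.01
```

The same data can look impressive relatively (risk halved) while the absolute benefit is 1 in 100, which is why reporting both views guards against over-interpreting large relative effects on rare outcomes.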