
Supplement A to Chapter 30: Publication Bias in Systematic Reviews

Publication bias falls within a broad category called dissemination bias. In their extensive review of the literature, Song and colleagues (2010) defined dissemination bias as follows: “Dissemination bias occurs when the dissemination profile of a study’s results depends on the direction or strength of its findings. The dissemination profile is defined as the accessibility of research results or the possibility of research findings being identified by potential users. The spectrum of the dissemination profile ranges from completely inaccessible to easily accessible, according to whether, when, where and how research is published” (p. 3).

In addition to publication bias, Song and colleagues (2010) identified other types of dissemination bias, such as the following:

• Outcome reporting bias (incomplete reporting of outcomes that were measured in a trial, favoring the reporting of outcomes with significant results)

• Time lag bias (occurs when the speed of publication depends on the direction and strength of the study results)

• Language bias (occurs when the language of publication depends on the direction and strength of the study results)

• Citation bias (occurs when the probability that a study will be cited is associated with the study results)

In this Supplement, we focus on publication bias—the type of dissemination bias that has received the most attention; it is also an element in the widely used GRADE system of evaluating an overall body of evidence, as described in the textbook. Publication bias has been a concern going back decades and is defined as “the tendency on the parts of investigators, reviewers, and editors to submit or accept manuscripts for publication based on the direction or strengths of the findings” (Dickersin, 1990, p. 1385). More recent studies have continued to find strong evidence of publication bias (e.g., Bassler et al., 2016; Dwan et al., 2008; Dwan et al., 2013).

Because of publication bias, studies in which the null hypothesis was retained (i.e., the results were not statistically significant) are disproportionately likely to be unpublished, and this bias could lead to an overestimation of effects if only published reports are included in a meta-analysis. Also, as we noted in Chapter 29, many pilot studies are never published, and results from pilots may therefore be absent from systematic reviews. Some researchers, therefore, use strategies to assess publication bias, to avoid it, and to adjust for it.

 


TIP It has been suggested that if a pilot study has led to a full-scale study, the pilot study results should not be included in a meta-analysis. If, however, the pilot trial has not been followed by a larger study, it is legitimate to include the results in the review.

Song et al. (2010) advised that the first step for preventing publication bias is heightened awareness of its detrimental consequences. Similarly, Meerpohl and colleagues (2015), who developed evidence-informed recommendations to reduce dissemination bias in clinical research, strongly recommended raising awareness about dissemination bias and efforts to reduce it.

Strategies to address publication bias in systematic reviews fall into three broad categories: efforts to prevent such bias by thorough searching for relevant unpublished studies, detecting publication bias during the review, and minimizing the impact of the bias using statistical strategies.

REDUCING THE RISK OF PUBLICATION BIAS

If searches for relevant studies are restricted to electronic bibliographic databases, the risk of publication bias is high. For example, Greenhalgh and Peacock (2005) found that only 30% of the studies eligible for their systematic review on a complex topic were obtained from database and hand searches, whereas the majority of studies were identified through snowballing and drawing on personal knowledge. There is also evidence of indexing errors in electronic databases that contribute to biases if such databases are the only method of searching (Song et al., 2010). Searches should draw on many different strategies.

Reviewers can normally pursue several avenues for tracking down studies that have not been published—or that have been published in outlets other than traditional journals. One important step is a search of trial and research registries to identify studies underway. Communicating with key researchers in a field is another avenue to discovering unpublished work. Searching for abstracts in conference proceedings is also recommended, which usually involves searching the websites of relevant organizations and societies. Another source, especially if the review concerns a pharmaceutical or other health product, is regulatory or company reports, such as those issued by the US Food and Drug Administration (FDA). Dissertations should also be searched.

Several resources can be tapped to locate grey literature. These include the following:

• OpenGrey: A large database of European grey literature (http://www.opengrey.eu/).

• New York Academy of Medicine’s Grey Literature Report: A collection of new grey literature resources (http://www.greylit.org/home). (NYAM’s website also has a list of international grey literature producers.)

• TRIP: Turning Research Into Practice (enter “grey literature” in the search field).

• GreyNet International: Forums, a journal, and other resources on grey literature (http://www.greynet.org/thegreyjournal.html).

There are also special resources devoted specifically to negative results, including the following:

• Journal of Negative Results in BioMedicine: An open-access, peer-reviewed, online journal (ceased publication in 2017)

• PLOS: The Missing Pieces: A Collection of Negative, Null and Inconclusive Results: A collection devoted to the publication of negative, null, and inconclusive results

• Journal of Pharmaceutical Negative Results: A peer-reviewed journal

DETECTION OF PUBLICATION BIAS

The next strategy concerns efforts to detect publication bias in an ongoing systematic review. A good place to begin is to do a preliminary inspection of effect size estimates in a forest plot, simply to get a sense of the data. A useful practice is to organize the list of studies in the plot from most precise (i.e., the ones with the largest sample sizes) to least precise and to look for trends—which are likely to be more discernible when there are many studies than when there are few. Such an arrangement may allow you to get a visual sense of whether the point estimates shift from right to left (or vice versa) toward the bottom of the plot, where the smaller studies are plotted. Several other approaches have been developed specifically to explore possible publication biases.
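Before turning to those approaches, here is a minimal sketch of the ordering strategy just described. It is not from the textbook; the column names and values are hypothetical, and the study data are assumed to be held in a pandas DataFrame.

```python
# Minimal sketch: order studies from most to least precise before inspecting
# a forest plot. Column names and values are hypothetical.
import pandas as pd

studies = pd.DataFrame({
    "study": ["Study A", "Study B", "Study C", "Study D", "Study E"],
    "d":     [0.32, 0.15, 0.45, 0.05, 0.28],    # effect size estimates
    "se":    [0.05, 0.08, 0.22, 0.25, 0.10],    # smaller SE = more precise
})

# Most precise studies (smallest standard errors, largest samples) listed first,
# so any drift in the point estimates toward the bottom is easier to spot.
ordered = studies.sort_values("se").reset_index(drop=True)
print(ordered)
```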

Funnel Plots

A widely used practice for examining the possibility of publication bias among studies in a meta-analysis is to construct a funnel plot. In a funnel plot, effect size estimates from all individual studies in the sample (both published and unpublished) are plotted on the horizontal axis, and the precision of each effect size estimate (e.g., the standard error or its inverse) is plotted on the vertical axis.
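A funnel plot of this kind can be drawn directly from the study-level effect sizes and standard errors. The following is a minimal sketch (not from the textbook) using matplotlib; the data values are hypothetical, and plotting the standard error on an inverted axis is one of several conventions for representing precision.

```python
# Minimal sketch of a funnel plot: effect sizes (d) on the horizontal axis and
# standard error (inverted, so more precise studies sit at the top) on the vertical axis.
import numpy as np
import matplotlib.pyplot as plt

d  = np.array([0.25, 0.18, 0.35, 0.10, 0.42, 0.05, 0.30, 0.22])   # hypothetical effects
se = np.array([0.05, 0.06, 0.12, 0.08, 0.20, 0.15, 0.10, 0.18])   # hypothetical SEs

# Fixed-effect (inverse-variance weighted) pooled estimate for the reference line
w = 1 / se**2
pooled = np.sum(w * d) / np.sum(w)

fig, ax = plt.subplots()
ax.scatter(d, se)
ax.axvline(pooled, linestyle="--", label=f"Pooled d = {pooled:.2f}")
ax.invert_yaxis()                         # smallest SE (most precise) at the top
ax.set_xlabel("Magnitude of effect (d)")
ax.set_ylabel("Standard error of effect estimate")
ax.legend()
plt.show()
```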

Figure 1 illustrates two hypothetical funnel plots for a meta-analysis of a nursing intervention in which the pooled effect size estimate (d) for 20 studies (10 published and 10 unpublished) is 0.20. In both graphs, the large, published studies near the top tend to cluster on either side of the mean effect size. In the funnel plot on the left (A), the effects are fairly symmetric around the pooled effect size for both published (larger studies) and unpublished studies (smaller ones). In the asymmetric plot on the right (B), however, unpublished studies with lower precision appear to have consistently lower effect size estimates. The graph suggests the possibility that if additional unpublished studies were located and added to the analysis, the overall effect size estimate might be lower. Note that in this figure, even if published and unpublished studies had not been plotted with different symbols, the funnel plot would still suggest a possible bias based on variation in precision between the studies near the top and near the bottom. Sterne and Egger (2001) suggest that it might be helpful to superimpose on the actual funnel plot a graph of what the expected distribution of studies would look like in the absence of bias.

FIGURE 1 Two funnel plots suggesting no publication bias on the left (A) or possible publication bias on the right (B). In both panels, the horizontal axis is the magnitude of effect (d) and the vertical axis is the precision of the effect estimate; separate symbols distinguish effects from studies published in peer-reviewed journals, effects from studies not published in peer-reviewed journals, and the pooled estimate of effect.

 

 



TIP Bax and colleagues (2009) have written about the power of graphs in meta-analysis and specifically discussed publication bias. Funnel plots can be created within many of the meta-analytic software packages.

Example of a Funnel Plot Used to Assess Publication Bias
Ruppar and colleagues (2017) conducted a meta-analysis to assess the effectiveness of medication adherence interventions among hypertensive black adults. Their analysis included 37 studies with a majority of black or African-American participants. The reviewers noted that “We strived to minimize publication bias through comprehensive search strategies of both the indexed and grey literature” (p. 1146). Publication bias was assessed, in part, using a funnel plot in which effect size was plotted on the horizontal axis and standard error was plotted on the vertical axis (see Figure 2). The reviewers concluded that there was some evidence of publication bias.

FIGURE 2 Funnel plot from the meta-analysis by Ruppar et al. (2017), published in an online supplement; standard error is plotted on the vertical axis and effect size (d) on the horizontal axis. (Reprinted with permission from Ruppar, T., Dunbar-Jacob, J., Mehr, D., Lewis, L., & Conn, V. (2017). Medication adherence interventions among hypertensive black adults. Journal of Hypertension, 35, 1145–1153.)

Statistical Tests to Assess Publication Bias

In most cases, the funnel plots from actual meta-analyses are not as clear-cut as those in Figure 1(B)—and, in any event, the interpretation of the distribution of studies in a funnel plot is subjective. Several statistical tests have been developed to quantify the degree to which bias is reflected in the funnel plot. In fact, in a systematic review of methodologic articles offering suggestions for detecting dissemination bias, Mueller and colleagues (2016) identified dozens of proposed approaches.

One such test is the Begg and Mazumdar (1994) rank-order correlation (Kendall tau) between the standardized effect sizes and the standard errors of these effects. If the distribution in the funnel plot is asymmetric, higher standard errors would most typically be associated with larger effect sizes—i.e., the value of tau would be positive. This test tends to have low power, however, and so unless the bias is extremely severe, it will often be nonsignificant (Sterne et al., 2001). Thus, a nonsignificant result does not necessarily mean that there is no publication bias.
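A simplified sketch of this rank-correlation idea is shown below; it is not from the textbook, the data are hypothetical, and the standardization of each study's deviation from the fixed-effect pooled estimate is one common way of operationalizing the test.

```python
# Simplified sketch of the Begg and Mazumdar rank-correlation test (hypothetical data).
import numpy as np
from scipy.stats import kendalltau

d  = np.array([0.25, 0.18, 0.35, 0.10, 0.42, 0.05, 0.30, 0.22])
se = np.array([0.05, 0.06, 0.12, 0.08, 0.20, 0.15, 0.10, 0.18])
var = se**2

# Fixed-effect pooled estimate and its variance
w = 1 / var
pooled = np.sum(w * d) / np.sum(w)
pooled_var = 1 / np.sum(w)

# Standardized deviations from the pooled effect, correlated with the study variances;
# a positive, significant tau suggests funnel plot asymmetry.
standardized = (d - pooled) / np.sqrt(var - pooled_var)
tau, p = kendalltau(standardized, var)
print(f"Kendall tau = {tau:.3f}, p = {p:.3f}")
```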


 


A widely used approach is Egger’s linear regression method (Egger et al., 1997). In this test, the actual values of the effect sizes and their precision are used in a regression, rather than their ranks. This test has been found to have greater power than the Kendall tau rank-order method, but power is still low if the sample of studies in the analysis is small or if the extent of bias is not severe. Thus, caution is needed in interpreting the results.
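A minimal sketch of the Egger test follows, using hypothetical data and the ordinary least squares routine in statsmodels; the intercept of the regression (standard normal deviate regressed on precision) is the quantity of interest.

```python
# Minimal sketch of Egger's regression asymmetry test (hypothetical data).
import numpy as np
import statsmodels.api as sm

d  = np.array([0.25, 0.18, 0.35, 0.10, 0.42, 0.05, 0.30, 0.22])
se = np.array([0.05, 0.06, 0.12, 0.08, 0.20, 0.15, 0.10, 0.18])

z = d / se            # standard normal deviate for each study
precision = 1 / se    # predictor in the regression

model = sm.OLS(z, sm.add_constant(precision)).fit()
# An intercept that departs significantly from zero suggests funnel plot asymmetry.
print(f"Egger intercept = {model.params[0]:.3f}, p = {model.pvalues[0]:.3f}")
```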

Example of Statistical Tests to Assess Publication Bias
Feng and colleagues (2014) did a meta-analysis of the association between breastfeeding and ovarian cancer, using results reported in 19 primary studies. The researchers examined publication bias and applied both statistical tests described here. Both tests were nonsignificant (Begg test, p = .89 and Egger test, p = .89).

Sensitivity Tests to Assess Publication Bias

In discussing publication bias in connection with GRADE ratings, Guyatt and colleagues (2011) noted that the most compelling evidence for publication bias lies in the reviewers’ success in obtaining results from some unpublished studies and then “demonstrating that the published and unpublished data show different results” (p. 1280). In other words, they recommended a sensitivity analysis to examine whether the results differ for published and unpublished findings. In the previously mentioned study by Ruppar and colleagues (2017) regarding medication adherence for hypertensive black adults, the analysis compared effect sizes for findings from published studies versus those from theses and dissertations. Substantial effect size differences were found: d = .36 for journal articles and d = .05 for dissertations, which was consistent with their conclusion of mild publication bias as observed in the funnel plot (Figure 2).
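A minimal sketch of such a sensitivity analysis is shown below; the numbers are hypothetical (not those of Ruppar et al.), and a fixed-effect (inverse-variance) pooled estimate is computed separately for published and unpublished studies.

```python
# Minimal sketch: compare pooled effects for published vs. unpublished studies.
import numpy as np

def pooled_effect(d, se):
    """Fixed-effect (inverse-variance weighted) pooled effect size."""
    d, se = np.asarray(d, float), np.asarray(se, float)
    w = 1 / se**2
    return np.sum(w * d) / np.sum(w)

published_d,   published_se   = [0.40, 0.32, 0.36], [0.06, 0.08, 0.07]   # hypothetical
unpublished_d, unpublished_se = [0.08, 0.02, 0.10], [0.15, 0.20, 0.18]   # hypothetical

print("Published studies:   pooled d =", round(pooled_effect(published_d, published_se), 2))
print("Unpublished studies: pooled d =", round(pooled_effect(unpublished_d, unpublished_se), 2))
```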

ADDRESSING PUBLICATION BIAS

Numerous strategies for addressing publication bias have been proposed in the meta-analytic literature (Mueller et al., 2016), although none has been found to be totally satisfactory. One method, proposed many years ago by Rosenthal (1979), is to compute a fail-safe number that estimates the number of studies reporting nonsignificant results that would be needed to reverse the conclusion of a significant effect in a meta-analysis. Rosenthal called this a “file drawer” analysis, referring to the presumed location of studies that were missing from the meta-analysis. The algorithm for the fail-safe N involves computing a combined p value for all studies and then estimating how many “missing” studies with an average effect of zero would be required to yield an overall nonsignificant effect size. The Cochrane Handbook (Higgins & Green, 2011) noted several problems with this approach—for example, the questionable assumption that the average effect in the missing studies is 0.0—and does not recommend its use. Nevertheless, fail-safe information is sometimes reported in meta-analyses.
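A minimal sketch of Rosenthal's calculation is shown below with hypothetical values; it assumes each study contributes a one-tailed z value and uses the conventional one-tailed criterion of 1.645 (alpha = .05).

```python
# Minimal sketch of Rosenthal's fail-safe N (hypothetical z values).
import numpy as np

z = np.array([2.1, 1.8, 2.5, 1.4, 2.9, 1.1])   # one-tailed z for each study
k = len(z)
z_alpha = 1.645                                 # one-tailed critical value, alpha = .05

# With X added zero-effect studies, the Stouffer combined z is sum(z) / sqrt(k + X).
# Solving for the X that drops the combined z to z_alpha gives the fail-safe N:
fail_safe_n = (np.sum(z) / z_alpha) ** 2 - k
print(f"Fail-safe N = {int(np.ceil(fail_safe_n))}")
```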

Orwin (1983) suggested an alternative to Rosenthal’s approach. Orwin’s fail-safe method allows the researcher to designate a value other than zero for the overall effect. For example, the researcher could identify how many missing studies would be needed to bring the overall effect size below a value at which the effect would no longer be considered clinically significant. In Orwin’s approach, researchers can also posit that the mean effect in the missing studies is something other than zero.
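A commonly presented form of Orwin's fail-safe formula, written here in our own notation rather than reproduced from the textbook, is

$$N_{fs} = \frac{k\,(\bar{d} - d_c)}{d_c - \bar{d}_{fs}}$$

where $k$ is the number of studies in the analysis, $\bar{d}$ is their mean effect size, $d_c$ is the criterion effect (e.g., the smallest effect still considered clinically significant), and $\bar{d}_{fs}$ is the assumed mean effect of the missing studies. Setting $\bar{d}_{fs}$ to zero yields the simpler case in which the missing studies are assumed to have no effect.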

Yet another approach, called the trim-and-fill method, was developed by Duval and Tweedie (2000), with the idea of imputing the missing studies. In this iterative procedure, the smallest studies are removed from the negative side of a funnel plot (assuming the bias is in the typical direction of nonsignificant effects in unpublished studies), and the adjusted effect size is computed for each “trim.” Because the removal of studies reduces variance, the algorithm then adds the original studies back and imputes a mirror-image effect for each. As with other approaches, several problems with trim-and-fill have been identified.
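The mirror-imputation step can be illustrated with a heavily simplified sketch, not from the textbook. In the sketch below, the number of suppressed studies (k0) is simply assumed; in the actual Duval and Tweedie procedure it is estimated iteratively from the ranked effects, a step omitted here.

```python
# Heavily simplified illustration of the "fill" step in trim-and-fill.
# k0 (number of missing studies) is assumed; Duval and Tweedie estimate it iteratively.
import numpy as np

d  = np.array([0.05, 0.10, 0.18, 0.22, 0.25, 0.30, 0.35, 0.42])   # hypothetical effects
se = np.array([0.25, 0.08, 0.06, 0.18, 0.05, 0.10, 0.12, 0.20])   # hypothetical SEs
k0 = 2                                                             # assumed, not estimated

def pooled(d, se):
    """Fixed-effect (inverse-variance weighted) pooled effect size."""
    w = 1 / se**2
    return np.sum(w * d) / np.sum(w)

center = pooled(d, se)

# Reflect the k0 most extreme effects on the over-represented side around the
# pooled estimate, then recompute the pooled effect with the imputed studies added.
idx = np.argsort(d)[-k0:]
d_adj  = np.concatenate([d, 2 * center - d[idx]])
se_adj = np.concatenate([se, se[idx]])

print(f"Unadjusted pooled d = {center:.2f}; adjusted pooled d = {pooled(d_adj, se_adj):.2f}")
```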

Mueller and colleagues (2016), in their systematic review of approaches, noted that it was difficult to advise which method should be used because all methods have limitations and their validity has not been assessed. They concluded that “a thorough literature search remains crucial in systematic reviews” (p. 25).

As this brief overview suggests, the topic of publication bias has been given considerable attention in recent years, and procedures for assessment and correction are often complex. Further guidance is available in the Cochrane Handbook (Higgins & Green, 2011) and in a book devoted to this topic by Rothstein, Sutton, and Borenstein (2005).

TIP Publication bias has mainly been a concern in systematic reviews of quantitative research, especially those involving a meta-analysis. However, the issue has gained some recent attention among those concerned with qualitative evidence synthesis as well (e.g., Toews et al., 2017). Dissemination bias also has been discussed in connection with the application of the GRADE-CERQual rating system for qualitative evidence syntheses (Booth et al., 2018).

REFERENCES CITED IN SUPPLEMENT A TO CHAPTER 30

*Bassler, D., Mueller, K., Briel, M., Kleijnen, J., Marusic, A., Wager, E., … Meerpohl, J. (2016). Bias in dissemination of clinical research findings: Structured OPEN framework of what, who and why, based on literature review and expert consensus. BMJ Open, 6, e010024.

*Bax, L., Ikeda, N., Fukui, N., Yaju, Y., Tsuruta, H., & Moons, K. (2009). More than numbers: The power of graphs in meta-analysis. American Journal of Epidemiology, 169, 249–255.

Begg, C. B., & Mazumdar, M. (1994). Operating characteristics of a rank correlation test for publication bias. Biometrics, 50, 1088–1101.

*Booth, A., Lewin, S., Glenton, C., Munthe-Kaas, H., Toews, I., Noyes, J., … Meerpohl, J. (2018). Applying GRADE- CERQual to qualitative evidence synthesis findings—Paper 7: Understanding the potential impacts of dissemination bias. Implementation Science, 13(Suppl. 1):12.

Dickersin, K. (1990). The existence of publication bias and risk factors for its occurrence. Journal of the American Medical Association, 263, 1385–1389.

Duval, S. J., & Tweedie, R. L. (2000). Trim and fill: A simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics, 56, 455–463.

*Dwan, K., Altman, D., Arnaiz, J., Bloom, J., Chan, A., Cronin, E., … Williamson, P. (2008). Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One, 3, e3081.

*Dwan, K., Gamble, C., Williamson, P., & Kirkham, J. (2013). Systematic review of the empirical evidence of study publi- cation bias and outcome reporting bias—an updated review. PLoS One, 8, e66844.

*Egger, M., Davey Smith, G., Schneider, M., & Minder, C. (1997). Bias in meta-analysis detected by a simple, graphical test. BMJ, 315, 629–634.

Feng, L. P., Chen, H. L., & Shen, M. Y. (2014). Breastfeeding and the risk of ovarian cancer: A meta-analysis. Journal of Midwifery & Women’s Health, 59, 428–437.

*Greenhalgh, T., & Peacock, R. (2005). Effectiveness and efficiency of search methods in systematic reviews of complex evidence: Audit of primary sources. BMJ, 331, 1064–1065.

Guyatt, G., Oxman, A., Montori, V., Vist, G., Kunz, R., Brozek, J., … Schunemann, H. (2011). GRADE guidelines: Rating the quality of evidence—publication bias. Journal of Clinical Epidemiology, 64, 1277–1282.

*Higgins, J. P. T., & Green, S. (Eds.). (2011). Cochrane handbook for systematic reviews of interventions version 5.1.0. The Cochrane Collaboration. Chapter 10. http://training.cochrane.org/handbook.

*Meerpohl, J., Schell, L., Bassler, D., Gallus, S., Kleijnen, J., Kulig, M., … Antes, G. (2015). Evidence-informed recommendations to reduce dissemination bias in clinical research: Conclusions from the OPEN (Overcome failure to Publish nEgative fiNdings) project based on an international consensus meeting. BMJ Open, 5, e006666.

Mueller, K., Meerpohl, J., Briel, M., Antes, G., von Elm, E., Lang, B., … Bassler, D. (2016). Methods for detecting, quantifying, and adjusting for dissemination bias in meta-analysis are described. Journal of Clinical Epidemiology, 80, 25–33.

Orwin, R. G. (1983). A fail-safe N for effect size in meta-analysis. Journal of Educational Statistics, 8, 157–159.

Rosenthal, R. (1979). The “file drawer” problem and tolerance for null results. Psychological Bulletin, 86, 638–641.

Rothstein, H. R., Sutton, A., & Borenstein, M. (2005). Publication bias in meta-analysis: Prevention, assessment and adjustments. Chichester, UK: Wiley-Blackwell.

Ruppar, T., Dunbar-Jacob, J., Mehr, D., Lewis, L., & Conn, V. (2017). Medication adherence interventions among hypertensive black adults. Journal of Hypertension, 35, 1145–1153.

*Song, F., Parekh, S., Hooper, L., Loke, Y., Ryder, J., Sutton, A., … Harvey, I. (2010). Dissemination and publication of research findings: An updated review of related biases. Health Technology Assessment, 14(8), 1–220.

Sterne, J. A., & Egger, M. (2001). Funnel plots for detecting bias in meta-analysis: Guidelines on choice of axis. Journal of Clinical Epidemiology, 54, 1046–1055.

 

 

 

 

 

 

 

 


*Sterne, J. A., Egger, M., & Smith, G. (2001). Systematic reviews in health care: Investigating and dealing with publication and other biases in meta-analysis. BMJ, 323, 101–105.

Toews, I., Booth, A., Berg, R., Lewin, S., Glenton, C., Munthe-Kaas, H., … Meerpohl, J. (2017). Further exploration of dissemination bias in qualitative research required to facilitate assessment within qualitative evidence syntheses. Journal of Clinical Epidemiology, 88, 133–139.

*A link to this open-access article is provided in the Toolkit of the Resource Manual for Chapter 30.

 
