Causal overstatements reduced in press releases following academic study of health news

Version 2. Wellcome Open Res. 2020; 5: 6.


Version Changes

Revised. Amendments from Version 1

In response to reviewer 1’s comments we have added more information about the data from 2011 to enable a better comparison. We have noted in the abstract and discussion that regression to the mean / natural fluctuation is a possible explanation, and removed the phrase ‘press release practice is malleable’. We have also adopted the phrase ‘causal over-statement’, as recommended. We have included Haber et al. and Shaffer et al. where recommended. We have corrected the typos.

In response to reviewer 2’s comments we have now defined ‘aligned’ in the methods and the legend of Figure 2. We have rephrased and expanded the sentence ‘the effect on news is diluted by other factors and so here we may have had sufficient power only to detect the effect on press releases’. As the reviewer notes, it is very difficult to avoid causal language even when we are primed to do so by the very topic of the paper. We now note more explicitly that the causal effect is hypothetical, and the data are merely consistent with this. We have also adopted the phrase ‘causal over-statement’, as recommended, and added more information about the reception of the original study in the introduction.

Abstract

Background: Exaggerations in health news were previously found to be strongly associated with similar exaggerations in press releases. Moreover, such exaggerations did not appear to attract more news coverage. Here we assess whether press release practice changed after these findings were reported; simply drawing attention to the issue may be insufficient for practical change, given the challenges of media environments.

Methods: We assessed whether rates of causal over-statement in press releases based on correlational data were lower following a widely publicised paper on the topic, compared to an equivalent baseline period in the preceding year.

Results: We found that rates of over-statement in press releases were 28% (95% confidence interval = 16% to 45%) in 2014 and 13% (95% confidence interval = 6% to 25%) in 2015. A corresponding numerical reduction in exaggerations in news was not significant. The association between over-statements in news and press releases remained strong.

Conclusions: Press release over-statements were less frequent following publication of Sumner et al. (2014). However, this is correlational evidence and the reduction may be due to other factors or natural fluctuations.

Keywords: science news, hype, exaggeration, science communication

Introduction

News media help disseminate health information to millions of readers, but appealing news stories can contain misleading claims, with associated risks to public health (Buhse et al., 2018 [1]; Grilli et al., 2002 [2]; Haneef et al., 2015 [3]; Matthews et al., 2016 [4]; Ramsay, 2013 [5]; Schwitzer, 2008 [6]; Sharma et al., 2003 [7]; Yavchitz et al., 2012 [8]).

Key sources of science and health news are press releases from journals, universities and funders (Autzen, 2014 [9]; de Semir et al., 1998 [10]). Previous observational research has found that health news content is strongly associated with press release content, including when exaggerations occur (Jackson & Moloney, 2016 [11]; Lewis et al., 2008 [12]; Schwartz et al., 2012 [13]; Sumner et al., 2014 [14]; Sumner et al., 2016 [15]; Yavchitz et al., 2012 [16]).

However, it is not clear whether such research has much influence on the practice of academics and press officers in the preparation of press releases. Given the need to write short, compelling statements about complex research, it is all too easy to inadvertently allow over-statements. We believe that the majority of exaggerations are not purposeful, but arise from the desire to be impactful, clear and accessible. It may be very difficult for this to change. The finding that many news exaggerations are already present in press release text (Sumner et al., 2014 [17]) certainly attracted interest and controversy (BMJ Altmetric, 2019 [18]). It was received positively by many press officers who are motivated to communicate science carefully, and helped catalyse some initiatives (The Academy of Medical Sciences, 2017 [19]). It was discussed at science communication conferences, in blogs, on Twitter, and directly with press officer teams while developing a collaborative trial (Adams et al., 2019 [20]).

However, we do not know if sharing awareness has any potential effect on practice. Here we simply assess whether the rate of over-stated claims was lower in the 6 months following publication of that article (January to June 2015) compared to the equivalent 6 months in the previous year (January to June 2014). Our main interest was press release claims – since these were the source identified in Sumner et al. (2014) [21] – but we also assessed the associated news stories. Clearly our data can only establish whether a detectable difference occurred, and will not establish its cause. We can additionally compare the data to a third time-point in 2011 (with some limitations).

We focus here on causal claims based on correlational evidence, a common and potentially impactful form of over-statement in academia and science reporting (Buhse et al., 2018 [22]; Ramsay, 2013 [23]; Wang et al., 2015 [24]). For example, Haber et al. (2018) [25] found that in health news highly shared on social media, the causal language was over-stated relative to the underlying strength of the evidence in about half of the cases they studied. Meanwhile, Shaffer et al. (2018) [26] provided some linking evidence that causal narratives in health news might genuinely affect readers’ health choices and intentions.

Distinguishing the types of evidence that can or cannot support causal inference is not intuitive (Norris et al., 2003 [27]). The distinction is multifactorial, but at its heart is the difference between correlational (observational) and experimental evidence. That is not to say that no observational study can support a causal inference (for example, if it is large, replicated, and other factors are controlled). There is also the question of reliability and power for small samples. It is reported that around half the correlations underlying media causal claims are not confirmed by later meta-analyses (Dumas-Mallet et al., 2017 [28]).

Sumner et al. (2014) [29] tested three types of exaggeration: causal claims, advice, and human claims from animal experiments. The results for causal claims appear robust across several studies in different contexts or using different analysis protocols (Adams et al., 2017 [30]; Adams et al., 2019 [31]; Bratton et al., 2019 [32]; Buhse et al., 2018 [33]; Schat et al., 2018 [34]; Sumner et al., 2014 [35]; Sumner et al., 2016 [36]). The results for exaggerated advice did not replicate in a subsequent sample (Bratton et al., 2019 [37]), probably because exaggeration in advice is difficult to define in a way that applies to all cases. Human claims based on animal research have dropped in frequency in the UK since the Declaration on Openness on Animal Research (2012) and Concordat on Openness on Animal Research (2014), probably because one previous motivation for such ‘exaggerations’ was to avoid revealing animal research facilities (Bratton et al., 2019 [38]; see also supplementary information in Adams et al., 2019 [39]). Therefore we focus here on causal claims based on correlational evidence – which we will refer to as ‘causal over-statement’ – testing whether rates were lower in a six-month period in 2015 compared to the same months in 2014.

Method

Collection of press releases, journal articles and news

Press releases from 2014 and 2015 were collected from the same sample of 20 universities as used in Sumner et al. (2014) [40], as well as from the BMJ, which published the paper. The press offices of these institutions were the most directly aware of the findings. This dataset is an expanded version of the dataset used in Bratton et al. (2019) [41], which replicated Sumner et al. (2014) [42] with the 2014 and 2015 samples from the 20 universities. The observation periods were January to June 2014 (pre-publication), and January to June 2015 (post-publication; Sumner et al., 2014 [43] was published in December). We chose a 6-month period to aim for a sufficient sample, and compared equivalent months in case press release output has seasonal changes (e.g. associated with the academic year). Online repositories (websites, and EurekAlert.org) were searched for any press releases from the included institutions. This resulted in a corpus of 4706 press releases. The sample was then restricted to press releases relevant to human health, using the same criteria as Sumner et al. (2014) [44] (which included all biomedical, lifestyle, public health and psychological topics), that reported on a single, published, peer-reviewed research article. This left 1033 relevant press releases. To ensure similar sample numbers across institutions and to reduce the sample to one we had resources to code, we implemented a cap of 10 press releases for each time period for each institution, through random selection where necessary; a sketch of this step follows below. This resulted in a sample of 368 press releases, for which the associated peer-reviewed journal articles were retrieved.
For each press release, relevant news articles (i.e. those which make reference to the source research) were collected via keyword searches using Google Search and the Nexis database (LexisNexis, New York, NY), up to 28 days after publication of the press release, and up to one week before (to allow for rare embargo breaches). The sample was then limited to cases where the study design was observational cross-sectional, observational longitudinal, or an observational meta-analysis (N=168 press releases). For analysing over-statements in press releases and news, we only used cases where the journal article was not already over-stated (N=98 press releases; 322 news articles).
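The capping step described above amounts to simple random subsampling within each institution and period. A minimal Python sketch, assuming a hypothetical mapping from (institution, period) to lists of press-release records (not the authors' actual tooling):

```python
import random

def cap_per_group(releases_by_group, cap=10, seed=1):
    """Randomly cap press releases per (institution, period) group.

    `releases_by_group` is a hypothetical dict mapping
    (institution, period) -> list of press-release records.
    Groups at or under the cap are kept whole; larger groups are
    reduced by random sampling without replacement.
    """
    rng = random.Random(seed)
    capped = {}
    for group, items in releases_by_group.items():
        capped[group] = list(items) if len(items) <= cap else rng.sample(items, cap)
    return capped
```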

Article coding

Prior to coding, the corpus of articles underwent a redaction process using Automator software (5.0, Apple Inc.) to remove any references to the year 2014 or 2015. This ensured that the coders, who were aware of the aim of the study, could not tell which condition each article belonged to. The articles were coded using the standardised coding sheet used by Adams et al., 2017 [45] (see raw data folder ‘before_after_data’ in Chambers et al., 2019 [46]). For this analysis, only information regarding the statements of causal or correlational relationship was used. Two researchers (LB and AC or RCA) independently coded each article and any disagreements were discussed, with a third coder (AC or RCA) where necessary. This created a database with 100% agreement in coding.
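The authors performed this redaction with Automator; a minimal sketch of the equivalent operation in Python (the placeholder token is our own choice, not the authors') could be:

```python
import re

def redact_condition_years(text):
    """Mask explicit mentions of the two condition years (2014, 2015)
    so coders cannot infer which period an article came from."""
    return re.sub(r"\b(2014|2015)\b", "[YEAR]", text)

# The coder sees no year cue in the redacted text:
print(redact_condition_years("Published 12 March 2015 by the press office."))
# -> "Published 12 March [YEAR] by the press office."
```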

Coding of causal and correlational claims. We used the scale developed by Adams et al. (2017) [47], in which directly causal statements and ‘can cause’ statements are classed as over-statements for correlational evidence. On the other hand, a claim was not classed as an over-statement if it contained ‘might’, ‘may’, ‘could’, ‘linked to’, ‘predicts’, ‘associated with’ or other associative or conditional phrases. We refer to these phrases as ‘aligned’ with correlational evidence. Although Sumner et al. (2014) [48] originally distinguished between some of these phrases, readers were found not to consistently rank any of them as stronger than the others (Adams et al., 2017 [49]). In contrast, readers consistently ranked ‘can cause’ and directly causal statements as stronger statements.

The strongest claims relating two variables in the study (e.g. a food and a disease) were recorded from the abstracts and discussion sections of journal articles. For press releases and news articles, the strongest statement was coded from the first two sentences of main text (where these were directly relevant to the research; general context was excluded).

We defined over-statements as causal or ‘can cause’ claims based on correlational evidence. We only analysed cases where the journal article did not already make such claims, since our focus was not on claims taken straight from the publication, where publication lags mean that such causal claims may have been originally penned at any time in 2014 or early 2015.
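Coding was done by trained human coders, but the decision rule can be illustrated with a toy keyword matcher. The marker lists below paraphrase the scale described above and are not the authors' instrument:

```python
# Phrases 'aligned' with correlational evidence (checked first, so that
# hedged phrases such as "may cause" are not misread as causal claims).
ALIGNED_MARKERS = ["might", "may", "could", "linked to", "predicts",
                   "associated with"]
# Directly causal or 'can cause' phrasing counts as an over-statement
# when the underlying evidence is correlational.
CAUSAL_MARKERS = ["can cause", "causes", "leads to"]

def classify_claim(claim):
    """Toy illustration of the coding rule; real coding was done by hand."""
    text = claim.lower()
    if any(marker in text for marker in ALIGNED_MARKERS):
        return "aligned"
    if any(marker in text for marker in CAUSAL_MARKERS):
        return "over-statement"
    return "unclassified"

print(classify_claim("Red meat may be linked to heart disease"))  # aligned
print(classify_claim("Red meat causes heart disease"))            # over-statement
```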

Statistical analysis

Consistent with our previous approach (Sumner et al., 2014 [50]), generalised estimating equations (GEE) were used (in SPSS version 24) to provide estimates and confidence intervals adjusting for the clustering of multiple articles to one source (multiple news articles from one press release; or multiple press releases from the same institution). The GEE is an extension of the quasi-likelihood approach and is used in situations in which data are clustered, to estimate how much each data point should contribute statistically. The key part of the process is to estimate the correlation of data within clusters. At one extreme, all data within clusters might be fully correlated, in which case there are really only as many samples as there are clusters; separating the data points within clusters adds no additional information. At the other extreme, data within clusters may be entirely uncorrelated; in this case the clustering does not matter and all data points can be treated as independent. In reality, data within clusters tend to be somewhat correlated, and the GEE estimates this and applies a weighting factor to those data points depending on the degree of correlation. The approach is accessibly explained by Hanley et al. (2003) [51], so we do not replicate the equations here. We used a logit link function because the data are binary, and an exchangeable working correlation, which is a common approach for clustered data and makes the parsimonious assumption that correlations are similar between all data within clusters.
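The analysis was run in SPSS; an equivalent specification in Python's statsmodels (with hypothetical column names and toy data, not the study dataset) would look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per press release, with a binary `overstated`
# outcome, a `year` condition, and the issuing `institution` as the cluster.
df = pd.DataFrame({
    "overstated":  [1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0],
    "year":        ["2014"] * 6 + ["2015"] * 6,
    "institution": ["A", "A", "B", "B", "C", "C"] * 2,
})

# Binomial family with the default logit link; the exchangeable working
# correlation assumes equal correlation between all pairs within a cluster.
model = smf.gee("overstated ~ year", groups="institution", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())
print("odds ratios:", np.exp(result.params))  # exponentiate log-odds coefficients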

Results

Press release overstatements

In the sample from 2014, 28% (95% confidence interval = 16% to 45%) of press releases made a causal over-statement: a causal claim or ‘can cause’ claim when the data were correlational and the journal article had not made a similar claim. In the sample from 2015, this rate was significantly lower, at 13% (95% confidence interval = 6% to 25%, see Figure 1). Thus, the odds of such over-statement were higher in 2014 (odds ratio = 2.7, 95% confidence interval = 1.03 to 6.97).
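As a check on the arithmetic, the unadjusted odds ratio from the two rates is close to the cluster-adjusted GEE estimate:

```python
p_2014, p_2015 = 0.28, 0.13

odds_2014 = p_2014 / (1 - p_2014)   # ~0.39
odds_2015 = p_2015 / (1 - p_2015)   # ~0.15
print(odds_2014 / odds_2015)        # ~2.6; the GEE, which weights for
                                    # clustering, reports 2.7
```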

Figure 1.

Overstatement rates in press releases and news in 2014 and 2015. The rate for press releases was significantly reduced in 2015 versus 2014. For news, the apparent reduction was not significant. Error bars represent 95% confidence intervals.

Comparison to 2011

Adams et al. (2017) [52] analysed a database of press releases from universities in 2011 using a definition of causal over-statement similar to the one used here. The rate of press releases with causal over-statement was 19% (95% confidence interval = 14% to 25%). There were some methodological differences; it was a reanalysis of the data in Sumner et al. (2014) [53], who collected a full year of press releases from 20 universities with no cap on numbers from each institution, and used partial instead of complete double coding.

News overstatements

In the sample from 2014, 30% (95% confidence interval = 17% to 47%) of news articles made a causal over-statement. In the sample from 2015, this rate was 20% (95% confidence interval = 11% to 32%, see Figure 1). This was not a significant difference (95% confidence interval of the odds ratio = 0.6 to 4.8). These numbers can also be compared to those for news in 2011, analysed by Adams et al. (2017) [54]: the rate of news with causal over-statement was 32% (95% confidence interval = 24% to 41%).

News statements as a function of press release statements

To assess whether the drop in press release over-statements meant a weakening of the previously found association between news claims and press release claims, we assessed this association following the same methods as previously described (Bratton et al., 2019 [55]; Schat et al., 2018 [56]; Sumner et al., 2014 [57]; Sumner et al., 2016 [58]). Across 2014 and 2015 combined, the odds of over-stated news claims were 12 times higher (95% confidence interval = 4.5 to 32) for over-stated press releases (69% news over-stated, 95% confidence interval = 49% to 84%) than for aligned press releases (16% news over-stated, 95% confidence interval = 10% to 24%). This association between news and press releases (Figure 2) was not different between the years (odds ratio = 1.1, 95% confidence interval = 0.2 to 6.2) and is consistent with the association between news and press releases seen for exaggerations and other content previously.
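The reported 12-fold odds ratio follows from the two conditional rates (up to rounding and cluster adjustment):

```python
p_news_given_overstated = 0.69   # news over-stated when press release over-stated
p_news_given_aligned    = 0.16   # news over-stated when press release aligned

odds_overstated = p_news_given_overstated / (1 - p_news_given_overstated)  # ~2.2
odds_aligned    = p_news_given_aligned / (1 - p_news_given_aligned)        # ~0.19
print(odds_overstated / odds_aligned)  # ~11.7, matching the reported ~12
```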

Figure 2.

Causal overstatements in news articles as a function of press release overstatement and year of publication.

‘Aligned’ press releases or news are those that do not make causal claims stronger than ‘may cause’. The association between news and press releases is present in both years and not statistically different between years. Error bars represent 95% confidence intervals.

Discussion

We set out to assess whether there was evidence of changes in press release practice after academic publications about health news and press releases. We found an approximate halving (28% to 13%) in the rate of causal over-statements in press releases based on correlational evidence in the 6 months following a widely shared publication (Sumner et al., 2014 [59]) compared to an equivalent 6 months the preceding year (Figure 1). These rates can additionally be compared to the rate of 19% in a dataset from 2011 (albeit with some differences in methodology).

This evidence is correlational itself, and may not mean that the publication caused the change, since other factors may also have changed between 2014 and 2015. There has been scrutiny of health news and press releases from multiple quarters, and press officer staff turnover may spontaneously change the balance of language in causal claims. At one extreme, it is possible that the changes were fully random: that 2014 was an unusually high year, and a drop to 13% was merely natural fluctuation/regression to the mean. At the other extreme is a fully causal explanation: that press releases were on a trajectory of rising causal over-statement, and awareness-raising reversed that trend. The truth normally lies somewhere between extreme interpretations, and all the above factors may have played a role. Moreover, whatever the causal chain, the drop or fluctuation shows that a high rate of causal language is not inevitable in press releases, despite the need to be concise and appealing.

Beyond the main focus on press releases, we also saw a numerical reduction in over-statements in news, but this was not significant (Figure 1). However, we found a strong association between news and press release language (Figure 2), consistent across years and consistent with previous research (Bratton et al., 2019 [60]; Schat et al., 2018 [61]; Sumner et al., 2014 [62]; Sumner et al., 2016 [63]). There was no reason for this association to change while the time pressures on journalists remain intense, and importantly it did not weaken with the reduction of over-statement in press releases. This strong association raises the question of why the significant reduction in press release over-statement was not mirrored by a significant reduction in news over-statement (Figure 1), if a causal chain were operating such that press release claims influence news claims.

In fact the data are consistent with such a causal effect, because it is expected to be diluted by other factors. Numerically, if news carries over-statements for around 70% of over-stated press releases and 15% of non-over-stated press releases (e.g. Figure 2), and if this difference is causal, we can calculate the expected change in news over-statement as a result of the change in press release over-statement seen in Figure 1 (28% to 13%). We would therefore expect over-statements in around 30% (0.7*28+0.15*72) of news in 2014 and 22% (0.7*13+0.15*87) of news in 2015, which is close to what we saw, and clearly a diluted effect compared with the press release reduction of 28% to 13% (this outline calculation assumes similar news uptake for press releases regardless of over-statement, consistent with previous results; Bratton et al., 2019 [64]; Schat et al., 2018 [65]; Sumner et al., 2014 [66]; Sumner et al., 2016 [67]). Therefore our results are consistent with the non-significant effect in news being due to dilution and insufficient power, and should not be taken as evidence for no change in the news despite a difference in press releases.
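The expected dilution is a weighted average of the two conditional rates; a few lines make the outline calculation explicit:

```python
# Conditional rates of news over-statement (approximately, from Figure 2)
p_given_overstated_pr = 0.70
p_given_aligned_pr    = 0.15

def expected_news_rate(pr_rate):
    """Expected news over-statement rate given the press-release rate,
    assuming similar news uptake regardless of over-statement."""
    return p_given_overstated_pr * pr_rate + p_given_aligned_pr * (1 - pr_rate)

print(expected_news_rate(0.28))  # ~0.30 for 2014
print(expected_news_rate(0.13))  # ~0.22 for 2015
```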

We based our analysis on press releases and news for journal articles that did not already make causal claims. Of additional note, a GEE analysis of the journal articles themselves showed there were already causal claims in an estimated 40% of the 168 peer-reviewed journal articles based on correlational evidence (and meeting our other inclusion criteria). This tendency to use causal language, even in peer-reviewed research conclusions, has been noted previously (Wang et al., 2015 [68]). It would be worth following such rates to find if they too might show signs of decreasing as awareness is raised.

We analysed only one form of over-statement – causal claims, which are a cornerstone of scientific inference. There are many other forms of potential over-statement that we did not analyse, including the two originally assessed by Sumner et al. (2014) [69]: advice to readers and human inference based on non-human research. There are different reasons why we did not use these here as a test for professional practice change. For advice, we cannot compare it objectively to an aspect of study methods, and we have recently reported that the association between advice in news and press releases did not replicate (using a subset of the 2014/15 sample used here; Bratton et al., 2019 [70]). We believe this may show that advice exaggeration is difficult to define. Audiences change between journal articles, press releases and news, and thus the appropriate phrasing of advice may legitimately change. For human inference from non-human research, the publication date of Sumner et al., 2014 [71] was confounded with the Concordat on Openness on Animal Research in the UK, which was signed by the majority of the institutions in this replication from May 2014 onwards. This is likely to be the major driver of drops in the rate of animal research studies being described in human terms (Adams et al., 2019 [72]; Bratton et al., 2019 [73]).

Conclusions

Previous converging evidence suggests that press releases have a strong influence on health and science news, which in turn influences public health decisions and health practitioners (see Introduction). Here, we found a reduction in press release causal over-statement associated with the publication of a study examining news and press release exaggerations. Although correlational, this evidence may suggest that press release practice can change in response to research, given that causal over-statements do not seem to have been decreasing prior to this. However, natural fluctuation or other factors that changed between 2014 and 2015 could also explain the data.

Notes

[version 2; peer review: 2 approved]

Funding Statement

This work was supported by Wellcome grant 104943/Z/14/Z (CDC and PS), ESRC grant ES/M000664/1 (PS), H2020 ERC Consolidator grant 647893-CCT (CDC).

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Buhse et al., 2018
2. Grilli et al., 2002
3. Haneef et al., 2015
4. Matthews et al., 2016
5. Ramsay, 2013
6. Schwitzer, 2008
7. Sharma et al., 2003
8. Yavchitz et al., 2012
9. Autzen, 2014
10. de Semir et al., 1998
11. Jackson & Moloney, 2016
12. Lewis et al., 2008
13. Schwartz et al., 2012
14. Sumner et al., 2014
15. Sumner et al., 2016
16. Yavchitz et al., 2012
17. Sumner et al., 2014
18. BMJ Altmetric, 2019
19. The Academy of Medical Sciences, 2017
20. Adams et al., 2019
21. Sumner et al., 2014
22. Buhse et al., 2018
23. Ramsay, 2013
24. Wang et al., 2015
25. Haber et al., 2018
26. Shaffer et al., 2018
27. Norris et al., 2003
28. Dumas-Mallet et al., 2017
29. Sumner et al., 2014
30. Adams et al., 2017
31. Adams et al., 2019
32. Bratton et al., 2019
33. Buhse et al., 2018
34. Schat et al., 2018
35. Sumner et al., 2014
36. Sumner et al., 2016
37. Bratton et al., 2019
38. Bratton et al., 2019
39. Adams et al., 2019
40. Sumner et al., 2014
41. Bratton et al., 2019
42. Sumner et al., 2014
43. Sumner et al., 2014
44. Sumner et al., 2014
45. Adams et al., 2017
46. Chambers et al., 2019
47. Adams et al., 2017
48. Sumner et al., 2014
49. Adams et al., 2017
50. Sumner et al., 2014
51. Hanley et al., 2003
52. Adams et al., 2017
53. Sumner et al., 2014
54. Adams et al., 2017
55. Bratton et al., 2019
56. Schat et al., 2018
57. Sumner et al., 2014
58. Sumner et al., 2016
59. Sumner et al., 2014
60. Bratton et al., 2019
61. Schat et al., 2018
62. Sumner et al., 2014
63. Sumner et al., 2016
64. Bratton et al., 2019
65. Schat et al., 2018
66. Sumner et al., 2014
67. Sumner et al., 2016
68. Wang et al., 2015
69. Sumner et al., 2014
70. Bratton et al., 2019
71. Sumner et al., 2014
72. Adams et al., 2019
73. Bratton et al., 2019