Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Boston, MA, United States of America.
Carolina Population Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States of America.
PLoS One. 2018 May 30;13(5):e0196346. doi: 10.1371/journal.pone.0196346. eCollection 2018.
The pathway from evidence generation to consumption contains many steps that can lead to overstatement or misinformation. The proliferation of internet-based health news may encourage the selection of media and academic research articles that overstate the strength of causal inference. We investigated the state of causal inference in health research as it appears at the end of this pathway, at the point of social media consumption.
We screened the NewsWhip Insights database for the media articles most shared on Facebook and Twitter in 2015 that reported on peer-reviewed academic studies associating an exposure with a health outcome, extracting the 50 most-shared academic articles and the media articles covering them. We designed and used a review tool to systematically assess and summarize each study's strength of causal inference, including generalizability, potential confounders, and methods used. These assessments were then compared with the strength of causal language used to describe results in both the academic and media articles. Each article was assessed by two randomly assigned independent reviewers and one arbitrating reviewer drawn from a pool of 21 reviewers.
We accepted for review the 64 most-shared media articles pertaining to 50 academic articles, representing 68% of Facebook and 45% of Twitter shares in 2015. Thirty-four percent of academic studies and 48% of media articles used language that reviewers considered stronger than warranted by the strength of causal inference. Seventy percent of academic studies were rated as having low or very low strength of causal inference, with only 6% rated high or very high. The most severe issues with the academic studies' causal inference were omitted confounding variables and limited generalizability. Fifty-eight percent of media articles were found to have inaccurately reported the question, results, intervention, or population of the academic study.
We find a large disparity between the strength of language presented to the research consumer and the underlying strength of causal inference among the studies most widely shared on social media. However, because this sample was designed to be representative of the articles selected and shared on social media, it is unlikely to be representative of all academic and media work. More research is needed to determine how academic institutions, media organizations, and social network sharing patterns shape the causal inference and language received by the research consumer.