Hill/Levene Schools of Business, University of Regina, Regina, SK S4S 0A2, Canada;
Sloan School, Massachusetts Institute of Technology, Cambridge, MA 02138;
Proc Natl Acad Sci U S A. 2019 Feb 12;116(7):2521-2526. doi: 10.1073/pnas.1806781116. Epub 2019 Jan 28.
Reducing the spread of misinformation, especially on social media, is a major challenge. We investigate one potential approach: having social media platform algorithms preferentially display content from news sources that users rate as trustworthy. To do so, we ask whether crowdsourced trust ratings can effectively differentiate more versus less reliable sources. We ran two preregistered experiments (n = 1,010 from Mechanical Turk and n = 970 from Lucid) where individuals rated familiarity with, and trust in, 60 news sources from three categories: (i) mainstream media outlets, (ii) hyperpartisan websites, and (iii) websites that produce blatantly false content ("fake news"). Despite substantial partisan differences, we find that laypeople across the political spectrum rated mainstream sources as far more trustworthy than either hyperpartisan or fake news sources. Although this difference was larger for Democrats than Republicans (mostly due to distrust of mainstream sources by Republicans), every mainstream source (with one exception) was rated as more trustworthy than every hyperpartisan or fake news source across both studies when equally weighting ratings of Democrats and Republicans. Furthermore, politically balanced layperson ratings were strongly correlated (r = 0.90) with ratings provided by professional fact-checkers. We also found that, particularly among liberals, individuals higher in cognitive reflection were better able to discern between low- and high-quality sources. Finally, we found that excluding ratings from participants who were not familiar with a given news source dramatically reduced the effectiveness of the crowd. Our findings indicate that having algorithms up-rank content from trusted media outlets may be a promising approach for fighting the spread of misinformation on social media.
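For concreteness, the two quantitative steps the abstract describes, equally weighting Democrats' and Republicans' mean trust ratings for each source and then correlating those politically balanced crowd ratings with fact-checker ratings (the reported r = 0.90), can be sketched as below. This is a minimal illustration with made-up toy data, not the paper's actual analysis code; the function name and numbers are assumptions.

```python
import numpy as np

def politically_balanced_rating(dem_ratings, rep_ratings):
    """Equal-weight average of the two parties' mean trust ratings
    for one source, per the abstract's 'equally weighting' step.
    Hypothetical helper; the paper's exact pipeline may differ."""
    return (np.mean(dem_ratings) + np.mean(rep_ratings)) / 2.0

# Toy trust ratings (e.g., on a 1-5 scale) for a single source.
dem = [4, 5, 4, 5]   # Democrats' ratings (mean 4.5)
rep = [2, 3, 2, 3]   # Republicans' ratings (mean 2.5)
print(politically_balanced_rating(dem, rep))  # 3.5

# Across sources, the crowd/fact-checker agreement the abstract
# reports is a Pearson correlation over per-source ratings:
crowd = np.array([4.1, 3.8, 2.0, 1.5])      # made-up balanced crowd ratings
checkers = np.array([4.5, 4.0, 1.8, 1.2])   # made-up fact-checker scores
r = np.corrcoef(crowd, checkers)[0, 1]
print(round(r, 2))
```

Equal weighting matters because the samples need not contain the same number of Democrats and Republicans; averaging the two group means (rather than pooling all raters) prevents the larger group from dominating the trust score.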