Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore.
Soc Sci Med. 2023 Jul;328:115979. doi: 10.1016/j.socscimed.2023.115979. Epub 2023 May 22.
This study examines the proliferation of COVID-19 misinformation through Plandemic, a pseudo-documentary promoting COVID-19 conspiracy theories, on social media and investigates how factors such as (a) themes of misinformation, (b) types of misinformation, (c) sources of misinformation, (d) emotions of misinformation, and (e) fact-checking labels amplified or attenuated online misinformation during the early days of the pandemic. Using CrowdTangle, a Facebook API, we collected a total of 5732 publicly available Facebook page posts containing Plandemic-related keywords from January 1 to December 19, 2020. A random sample of 600 posts was subsequently coded, and the data were analyzed using negative binomial regression to examine factors associated with amplification and attenuation. Overall, an extended Social Amplification of Risk Framework (SARF) provided a theoretical lens for understanding why some misinformation was amplified while other misinformation was attenuated. Among posts containing misinformation, themes related to private firms, treatment and prevention of virus transmission, diagnosis and health impacts, virus origins, and social impact were more likely to be amplified. While the type of misinformation (manipulated, fabricated, or satire) and its emotional content were not associated with amplification, the type of fact-check label did influence the virality of misinformation. Specifically, posts flagged by Facebook as false were more likely to be amplified, whereas the virality of posts flagged as containing partially false information was attenuated. Theoretical and practical implications are discussed.
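To illustrate the kind of analysis described in the abstract, the following is a minimal, hypothetical sketch of a negative binomial regression on post engagement using Python's statsmodels. The file name, column names, and dispersion value are illustrative assumptions, not the authors' actual variables, coding scheme, or data.

```python
# Hypothetical sketch: negative binomial regression of post engagement on
# coded misinformation features. All names below are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical coded sample of Facebook posts (one row per post).
posts = pd.read_csv("coded_posts.csv")

# Model share counts as a function of dummy-coded theme, misinformation type,
# source, emotion, and fact-check label predictors (all hypothetical).
model = smf.glm(
    "shares ~ theme_private_firms + theme_treatment + misinfo_fabricated"
    " + source_individual + emotion_negative + flag_false + flag_partly_false",
    data=posts,
    family=sm.families.NegativeBinomial(alpha=1.0),
).fit()

# Coefficients are on the log scale; exponentiating them gives incidence-rate
# ratios, i.e., how much a feature multiplies expected engagement.
print(model.summary())
```

In this sketch, a positive coefficient on a predictor would indicate amplification (higher expected share counts) and a negative coefficient attenuation, mirroring the amplification/attenuation framing of the abstract.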