Lanius Candice, Weber Ryan, MacKenzie William I
University of Alabama in Huntsville, Huntsville, AL USA.
Soc Netw Anal Min. 2021;11(1):32. doi: 10.1007/s13278-021-00739-x. Epub 2021 Mar 12.
The COVID-19 infodemic is driven partially by Twitter bots. Flagging bot accounts and the misinformation they share could provide one strategy for preventing the spread of false information online. This article reports on an experiment (n = 299) conducted with participants in the USA to see whether flagging tweets as coming from bot accounts and as containing misinformation can lower participants' self-reported engagement with and attitudes about the tweets. This experiment also showed participants tweets that aligned with their previously held beliefs to determine how flags affect their overall opinions. Results showed that flagging tweets lowered participants' attitudes about them, though this effect was less pronounced in participants who frequently used social media or consumed more news, especially from Facebook or Fox News. Some participants also changed their opinions after seeing the flagged tweets. The results suggest that social media companies can flag suspicious or inaccurate content as a way to fight misinformation. Flagging could be built into future automated fact-checking systems and other misinformation abatement strategies of the social network analysis and mining community.