Askari Hadi, Chhabra Anshuman, von Hohenberg Bernhard Clemm, Heseltine Michael, Wojcieszak Magdalena
Department of Computer Science, University of California, Davis, USA.
Department of Computer Science and Engineering, University of South Florida, Tampa, USA.
PNAS Nexus. 2024 Aug 23;3(9):pgae368. doi: 10.1093/pnasnexus/pgae368. eCollection 2024 Sep.
Polarization, misinformation, declining trust, and wavering support for democratic norms are pressing threats to the US. Exposure to verified and balanced news may make citizens more resilient to these threats. This project examines how to enhance users' exposure to and engagement with verified and ideologically balanced news in an ecologically valid setting. We rely on a 2-week-long field experiment on 28,457 Twitter users. We created 28 GPT-2-based bots that replied to users tweeting about sports, entertainment, or lifestyle with a contextual reply containing a URL to the topic-relevant section of a verified and ideologically balanced news organization and an encouragement to follow its Twitter account. To test for differential effects by bot gender, treated users were randomly assigned to receive responses from bots presented as female or male. We examine whether our intervention enhances the following of news media organizations, sharing and liking of news content (determined by our extensive list of news media outlets), tweeting about politics, and liking of political content (determined using our fine-tuned RoBERTa transformer-based NLP model). Although the treated users followed more news accounts and the users in the female bot treatment liked more news content than the control, these results were small in magnitude and confined to already politically interested users, as indicated by their pretreatment tweeting about politics. In addition, the effects on liking and posting political content were uniformly null. These findings have implications for social media and news organizations and offer directions for pro-social computational interventions on platforms.
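For illustration, a minimal sketch of how political-content labeling with a fine-tuned RoBERTa classifier might look using the Hugging Face transformers library. This is not the authors' code: the checkpoint path, label index for the "political" class, and decision threshold are all assumptions.

```python
# Sketch: classifying tweets as political vs. non-political with a fine-tuned
# RoBERTa model (Hugging Face transformers). The checkpoint directory below is
# a hypothetical placeholder, not the authors' released model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_DIR = "path/to/finetuned-roberta-political"  # hypothetical local checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_DIR)
model.eval()

def is_political(tweets, threshold=0.5):
    """Return a boolean per tweet indicating whether it is scored as political."""
    inputs = tokenizer(tweets, padding=True, truncation=True,
                       max_length=128, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Assumes label index 1 corresponds to the "political" class.
    probs = torch.softmax(logits, dim=-1)[:, 1]
    return (probs >= threshold).tolist()

print(is_political(["New poll shows a tight Senate race.",
                    "Great game last night!"]))
```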