Homa Hosseinmardi, Amir Ghasemian, Miguel Rivera-Lanas, Manoel Horta Ribeiro, Robert West, Duncan J. Watts
Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104.
Annenberg School for Communication, University of Pennsylvania, Philadelphia, PA 19104.
Proc Natl Acad Sci U S A. 2024 Feb 20;121(8):e2313377121. doi: 10.1073/pnas.2313377121. Epub 2024 Feb 13.
In recent years, critics of online platforms have raised concerns about the ability of recommendation algorithms to amplify problematic content, with potentially radicalizing consequences. However, attempts to evaluate the effect of recommenders have suffered from a lack of appropriate counterfactuals (what a user would have viewed in the absence of algorithmic recommendations) and hence cannot disentangle the effects of the algorithm from a user's intentions. Here we propose a method that we call "counterfactual bots" to causally estimate the role of algorithmic recommendations on the consumption of highly partisan content on YouTube. By comparing bots that replicate real users' consumption patterns with "counterfactual" bots that follow rule-based trajectories, we show that, on average, relying exclusively on the YouTube recommender results in less partisan consumption, where the effect is most pronounced for heavy partisan consumers. Following a similar method, we also show that if partisan consumers switch to moderate content, YouTube's sidebar recommender "forgets" their partisan preference within roughly 30 videos regardless of their prior history, while homepage recommendations shift more gradually toward moderate content. Overall, our findings indicate that, at least since the algorithm changes that YouTube implemented in 2019, individual consumption patterns mostly reflect individual preferences, where algorithmic recommendations play, if anything, a moderating role.