An investigation of social media labeling decisions preceding the 2020 U.S. election.
Affiliations
School of International Service, American University, Washington, D.C., United States of America.
Stanford Internet Observatory, Stanford University, Stanford, California, United States of America.
Publication information
PLoS One. 2023 Nov 15;18(11):e0289683. doi: 10.1371/journal.pone.0289683. eCollection 2023.
Since it is difficult to determine whether social media content moderators have assessed particular content, it is hard to evaluate the consistency of their decisions within platforms. We study a dataset of 1,035 posts on Facebook and Twitter to investigate this question. The posts in our sample made 78 misleading claims related to the U.S. 2020 presidential election. These posts were identified by the Election Integrity Partnership, a coalition of civil society groups, and sent to the relevant platforms, where employees confirmed receipt. The platforms labeled some (but not all) of these posts as misleading. For 69% of the misleading claims, Facebook consistently labeled each post that included one of those claims, either always or never adding a label. It inconsistently labeled the remaining 31% of misleading claims. The findings for Twitter are nearly identical: 70% of the claims were labeled consistently, and 30% inconsistently. We investigated these inconsistencies and found that, based on publicly available information, most of the platforms' decisions were arbitrary. However, in about a third of the cases we found plausible reasons that could explain the inconsistent labeling, although these reasons may not be aligned with the platforms' stated policies. Our strongest finding is that Twitter was more likely to label posts from verified users, and less likely to label identical content from non-verified users. This study demonstrates how academic-industry collaborations can provide insights into typically opaque content moderation practices.
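As a rough illustration of the consistency measure described in the abstract, the sketch below counts a claim as consistently moderated when every post containing it received the same decision (always labeled or never labeled). This is a minimal reconstruction in plain Python; the record layout, function name, and toy values are assumptions for illustration, not the study's actual data or code.

    from collections import defaultdict

    # Hypothetical records: (claim_id, platform, was_labeled).
    # The toy values below are illustrative, not the paper's dataset.
    posts = [
        ("claim_01", "facebook", True),
        ("claim_01", "facebook", True),   # all posts labeled -> consistent
        ("claim_02", "facebook", True),
        ("claim_02", "facebook", False),  # mixed decisions -> inconsistent
        ("claim_03", "twitter", False),
        ("claim_03", "twitter", False),   # no posts labeled -> consistent
    ]

    def consistency_by_platform(posts):
        """Share of claims whose posts were either all labeled or all unlabeled."""
        decisions = defaultdict(set)
        for claim, platform, labeled in posts:
            decisions[(platform, claim)].add(labeled)
        tallies = defaultdict(lambda: [0, 0])  # platform -> [consistent, total]
        for (platform, _claim), outcomes in decisions.items():
            tallies[platform][0] += len(outcomes) == 1  # single outcome = consistent
            tallies[platform][1] += 1
        return {p: consistent / total for p, (consistent, total) in tallies.items()}

    print(consistency_by_platform(posts))
    # {'facebook': 0.5, 'twitter': 1.0}

Under this definition, the abstract's 69% (Facebook) and 70% (Twitter) figures correspond to the per-platform shares of consistently handled claims computed over the 78 misleading claims.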
Similar articles
J Med Internet Res. 2020-9-29
Monogr Soc Res Child Dev. 2019-9
J Am Geriatr Soc. 2020-12
Cited by
PNAS Nexus. 2024-12-19