Munzert Simon, Traunmüller Richard, Barberá Pablo, Guess Andrew, Yang JungHwan
Data Science Lab, Hertie School, 10017 Berlin, BE, Germany.
School of Social Sciences, University of Mannheim, 68159 Mannheim, BW, Germany.
PNAS Nexus. 2025 Feb 12;4(2):pgaf032. doi: 10.1093/pnasnexus/pgaf032. eCollection 2025 Feb.
The shift of public discourse to online platforms has intensified the debate over content moderation by platforms and the regulation of online speech. Designing rules that meet with wide acceptance requires learning about public preferences. We present a visual vignette study in which a sample of German and US citizens was exposed to synthetic social media vignettes mimicking actual cases of hateful speech. We find that people's evaluations are shaped primarily by message type and severity, and less by contextual factors. While targeted measures such as deleting hateful content are popular, more extreme sanctions such as job loss find little support even in cases of extreme hate. Further evidence suggests in-group favoritism among political partisans. Experimental evidence shows that exposure to hateful speech reduces tolerance of unpopular opinions.