Mohamad Abdalla, Mustafa Ally, Rita Jabri-Markwell
Centre for Islamic Thought and Education, UniSA, Adelaide, Australia.
School of Education, Victoria University of Wellington, Wellington, New Zealand.
SN Soc Sci. 2021;1(9):238. doi: 10.1007/s43545-021-00240-4. Epub 2021 Sep 22.
Whilst preventing dehumanisation of outgroups is a widely accepted goal in the field of countering violent extremism, the current algorithms used by social media platforms focus on detecting individual samples through explicit language. This study tests whether explicit dehumanising language directed at Muslims is detected by the tools of Facebook and Twitter, and further, whether the presence of explicit dehumanising terms is necessary to successfully dehumanise 'the other' (in this case, Muslims). Answering both questions in the negative, the analysis extracts universally useful analytical tools that could be used together to consistently and competently assess actors using dehumanisation as a measure, even where that dehumanisation is cumulative and grounded in discourse rather than explicit language. The output of one prolific actor identified by researchers as an anti-Muslim hate organisation, and of four other anti-Muslim actors, is discursively analysed, and impacts are considered through the comments they elicit. Whilst this study focuses on material gathered with respect to anti-Muslim discourses, the findings are relevant to a range of contexts where groups are dehumanised on the basis of race or another protected attribute. The study suggests it is possible to predict aggregate harm by specific actors from a range of samples of borderline content, each of which might be difficult to discern as harmful individually.
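The closing claim, that aggregate harm by an actor can be anticipated from many individually borderline items, can be illustrated with a minimal sketch. The Post structure, the per-item dehumanisation score, and the thresholds below are hypothetical assumptions introduced for illustration, not the paper's method or any platform's actual pipeline; the point is only the contrast between per-item flagging and actor-level aggregation.

```python
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean

@dataclass
class Post:
    actor: str            # account or page that published the item (hypothetical field)
    score: float          # 0..1 dehumanisation score from some per-item classifier (assumed)

def flag_individual_posts(posts, item_threshold=0.9):
    """Per-item moderation: only posts whose own score crosses a high bar are actioned."""
    return [p for p in posts if p.score >= item_threshold]

def flag_actors(posts, mean_threshold=0.6, min_items=20):
    """Actor-level assessment: aggregate many borderline items per actor, so a cumulative,
    discourse-based pattern can surface even when no single post would be actioned alone."""
    by_actor = defaultdict(list)
    for p in posts:
        by_actor[p.actor].append(p.score)
    return {
        actor: mean(scores)
        for actor, scores in by_actor.items()
        if len(scores) >= min_items and mean(scores) >= mean_threshold
    }

# Example: an actor posting many items scored around 0.7 is never caught by the
# per-item rule, but is surfaced by the aggregate rule.
posts = [Post("actor_a", 0.7) for _ in range(30)] + [Post("actor_b", 0.1) for _ in range(30)]
print(flag_individual_posts(posts))   # [] — nothing crosses the per-item bar
print(flag_actors(posts))             # {'actor_a': 0.7}
```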