Windisch Steven, Wiedlitzka Susann, Olaghere Ajima, Jenaway Elizabeth
Department of Criminal Justice, Temple University, Philadelphia, Pennsylvania, USA.
School of Social Sciences, The University of Auckland, Auckland, New Zealand.
Campbell Syst Rev. 2022 May 25;18(2):e1243. doi: 10.1002/cl2.1243. eCollection 2022 Jun.
BACKGROUND: The unique feature of the Internet is that individual negative attitudes toward minoritized and racialized groups and more extreme, hateful ideologies can find their way onto specific platforms and instantly connect people sharing similar prejudices. The enormous frequency of hate speech/cyberhate within online environments creates a sense of normalcy about hatred and the potential for acts of intergroup violence or political radicalization. While there is some evidence of effective interventions to counter hate speech through television, radio, youth conferences, and text messaging campaigns, interventions for online hate speech have only recently emerged.
OBJECTIVES: This review aimed to assess the effects of online interventions to reduce online hate speech/cyberhate.
SEARCH METHODS: We systematically searched 2 database aggregators, 36 individual databases, 6 individual journals, and 34 websites, and scrutinized the bibliographies of published reviews and annotated bibliographies of related literature.
SELECTION CRITERIA: We included randomized and rigorous quasi-experimental studies of online hate speech/cyberhate interventions that measured the creation and/or consumption of hateful content online and included a control group. Eligible populations included youth (10-17 years) and adult (18+ years) participants of any racial/ethnic background, religious affiliation, gender identity, sexual orientation, nationality, or citizenship status.
DATA COLLECTION AND ANALYSIS: The systematic search covered January 1, 1990 to December 31, 2020, with searches conducted between August 19, 2020 and December 31, 2020, and supplementary searches undertaken between March 17 and 24, 2022. We coded characteristics of the intervention, sample, outcomes, and research methods. We extracted quantitative findings in the form of a standardized mean difference effect size. We conducted a meta-analysis of two independent effect sizes.
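To illustrate the kind of effect size extraction described above, the sketch below computes a standardized mean difference (Hedges' g) and its variance from treatment/control summary statistics. It is a minimal sketch of the general formula only; the function name and all numeric inputs are hypothetical placeholders, not data from the included studies.

import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference (Hedges' g) and its variance."""
    # Pooled standard deviation across the two groups
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled          # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c - 2) - 1)      # small-sample correction factor
    var_d = (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))
    return j * d, j**2 * var_d

# Hypothetical summary statistics, for illustration only
g, var_g = hedges_g(mean_t=0.8, mean_c=1.0, sd_t=1.5, sd_c=1.4, n_t=780, n_c=790)
print(f"g = {g:.3f}, variance = {var_g:.4f}")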
MAIN RESULTS: Two studies were included in the meta-analysis, one of which had three treatment arms. For the purposes of the meta-analysis, we chose the treatment arm from the Álvarez-Benjumea and Winter (2018) study that most closely aligned with the treatment condition in the Bodine-Baron et al. (2020) study. However, we also present additional single effect sizes for the other treatment arms from the Álvarez-Benjumea and Winter (2018) study. Both studies evaluated the effectiveness of an online intervention for reducing online hate speech/cyberhate. The Bodine-Baron et al. (2020) study had a sample size of 1570 subjects, while the Álvarez-Benjumea and Winter (2018) study had a sample size of 1469 tweets (nested in 180 subjects). The mean effect size was small (standardized mean difference = -0.134, 95% confidence interval [-0.321, -0.054]). Each study was assessed for risk of bias on the following domains: randomization process, deviations from intended interventions, missing outcome data, measurement of the outcome, and selection of the reported results. Both studies were rated as "low risk" on the randomization process, deviations from intended interventions, and measurement of the outcome domains. We assessed the Bodine-Baron et al. (2020) study as having "some concerns" regarding missing outcome data and "high risk" of selective outcome reporting bias. The Álvarez-Benjumea and Winter (2018) study was rated as having "some concerns" on the selective outcome reporting bias domain.
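For readers who want to see how two independent effect sizes combine into a pooled estimate of the kind reported above, the sketch below applies standard fixed-effect inverse-variance weighting with a normal-approximation confidence interval. The study-level effects and variances are hypothetical placeholders chosen only to illustrate the arithmetic; they are not the estimates from Álvarez-Benjumea and Winter (2018) or Bodine-Baron et al. (2020), and the review's actual pooling model may differ.

import math

def pool_fixed_effect(effects, variances):
    """Fixed-effect inverse-variance pooling of independent effect sizes."""
    weights = [1 / v for v in variances]
    pooled = sum(w * es for w, es in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% confidence interval
    return pooled, ci

# Hypothetical study-level standardized mean differences and variances
pooled, (lo, hi) = pool_fixed_effect(effects=[-0.10, -0.18], variances=[0.006, 0.009])
print(f"pooled SMD = {pooled:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")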
AUTHORS' CONCLUSIONS: The evidence is insufficient to determine the effectiveness of online hate speech/cyberhate interventions for reducing the creation and/or consumption of hateful content online. Gaps in the evaluation literature include the scarcity of experimental (random assignment) and rigorous quasi-experimental evaluations of online hate speech/cyberhate interventions, the need to address the creation and/or consumption of hate speech rather than the accuracy of detection/classification software, and the need to assess heterogeneity among subjects by including both extremist and non-extremist individuals in future intervention studies. We provide suggestions for how future research on online hate speech/cyberhate interventions can fill these gaps moving forward.