
Algorithmic discrimination causes less moral outrage than human discrimination.

Author information

Bigman Yochanan E, Wilson Desman, Arnestad Mads N, Waytz Adam, Gray Kurt

Affiliations

Department of Psychology, Yale University.

Kellogg School of Management, Northwestern University.

Publication information

J Exp Psychol Gen. 2023 Jan;152(1):4-27. doi: 10.1037/xge0001250. Epub 2022 Jun 27.

Abstract

Companies and governments are using algorithms to improve decision-making for hiring, medical treatments, and parole. Algorithms hold promise for overcoming human biases in decision-making, but they frequently make decisions that discriminate. Media coverage suggests that people are morally outraged by algorithmic discrimination, but here we examine whether people are less outraged by algorithmic discrimination than by human discrimination. Eight studies test this hypothesis in the context of gender discrimination in hiring practices across diverse participant groups (online samples, a quasi-representative sample, and a sample of tech workers). We find that people are less morally outraged by algorithmic (vs. human) discrimination and are less likely to hold the organization responsible. The algorithmic outrage deficit is driven by reduced attribution of prejudicial motivation to algorithms. Just as algorithms dampen outrage, they also dampen praise: companies enjoy less of a reputational boost when their algorithms (vs. employees) reduce gender inequality. Our studies also reveal a downstream consequence of the algorithmic outrage deficit: people are less likely to find the company legally liable when the discrimination was caused by an algorithm (vs. a human). We discuss the theoretical and practical implications of these results, including the potential weakening of collective action to address systemic discrimination. (PsycInfo Database Record (c) 2023 APA, all rights reserved).

