

Assumptions About Algorithms' Capacity for Discrimination.

Author Information

Jago Arthur S, Laurin Kristin

Affiliations

University of Washington Tacoma, USA.

The University of British Columbia, Vancouver, Canada.

Publication Information

Pers Soc Psychol Bull. 2022 Apr;48(4):582-595. doi: 10.1177/01461672211016187. Epub 2021 May 28.

Abstract

Although their implementation has inspired optimism in many domains, algorithms can both systematize discrimination and obscure its presence. In seven studies, we test the hypothesis that people instead tend to assume algorithms discriminate less than humans, due to beliefs that algorithms tend to be both more accurate and less emotional evaluators. As a result of these assumptions, people are more interested in being evaluated by an algorithm when they anticipate that discrimination against them is possible. Finally, we investigate the degree to which information about how algorithms train using data sets consisting of human judgments and decisions changes people's increased preferences for algorithms when they themselves anticipate discrimination. Taken together, these studies indicate that algorithms appear less discriminatory than humans, making people (potentially erroneously) more comfortable with their use.

