Arthur S. Jago, Kristin Laurin
University of Washington Tacoma, USA.
The University of British Columbia, Vancouver, Canada.
Pers Soc Psychol Bull. 2022 Apr;48(4):582-595. doi: 10.1177/01461672211016187. Epub 2021 May 28.
Although their implementation has inspired optimism in many domains, algorithms can both systematize discrimination and obscure its presence. In seven studies, we test the hypothesis that people nonetheless tend to assume algorithms discriminate less than humans do, because they believe algorithms are both more accurate and less emotional evaluators. As a result of these assumptions, people are more interested in being evaluated by an algorithm when they anticipate that discrimination against them is possible. Finally, we investigate the degree to which information about how algorithms train on data sets consisting of human judgments and decisions changes people's heightened preference for algorithms when they themselves anticipate discrimination. Taken together, these studies indicate that algorithms appear less discriminatory than humans, making people (potentially erroneously) more comfortable with their use.