Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany.
Perspect Psychol Sci. 2024 Sep;19(5):849-859. doi: 10.1177/17456916231188052. Epub 2023 Sep 5.
Inequalities and injustices are thorny issues in liberal societies, manifesting in forms such as the gender-pay gap; sentencing discrepancies among Black, Hispanic, and White defendants; and unequal medical-resource distribution across ethnicities. One cause of these inequalities is unconsciously formed associations between social groups and attributions such as "nurturing," "lazy," or "uneducated." One strategy to counteract implicit and explicit human biases is delegating crucial decisions, such as how to allocate benefits, resources, or opportunities, to algorithms. Algorithms, however, are not necessarily impartial and objective. Although they can detect and mitigate human biases, they can also perpetuate and even amplify existing inequalities and injustices. We explore how a philosophical thought experiment, Rawls's "veil of ignorance," and a psychological phenomenon, deliberate ignorance, can help shield individuals, institutions, and algorithms from biases. We discuss the benefits and drawbacks of methods for shielding human and artificial decision makers from potentially biasing information. We then broaden our discussion beyond the issues of bias and fairness and turn to a research agenda aimed at improving human judgment accuracy with the assistance of algorithms that conceal information that has the potential to undermine performance. Finally, we propose interdisciplinary research questions.