Blinding to Circumvent Human Biases: Deliberate Ignorance in Humans, Institutions, and Machines.

Affiliation

Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany.

Publication Information

Perspect Psychol Sci. 2024 Sep;19(5):849-859. doi: 10.1177/17456916231188052. Epub 2023 Sep 5.

Abstract

Inequalities and injustices are thorny issues in liberal societies, manifesting in forms such as the gender-pay gap; sentencing discrepancies among Black, Hispanic, and White defendants; and unequal medical-resource distribution across ethnicities. One cause of these inequalities is unconsciously formed associations between social groups and attributions such as "nurturing," "lazy," or "uneducated." One strategy to counteract implicit and explicit human biases is delegating crucial decisions, such as how to allocate benefits, resources, or opportunities, to algorithms. Algorithms, however, are not necessarily impartial and objective. Although they can detect and mitigate human biases, they can also perpetuate and even amplify existing inequalities and injustices. We explore how a philosophical thought experiment, Rawls's "veil of ignorance," and a psychological phenomenon, deliberate ignorance, can help shield individuals, institutions, and algorithms from biases. We discuss the benefits and drawbacks of methods for shielding human and artificial decision makers from potentially biasing information. We then broaden our discussion beyond the issues of bias and fairness and turn to a research agenda aimed at improving human judgment accuracy with the assistance of algorithms that conceal information that has the potential to undermine performance. Finally, we propose interdisciplinary research questions.
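
To make the abstract's idea of shielding an algorithmic decision maker concrete, the minimal sketch below (an illustration, not taken from the article) trains a classifier only on task-relevant features while protected attributes are deliberately withheld. The dataset, the column names (gender, ethnicity, test_score, years_exp, hired), and the scikit-learn model choice are all hypothetical placeholders.

```python
# Minimal sketch (assumption, not from the article): "blinding" a statistical model
# by withholding protected attributes from its inputs. All column names and values
# are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical applicant records: protected attributes next to task-relevant features.
data = pd.DataFrame({
    "gender":     ["F", "M", "F", "M", "F", "M", "F", "M"],
    "ethnicity":  ["A", "B", "A", "B", "B", "A", "B", "A"],
    "test_score": [88, 72, 95, 60, 78, 83, 91, 55],
    "years_exp":  [4, 2, 6, 1, 3, 5, 7, 1],
    "hired":      [1, 0, 1, 0, 1, 1, 1, 0],
})

PROTECTED = ["gender", "ethnicity"]            # information the model is deliberately kept ignorant of
X = data.drop(columns=PROTECTED + ["hired"])   # the "blinded" feature set
y = data["hired"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))

# Caveat, echoing the article's point that algorithms are not automatically impartial:
# dropping protected columns does not remove proxies such as correlated features,
# so blinding alone does not guarantee fair outcomes.
```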

Similar Articles

Veil-of-ignorance reasoning favors the greater good.
Proc Natl Acad Sci U S A. 2019 Nov 26;116(48):23989-23995. doi: 10.1073/pnas.1910125116. Epub 2019 Nov 12.

References Cited in This Article

1
Deliberate ignorance-a barrier for information interventions targeting reduced meat consumption?
Psychol Health. 2024 Nov;39(11):1656-1673. doi: 10.1080/08870446.2023.2182895. Epub 2023 Mar 1.
2
Resolving content moderation dilemmas between free speech and harmful misinformation.
Proc Natl Acad Sci U S A. 2023 Feb 14;120(7):e2210666120. doi: 10.1073/pnas.2210666120. Epub 2023 Feb 7.
