Rotterdam School of Management, Department of Marketing Management, Erasmus University, Rotterdam 3062 PA, The Netherlands.
Questrom School of Business, Department of Marketing, Boston University, Boston, MA 02215.
Proc Natl Acad Sci U S A. 2024 Apr 16;121(16):e2317602121. doi: 10.1073/pnas.2317602121. Epub 2024 Apr 10.
Algorithmic bias occurs when algorithms incorporate biases in the human decisions on which they are trained. We find that people see more of their biases (e.g., age, gender, race) in the decisions of algorithms than in their own decisions. Research participants saw more bias in the decisions of algorithms trained on their decisions than in their own decisions, even when those decisions were the same and participants were incentivized to reveal their true beliefs. By contrast, participants saw as much bias in the decisions of algorithms trained on their decisions as in the decisions of other participants and algorithms trained on the decisions of other participants. Cognitive psychological processes and motivated reasoning help explain why people see more of their biases in algorithms. Research participants most susceptible to bias blind spot were most likely to see more bias in algorithms than self. Participants were also more likely to perceive algorithms than themselves to have been influenced by irrelevant biasing attributes (e.g., race) but not by relevant attributes (e.g., user reviews). Because participants saw more of their biases in algorithms than themselves, they were more likely to make debiasing corrections to decisions attributed to an algorithm than to themselves. Our findings show that bias is more readily perceived in algorithms than in self and suggest how to use algorithms to reveal and correct biased human decisions.