von Schenk Alicia, Klockmann Victor, Bonnefon Jean-François, Rahwan Iyad, Köbis Nils
Julius-Maximilians-Universität Würzburg, Department of Economics, Sanderring 2, 97070 Würzburg, Germany.
Center for Humans and Machines, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany.
iScience. 2024 Jun 27;27(7):110201. doi: 10.1016/j.isci.2024.110201. eCollection 2024 Jul 19.
Humans, aware of the social costs associated with false accusations, are generally hesitant to accuse others of lying. Our study shows how lie-detection algorithms disrupt this social dynamic. We develop a supervised machine-learning classifier that surpasses human accuracy and conduct a large-scale incentivized experiment manipulating the availability of this lie-detection algorithm. In the absence of algorithmic support, people are reluctant to accuse others of lying, but when the algorithm becomes available, a minority actively seeks its prediction and consistently relies on it for accusations. Although those who request machine predictions are not inherently more prone to accuse, they follow predictions that suggest accusation more willingly than participants who receive such predictions without actively seeking them.