University of Southern California, USA.
Curr Opin Psychol. 2022 Oct;47:101382. doi: 10.1016/j.copsyc.2022.101382. Epub 2022 Jun 11.
Advances in artificial intelligence (AI) enable new ways of exercising and experiencing power by automating interpersonal tasks such as interviewing and hiring workers, managing and evaluating work, setting compensation, and negotiating deals. As these techniques become more sophisticated, they increasingly support personalization where users can "tell" their AI assistants not only what to do, but how to do it: in effect, dictating the ethical values that govern the assistant's behavior. Importantly, these new forms of power could bypass existing social and regulatory checks on unethical behavior by introducing a new agent into the equation. Organization research suggests that acting through human agents (i.e., the problem of indirect agency) can undermine ethical forecasting such that actors believe they are acting ethically, yet a) show less benevolence for the recipients of their power, b) receive less blame for ethical lapses, and c) anticipate less retribution for unethical behavior. We review a series of studies illustrating how, across a wide range of social tasks, people may behave less ethically and be more willing to deceive when acting through AI agents. We conclude by examining boundary conditions and discussing potential directions for future research.