Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany.
Toulouse School of Economics (TSM-R, CNRS), University of Toulouse Capitole, Toulouse, France.
Nat Hum Behav. 2021 Jun;5(6):679-685. doi: 10.1038/s41562-021-01128-2. Epub 2021 Jun 3.
As machines powered by artificial intelligence (AI) influence humans' behaviour in ways that are both like and unlike the ways humans influence each other, worry emerges about the corrupting power of AI agents. To estimate the empirical validity of these fears, we review the available evidence from behavioural science, human-computer interaction and AI research. We propose four main social roles through which both humans and machines can influence ethical behaviour. These are: role model, advisor, partner and delegate. When AI agents become influencers (role models or advisors), their corrupting power may not exceed the corrupting power of humans (yet). However, AI agents acting as enablers of unethical behaviour (partners or delegates) have many characteristics that may let people reap unethical benefits while feeling good about themselves, a potentially perilous interaction. On the basis of these insights, we outline a research agenda to gain behavioural insights for better AI oversight.