McKee Kevin R, Bai Xuechunzi, Fiske Susan T
DeepMind, London N1C 4DN, UK.
Department of Psychology, Princeton University, Princeton, NJ 08540, USA.
iScience. 2023 Jul 4;26(8):107256. doi: 10.1016/j.isci.2023.107256. eCollection 2023 Aug 18.
Artificial intelligence (A.I.) increasingly suffuses everyday life. However, people are frequently reluctant to interact with A.I. systems. This challenges both the deployment of beneficial A.I. technology and the development of deep learning systems that depend on humans for oversight, direction, and regulation. Nine studies (N = 3,300) demonstrate that social-cognitive processes guide human interactions across a diverse range of real-world A.I. systems. Across studies, perceived warmth and competence emerge prominently in participants' impressions of A.I. systems. Judgments of warmth and competence systematically depend on human-A.I. interdependence and autonomy. In particular, participants perceive systems that optimize interests aligned with human interests as warmer, and systems that operate independently from human direction as more competent. Finally, a prisoner's dilemma game shows that warmth and competence judgments predict participants' willingness to cooperate with a deep-learning system. These results underscore the generality of intent detection to perceptions of a broad array of algorithmic actors.