Zhang Yuyan, Wu Jiahua, Yu Feng, Xu Liying
Department of Psychology, School of Philosophy, Wuhan University, Wuhan 430079, China.
School of Marxism, Tsinghua University, Beijing 100084, China.
Behav Sci (Basel). 2023 Feb 16;13(2):181. doi: 10.3390/bs13020181.
Artificial intelligence has rapidly integrated into human society, and its moral decision-making has begun to seep into our lives. The significance of research on moral judgments of artificial intelligence behavior is therefore becoming increasingly prominent. The present research examines how people make moral judgments about the behavior of artificial intelligence agents in a trolley dilemma, where people are usually driven by controlled cognitive processes, and in a footbridge dilemma, where people are usually driven by automatic emotional responses. Across three experiments (N = 626), we found that in the trolley dilemma (Experiment 1), the agent type rather than the actual action influenced people's moral judgments. Specifically, participants rated AI agents' behavior as more immoral and more deserving of blame than humans' behavior. Conversely, in the footbridge dilemma (Experiment 2), the actual action rather than the agent type influenced people's moral judgments. Specifically, participants rated action (a utilitarian act) as less moral and less permissible, and as more morally wrong and blameworthy, than inaction (a deontological act). A mixed-design experiment (Experiment 3) yielded a pattern of results consistent with Experiments 1 and 2. These findings suggest that people adopt different modes of moral judgment toward artificial intelligence in different types of moral dilemmas, which may be explained by the fact that people engage different processing systems when making moral judgments in these different dilemma types.