Luisa Damiano, Paul Dumouchel
Epistemology of the Sciences of the Artificial Research Group, Department of Ancient and Modern Civilizations, University of Messina, Messina, Italy.
Graduate School of Core Ethics and Frontier Sciences, Ritsumeikan University, Kyoto, Japan.
Front Psychol. 2018 Mar 26;9:468. doi: 10.3389/fpsyg.2018.00468. eCollection 2018.
Social robotics entertains a particular relationship with anthropomorphism, which it sees neither as a cognitive error nor as a sign of immaturity. Rather, it considers that this common human tendency, hypothesized to have evolved because it favored cooperation among early humans, can be used today to facilitate social interactions between humans and a new type of cooperative and interactive agents: social robots. This approach leads social robotics to focus research on engineering robots that activate anthropomorphic projections in users. The objective is to give robots "social presence" and "social behaviors" that are sufficiently credible for human users to engage in comfortable and potentially long-lasting relations with these machines. This choice of "applied anthropomorphism" as a research methodology exposes the artifacts produced by social robotics to ethical condemnation: social robots are judged to be a "cheating" technology, as they generate in users the illusion of reciprocal social and affective relations. This article takes a position in this debate, not only developing a series of arguments relevant to the philosophy of mind, the cognitive sciences, and robotic AI, but also asking what social robotics can teach us about anthropomorphism. On this basis, we propose a theoretical perspective that characterizes anthropomorphism as a basic mechanism of interaction, and we rebut the ethical reflections that condemn "anthropomorphism-based" social robots. To address the relevant ethical issues, we promote a critical, experimentally based ethical approach to social robotics, "synthetic ethics," which aims to allow humans to use social robots for two main goals: self-knowledge and moral growth.