Santoni de Sio Filippo, van den Hoven Jeroen
Section Ethics/Philosophy of Technology, Faculty Technology Policy and Management, Delft University of Technology, Delft, Netherlands.
Front Robot AI. 2018 Feb 28;5:15. doi: 10.3389/frobt.2018.00015. eCollection 2018.
Debates on lethal autonomous weapon systems have proliferated in the past 5 years. Ethical concerns have been voiced about a possible rise in the number of wrongs and crimes in military operations and about the creation of a "responsibility gap" for harms caused by these systems. To address these concerns, the principle of "meaningful human control" has been introduced in the legal-political debate; according to this principle, humans, not computers and their algorithms, should ultimately remain in control of, and thus morally responsible for, relevant decisions about (lethal) military operations. However, policy-makers and technical designers lack a detailed theory of what "meaningful human control" exactly means. In this paper, we lay the foundation of a philosophical account of meaningful human control, based on the concept of "guidance control" as elaborated in the philosophical debate on free will and moral responsibility. Following the ideals of "Responsible Innovation" and "Value-sensitive Design," our account of meaningful human control is cast in the form of design requirements. We identify two general necessary conditions that must be satisfied for an autonomous system to remain under meaningful human control: first, a "tracking" condition, according to which the system should be able to respond to both the relevant moral reasons of the humans designing and deploying it and the relevant facts in the environment in which it operates; second, a "tracing" condition, according to which the system should be designed so that the outcome of its operations can always be traced back to at least one human along the chain of design and operation. As we think that meaningful human control can be one of the central notions in the ethics of robotics and AI, in the last part of the paper we begin exploring the implications of our account for the design and use of non-military autonomous systems, for instance, self-driving cars.