Meaningful Human Control over Autonomous Systems: A Philosophical Account.

Author Information

Santoni de Sio Filippo, van den Hoven Jeroen

Affiliation

Section Ethics/Philosophy of Technology, Faculty of Technology, Policy and Management, Delft University of Technology, Delft, Netherlands.

Publication Information

Front Robot AI. 2018 Feb 28;5:15. doi: 10.3389/frobt.2018.00015. eCollection 2018.

Abstract

Debates on lethal autonomous weapon systems have proliferated in the past 5 years. Ethical concerns have been voiced about a possible rise in the number of wrongs and crimes in military operations and about the creation of a "responsibility gap" for harms caused by these systems. To address these concerns, the principle of "meaningful human control" has been introduced in the legal-political debate; according to this principle, humans, not computers and their algorithms, should ultimately remain in control of, and thus morally responsible for, relevant decisions about (lethal) military operations. However, policy-makers and technical designers lack a detailed theory of what "meaningful human control" exactly means. In this paper, we lay the foundation of a philosophical account of meaningful human control, based on the concept of "guidance control" as elaborated in the philosophical debate on free will and moral responsibility. Following the ideals of "Responsible Innovation" and "Value-sensitive Design," our account of meaningful human control is cast in the form of design requirements. We identify two general necessary conditions to be satisfied for an autonomous system to remain under meaningful human control: first, a "tracking" condition, according to which the system should be able to respond to both the relevant moral reasons of the humans designing and deploying the system and the relevant facts in the environment in which the system operates; second, a "tracing" condition, according to which the system should be designed in such a way as to grant the possibility to always trace back the outcome of its operations to at least one human along the chain of design and operation. As we think that meaningful human control can be one of the central notions in ethics of robotics and AI, in the last part of the paper, we start exploring the implications of our account for the design and use of non-military autonomous systems, for instance, self-driving cars.

Similar Articles

1
Meaningful Human Control over Autonomous Systems: A Philosophical Account.
Front Robot AI. 2018 Feb 28;5:15. doi: 10.3389/frobt.2018.00015. eCollection 2018.
2
[The origin of informed consent].
Acta Otorhinolaryngol Ital. 2005 Oct;25(5):312-27.
3
Realising Meaningful Human Control Over Automated Driving Systems: A Multidisciplinary Approach.
Minds Mach (Dordr). 2022 Jul 28:1-25. doi: 10.1007/s11023-022-09608-8.
4
Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible.
Camb Q Healthc Ethics. 2021 Jul;30(3):435-447. doi: 10.1017/S0963180120000985.
5
Responsibility for crashes of autonomous vehicles: an ethical analysis.
Sci Eng Ethics. 2015 Jun;21(3):619-30. doi: 10.1007/s11948-014-9565-5. Epub 2014 Jun 11.
6
Resolving responsibility gaps for lethal autonomous weapon systems.
Front Big Data. 2022 Dec 6;5:1038507. doi: 10.3389/fdata.2022.1038507. eCollection 2022.
7
Designing robots that do no harm: understanding the challenges of Ethics Robots.
AI Ethics. 2023 Apr 17:1-9. doi: 10.1007/s43681-023-00283-8.

Cited By

1
Capturing the design space of meaningful human control in military systems using repertory grids.
Front Psychol. 2025 Jul 16;16:1536667. doi: 10.3389/fpsyg.2025.1536667. eCollection 2025.
2
We need better images of AI and better conversations about AI.
AI Soc. 2025;40(5):3615-3626. doi: 10.1007/s00146-024-02101-z. Epub 2024 Oct 29.
3
Human control of AI systems: from supervision to teaming.
AI Ethics. 2025;5(2):1535-1548. doi: 10.1007/s43681-024-00489-4. Epub 2024 May 28.
4
A metaphysical account of agency for technology governance.
AI Soc. 2025;40(3):1723-1734. doi: 10.1007/s00146-024-01941-z. Epub 2024 Apr 21.
5
Establishing trust in artificial intelligence-driven autonomous healthcare systems: an expert-guided framework.
Front Digit Health. 2024 Nov 27;6:1474692. doi: 10.3389/fdgth.2024.1474692. eCollection 2024.
6
Responsibility Gap(s) Due to the Introduction of AI in Healthcare: An Ubuntu-Inspired Approach.
Sci Eng Ethics. 2024 Aug 1;30(4):34. doi: 10.1007/s11948-024-00501-4.
7
Owning Decisions: AI Decision-Support and the Attributability-Gap.
Sci Eng Ethics. 2024 Jun 18;30(4):27. doi: 10.1007/s11948-024-00485-1.
8
Equipping AI-decision-support-systems with emotional capabilities? Ethical perspectives.
Front Artif Intell. 2024 May 31;7:1398395. doi: 10.3389/frai.2024.1398395. eCollection 2024.
9
Large Language Models in Oncology: Revolution or Cause for Concern?
Curr Oncol. 2024 Mar 29;31(4):1817-1830. doi: 10.3390/curroncol31040137.
10
Physician's autonomy in the face of AI support: walking the ethical tightrope.
Front Med (Lausanne). 2024 Mar 28;11:1324963. doi: 10.3389/fmed.2024.1324963. eCollection 2024.
