Bertolini Andrea, Episcopo Francesca
Scuola Superiore Sant'Anna, Dirpolis Institute, Pisa, Italy.
Università di Pisa, Department of Private Law and Scuola Superiore Sant'Anna, Dirpolis Institute, Pisa, Italy.
Front Robot AI. 2022 Apr 5;9:842213. doi: 10.3389/frobt.2022.842213. eCollection 2022.
Robotics and AI-based applications (RAI) are often said to be so technologically advanced that they, rather than the humans who design or operate them, should be held responsible for their actions. The paper aims to show that this thesis ("the exceptionalist claim"), as it stands, is both theoretically incorrect and practically inadequate. Indeed, the paper argues that the claim rests on a series of misunderstandings of the very notion and functions of "legal responsibility", which it then seeks to clarify by developing an interdisciplinary conceptual taxonomy. In doing so, it aims to set the premises for a more constructive debate over the feasibility of granting legal standing to robotic applications. After a short Introduction setting the stage of the debate, the paper addresses the ontological claim, distinguishing the philosophical from the legal debate on the notions of i) subjectivity and ii) agency, with their respective implications. The analysis leads to the conclusion that the attribution of legal subjectivity and agency is a purely fictional and technical solution adopted to facilitate legal interactions, and does not depend on the intrinsic nature of the RAI. A similar structure is maintained with respect to the notion of responsibility, addressed first from a philosophical and then from a legal perspective, to demonstrate how the latter is often used to pursue both ex ante deterrence and ex post compensation. The focus on the second objective allows us to bridge the analysis towards functional (law-and-economics-based) considerations, and to discuss how even the attribution of legal personhood may be conceived as an attempt to simplify certain legal interactions and relations. Within such a framework, the discussion of whether to attribute legal subjectivity to the machine needs to be kept entirely within the legal domain and grounded in technical (legal) considerations, to be argued through a functional, bottom-up analysis of specific classes of RAI.
That does not entail the attribution of animacy or the ascription of a moral status to the entity itself.