Custers Bart, Lahmann Henning, Scott Benjamyn I.
eLaw - the Center for Law and Digital Technologies, Leiden University, Leiden, The Netherlands.
International Institute of Air and Space Law, and eLaw - the Center for Law and Digital Technologies, Leiden University, Leiden, The Netherlands.
AI Soc. 2025;40(5):4035-4050. doi: 10.1007/s00146-024-02137-1. Epub 2025 Jan 11.
Complex technologies such as Artificial Intelligence (AI) can cause harm, raising the question of who is liable for the harm caused. Research has identified multiple liability gaps (i.e., unsatisfactory outcomes when applying existing liability rules) in legal frameworks. In this paper, the concepts of shared responsibilities and fiduciary duties are explored as avenues to address liability gaps. The development, deployment and use of complex technologies are not clearly distinguishable stages, as often suggested, but are processes of cooperation and co-creation. At the intersections of these stages, shared responsibilities and fiduciary duties of multiple actors can be observed. Although no single actor has complete control or a complete overview, many actors have some control or influence and, therefore, responsibilities based on fault, prevention or benefit. Shared responsibilities and fiduciary duties can turn liability gaps into liability overlaps. These concepts could be implemented in tort and contract law by amending existing law (e.g., by assuming that all stakeholders are liable unless they can prove they did not owe a duty of care) and by creating more room for partial liability reflecting partial responsibilities (e.g., a responsibility to signal or identify an issue without a corresponding responsibility to solve that issue). This approach better aligns legal liabilities with responsibilities, increases legal certainty, and fosters cooperation and understanding between actors, improving the quality and safety of technologies. However, it may not solve all liability gaps, may have chilling effects on innovation, and may require further detailing through case law.