Mökander Jakob, Axente Maria, Casolari Federico, Floridi Luciano
Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS UK.
UK All Party Parliamentary Group on AI (APPG AI), London, UK.
Minds Mach (Dordr). 2022;32(2):241-268. doi: 10.1007/s11023-021-09577-4. Epub 2021 Nov 5.
The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring plans that providers must establish to document the performance of high-risk AI systems throughout their lifetimes. We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing, albeit in other words. Our analysis offers two main contributions. First, by describing the enforcement mechanisms included in the AIA in terminology borrowed from existing literature on AI auditing, we help providers of AI systems understand how they can prove adherence to the requirements set out in the AIA in practice. Second, by examining the AIA from an auditing perspective, we seek to provide transferable lessons from previous research about how to refine further the regulatory approach outlined in the AIA. We conclude by highlighting seven aspects of the AIA where amendments (or simply clarifications) would be helpful. These include, above all, the need to translate vague concepts into verifiable criteria and to strengthen the institutional safeguards concerning conformity assessments based on internal checks.