Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology, Eindhoven, The Netherlands.
Sci Eng Ethics. 2021 Feb 19;27(1):15. doi: 10.1007/s11948-020-00277-3.
In the development of governmental policy for artificial intelligence (AI) that is informed by ethics, one avenue currently pursued is that of drawing on "AI Ethics Principles". However, these AI Ethics Principles often fail to be actioned in governmental policy. This paper proposes a novel framework for the development of "Actionable Principles for AI". The approach acknowledges the relevance of AI Ethics Principles and homes in on methodological elements to increase their practical implementability in policy processes. As a case study, elements are extracted from the development process of the Ethics Guidelines for Trustworthy AI of the European Commission's "High-Level Expert Group on AI". Subsequently, these elements are expanded on and evaluated in light of their ability to contribute to a prototype framework for the development of "Actionable Principles for AI". The paper proposes the following three propositions for the formation of such a prototype framework: (1) preliminary landscape assessments; (2) multi-stakeholder participation and cross-sectoral feedback; and (3) mechanisms to support implementation and operationalizability.