Charlotte Stix
Philosophy and Ethics Group, Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology, Eindhoven, The Netherlands.
AI Ethics. 2022;2(3):463-476. doi: 10.1007/s43681-021-00093-w. Epub 2021 Sep 29.
Governance efforts for artificial intelligence (AI) are taking on increasingly concrete forms, drawing on a variety of approaches and instruments, from hard regulation to standardisation efforts, aimed at mitigating challenges from high-risk AI systems. To implement these and other efforts, new institutions will need to be established at the national and international levels. This paper sketches a blueprint of such institutions and conducts an in-depth investigation of three key components of any future AI governance institution, exploring their benefits and associated drawbacks: (1) "purpose", relating to the institution's overall goals and scope of work or mandate; (2) "geography", relating to questions of participation and the reach of jurisdiction; and (3) "capacity", the infrastructural and human make-up of the institution. Subsequently, the paper highlights noteworthy aspects of various institutional roles, specifically around questions of institutional purpose, and frames what these could look like in practice by placing these debates in a European context and proposing different iterations of a European AI Agency. Finally, conclusions and future research directions are proposed.