Companies Committed to Responsible AI: From Principles towards Implementation and Regulation?

Author Information

de Laat Paul B

Affiliation

University of Groningen, Groningen, Netherlands.

Publication Information

Philos Technol. 2021;34(4):1135-1193. doi: 10.1007/s13347-021-00474-3. Epub 2021 Oct 6.

Abstract

The term 'responsible AI' has been coined to denote AI that is fair and non-biased, transparent and explainable, secure and safe, privacy-proof, accountable, and to the benefit of mankind. Since 2016, a great many organizations have pledged allegiance to such principles. Amongst them are 24 AI companies that did so by posting a commitment of the kind on their website and/or by joining the 'Partnership on AI'. By means of a comprehensive web search, two questions are addressed by this study: (1) Did the signatory companies actually try to implement these principles in practice, and if so, how? (2) What are their views on the role of other societal actors in steering AI towards the stated principles (the issue of regulation)? It is concluded that some three of the largest amongst them have carried out valuable steps towards implementation, in particular by developing and open sourcing new software tools. To them, charges of mere 'ethics washing' do not apply. Moreover, some 10 companies from both the USA and Europe have publicly endorsed the position that apart from self-regulation, AI is in urgent need of governmental regulation. They mostly advocate focussing regulation on high-risk applications of AI, a policy which to them represents the sensible middle course between laissez-faire on the one hand and outright bans on technologies on the other. The future shaping of standards, ethical codes, and laws as a result of these regulatory efforts remains, of course, to be determined.

Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/99e1/8492454/5b6265f92160/13347_2021_474_Fig1_HTML.jpg
