

The assessment list for trustworthy artificial intelligence: A review and recommendations.

Authors

Radclyffe Charles, Ribeiro Mafalda, Wortham Robert H

Affiliations

School of Engineering, University of Bristol, Bristol, United Kingdom.

Centre for Accountable, Responsible, and Transparent AI (ART-AI), University of Bath, Bath, United Kingdom.

Publication

Front Artif Intell. 2023 Mar 9;6:1020592. doi: 10.3389/frai.2023.1020592. eCollection 2023.

Abstract

In July 2020, the European Commission's High-Level Expert Group on AI (HLEG-AI) published the Assessment List for Trustworthy Artificial Intelligence (ALTAI) tool, enabling organizations to self-assess the fit of their AI systems and surrounding governance to the "7 Principles for Trustworthy AI." Prior research on ALTAI has focused primarily on specific application areas, but there has yet to be a comprehensive analysis with broader recommendations aimed at proto-regulators and industry practitioners. This paper therefore starts with an overview of the tool, including an assessment of its strengths and limitations. The authors then consider the extent to which the ALTAI tool is likely to be of utility to industry in improving understanding of the risks inherent in AI systems and of best practices to mitigate such risks. It is highlighted how research and practices from fields such as environmental, social, and governance (ESG) can benefit efforts to address similar challenges in ethical AI development and deployment. Also explored is how likely the tool is to be taken up by industry, considering various factors pertaining to its adoption. Finally, the authors propose recommendations, applicable internationally to bodies similar to the HLEG-AI, regarding the gaps that need to be addressed between high-level principles and practical support for those on the front line developing or commercializing AI tools. In all, this work provides a comprehensive analysis of the ALTAI tool, as well as recommendations to relevant stakeholders, with the broader aim of promoting more widespread adoption of such a tool in industry.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a9e8/10034015/4044174b3574/frai-06-1020592-g0001.jpg
