
Artificial intelligence explainability: the technical and ethical dimensions.

Affiliation

Department of Computer Science, University of York, Deramore Lane, York YO10 5GH, UK.

Publication information

Philos Trans A Math Phys Eng Sci. 2021 Oct 4;379(2207):20200363. doi: 10.1098/rsta.2020.0363. Epub 2021 Aug 16.

Abstract

In recent years, several new technical methods have been developed to make AI models more transparent and interpretable. These techniques are often referred to collectively as 'AI explainability' or 'XAI' methods. This paper presents an overview of XAI methods and links them to stakeholder purposes for seeking an explanation. Because the underlying stakeholder purposes are broadly ethical in nature, we see this analysis as a contribution towards bringing together the technical and ethical dimensions of XAI. We emphasize that use of XAI methods must be linked to explanations of the human decisions made during the development life cycle. Situated within that wider accountability framework, our analysis may offer a helpful starting point for designers, safety engineers, service providers and regulators who need to make practical judgements about which XAI methods to employ or to require. This article is part of the theme issue 'Towards symbiotic autonomous systems'.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7b72/8366909/7ee1f147f492/rsta20200363f01.jpg
