

Investigating accountability for Artificial Intelligence through risk governance: A workshop-based exploratory study.

Author information

Hohma Ellen, Boch Auxane, Trauth Rainer, Lütge Christoph

Affiliations

School of Social Sciences and Technology, Institute for Ethics in AI, Technical University of Munich, Munich, Germany.

School of Engineering and Design, Chair of Automotive Technology, Technical University of Munich, Munich, Germany.

Publication information

Front Psychol. 2023 Jan 25;14:1073686. doi: 10.3389/fpsyg.2023.1073686. eCollection 2023.

Abstract

INTRODUCTION

With the growing prevalence of AI-based systems and the development of specific regulations and standardizations in response, accountability for consequences resulting from the development or use of these technologies becomes increasingly important. However, concrete strategies and approaches for solving the related challenges seem not to have been suitably developed for, or communicated to, AI practitioners.

METHODS

We aim to help close this gap by studying how risk governance methods can be (re)used to administer AI accountability. We chose an exploratory workshop-based methodology to investigate the current challenges for accountability and risk management approaches raised by AI practitioners from academia and industry.

RESULTS AND DISCUSSION

Our interactive study design revealed various insights into which aspects do and do not work for handling AI risks in practice. From the gathered perspectives, we derived five required characteristics for AI risk management methodologies (balance, extendability, representation, transparency, and long-term orientation) and identified demands for clarification and action (e.g., defining risk and accountabilities, or standardizing risk governance and management) needed to move AI accountability from a conceptual stage to industry practice.

