Biases in AI: acknowledging and addressing the inevitable ethical issues.

Author information

Hofmann Bjørn

Affiliations

Centre of Medical Ethics, The University of Oslo, Oslo, Norway.

Institute of the Health Sciences, The Norwegian University of Science and Technology (NTNU), Gjøvik, Norway.

Publication information

Front Digit Health. 2025 Aug 20;7:1614105. doi: 10.3389/fdgth.2025.1614105. eCollection 2025.

Abstract

Biases in artificial intelligence (AI) systems pose a range of ethical issues. The myriad biases in AI systems are briefly reviewed and divided into three main categories: input bias, system bias, and application bias. These biases pose a series of basic ethical challenges: injustice, bad output/outcome, loss of autonomy, transformation of basic concepts and values, and erosion of accountability. A review of the many ways to identify, measure, and mitigate these biases reveals commendable efforts to avoid or reduce bias; however, it also highlights the persistence of unresolved biases. Residual and undetected biases present epistemic challenges with substantial ethical implications. The article further investigates whether the general principles, checklists, guidelines, frameworks, or regulations of AI ethics could address the identified ethical issues with bias. Unfortunately, the depth and diversity of these challenges often exceed the capabilities of existing approaches. Consequently, the article suggests that we must acknowledge and accept some residual ethical issues related to biases in AI systems. By utilizing insights from ethics and moral psychology, we can better navigate this landscape. To maximize the benefits and minimize the harms of biases in AI, it is imperative to identify and mitigate existing biases and remain transparent about the consequences of those we cannot eliminate. This necessitates close collaboration between scientists and ethicists.

Graphical abstract: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/88d2/12405166/ac7b9e08b47d/fdgth-07-1614105-ga001.jpg
