
Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.

Author Information

Cynthia Rudin

Affiliations

Duke University.

Publication Information

Nat Mach Intell. 2019 May;1(5):206-215. doi: 10.1038/s42256-019-0048-x. Epub 2019 May 13.


DOI: 10.1038/s42256-019-0048-x
PMID: 35603010
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9122117/
Abstract

Black box machine learning models are currently being used for high stakes decision-making throughout society, causing problems throughout healthcare, criminal justice, and in other domains. People have hoped that creating methods for explaining these black box models will alleviate some of these problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward - it is to design models that are inherently interpretable. This manuscript clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.

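The contrast the abstract draws - post-hoc explanation of a black box versus a model that is interpretable by construction - can be made concrete in code. Below is a minimal sketch, not taken from the paper; it assumes scikit-learn and a synthetic stand-in for a high-stakes tabular dataset. It fits a random-forest black box and then "explains" it with a shallow surrogate tree, and sets that against a sparse logistic regression fit directly on the labels. The surrogate's fidelity score quantifies how far the explanation can diverge from the model it claims to describe.

# A minimal sketch (not from Rudin's paper): post-hoc explanation of a
# black box vs. an inherently interpretable model, using scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a high-stakes tabular dataset.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=4,
                           random_state=0)

# Post-hoc route: fit a black box, then "explain" it with a shallow
# surrogate tree trained to mimic the black box's predictions. The
# surrogate is only an approximation; its fidelity score measures how
# unfaithful the explanation can be - the gap the abstract warns about.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print("surrogate fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate))

# Inherently interpretable route: a sparse (L1-penalized) logistic
# regression fit directly on the labels. Here the coefficients ARE the
# model; nothing is approximated after the fact.
interpretable = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
interpretable.fit(X, y)
print("sparse logistic coefficients:", interpretable.coef_.round(2))

Note that even a high fidelity score only bounds the surrogate's disagreement with the black box on this sample; individual high-stakes cases can still be routed differently by the two models, which is precisely the paper's argument for preferring the second route.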

Similar Articles

[1]
Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.

Nat Mach Intell. 2019-5

[2]
Opening the Black Box: The Promise and Limitations of Explainable Machine Learning in Cardiology.

Can J Cardiol. 2022-2

[3]
Explainable, trustworthy, and ethical machine learning for healthcare: A survey.

Comput Biol Med. 2022-10

[4]
Interpretable machine learning models for hospital readmission prediction: a two-step extracted regression tree approach.

BMC Med Inform Decis Mak. 2023-6-5

[5]
Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction.

J Imaging. 2020-5-28

[6]
Open your black box classifier.

Healthc Technol Lett. 2023-8-29

[7]
The Virtues of Interpretable Medical AI.

Camb Q Healthc Ethics. 2024-7

[8]
The Virtues of Interpretable Medical Artificial Intelligence.

Camb Q Healthc Ethics. 2022-12-16

[9]
Inherently interpretable position-aware convolutional motif kernel networks for biological sequencing data.

Sci Rep. 2023-10-11

[10]
Why did AI get this one wrong? - Tree-based explanations of machine learning model predictions.

Artif Intell Med. 2023-1

Cited By

[1]
A review of image processing and analysis of computed tomography images using deep learning methods.

Phys Eng Sci Med. 2025-9-3

[2]
Radiomics Quality Score 2.0: towards radiomics readiness levels and clinical translation for personalized medicine.

Nat Rev Clin Oncol. 2025-9-3

[3]
Towards the genome-scale discovery of bivariate monotonic classifiers.

BMC Bioinformatics. 2025-9-2

[4]
Explainable AI in medicine: challenges of integrating XAI into the future clinical routine.

Front Radiol. 2025-8-5

[5]
Feasibility of fully automatic assessment of cervical canal stenosis using MRI via deep learning.

Quant Imaging Med Surg. 2025-9-1

[6]
Transforming Population Health Screening for Atherosclerotic Cardiovascular Disease with AI-Enhanced ECG Analytics: Opportunities and Challenges.

Curr Atheroscler Rep. 2025-9-1

[7]
The efficacy of machine learning algorithms in evaluating factors associated with shunt-dependent hydrocephalus after subarachnoid hemorrhage: a systematic review and meta-analysis.

Neurosurg Rev. 2025-9-1

[8]
Improving deceased donor kidney utilization: predicting risk of nonuse with interpretable models.

Front Artif Intell. 2025-8-13

[9]
AdapTor: Adaptive Topological Regression for quantitative structure-activity relationship modeling.

J Cheminform. 2025-8-28

[10]
Beyond Post hoc Explanations: A Comprehensive Framework for Accountable AI in Medical Imaging Through Transparency, Interpretability, and Explainability.

Bioengineering (Basel). 2025-8-15

