


The Next Generation of Medical Decision Support: A Roadmap Toward Transparent Expert Companions.

Authors

Bruckert Sebastian, Finzel Bettina, Schmid Ute

Affiliation

Cognitive Systems, University of Bamberg, Bamberg, Germany.

Publication

Front Artif Intell. 2020 Sep 24;3:507973. doi: 10.3389/frai.2020.507973. eCollection 2020.

DOI: 10.3389/frai.2020.507973
PMID: 33733193
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7861251/
Abstract

Increasing quality and performance of artificial intelligence (AI) in general, and machine learning (ML) in particular, has been followed by a wider use of these approaches in everyday life. As part of this development, ML classifiers have also gained importance for diagnosing diseases within biomedical engineering and the medical sciences. However, many of these ubiquitous high-performing ML algorithms have a black-box nature, leading to opaque and incomprehensible systems that complicate human interpretation of single predictions or of the whole prediction process. This poses a serious challenge for human decision makers, who must develop trust — much needed in life-changing decision tasks. This paper is designed to answer the question of how expert companion systems for decision support can be designed to be interpretable, and therefore transparent and comprehensible, for humans. In addition, an approach to interactive ML and human-in-the-loop learning is demonstrated, in order to integrate human expert knowledge into ML models so that humans and machines act as companions within a critical decision task. We especially address the problem of alignment between ML classifiers and their human users as a prerequisite for semantically relevant and useful explanations as well as interactions. Our roadmap paper presents and discusses an interdisciplinary yet integrated Comprehensible Artificial Intelligence (cAI) transition framework with regard to the task of medical diagnosis. We explain and integrate the relevant concepts and research areas to provide the reader with a roadmap for achieving the transition from opaque black-box models to interactive, transparent, comprehensible, and trustworthy systems. To make our approach tangible, we present suitable state-of-the-art methods for the medical domain and include a realization concept for our framework. The emphasis is on the concept of Mutual Explanations (ME), which we introduce as a dialog-based, incremental process in order to provide human ML users with trust, but also with stronger participation in the learning process.
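The human-in-the-loop cycle the abstract describes — the model predicts, an expert confirms or corrects, and corrections flow back into the model — can be sketched as a minimal loop. Everything here (the toy nearest-centroid classifier, the function names, the simulated expert) is an illustrative assumption, not the paper's implementation:

```python
# Minimal sketch of human-in-the-loop learning: predict, ask the expert,
# fold the expert's (possibly corrective) label back into the training data.
# The nearest-centroid classifier is a deliberately simple stand-in for any ML model.

def centroid(points):
    """Mean vector of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def predict(train, x):
    """Label of the class centroid nearest (squared Euclidean) to x."""
    by_label = {}
    for features, label in train:
        by_label.setdefault(label, []).append(features)
    centroids = {lab: centroid(pts) for lab, pts in by_label.items()}
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2 for a, b in zip(x, centroids[lab])))

def human_in_the_loop(train, cases, expert):
    """For each case: predict, let the expert confirm or correct, then learn."""
    train = list(train)  # copy, so the caller's data is untouched
    log = []
    for x in cases:
        guess = expert_guess = predict(train, x)
        truth = expert(x, expert_guess)   # expert confirms or overrides
        train.append((x, truth))          # feedback becomes training data
        log.append((guess, truth))
    return train, log

if __name__ == "__main__":
    train = [([0.0, 0.0], "benign"), ([1.0, 1.0], "malignant")]
    # Simulated expert; in the paper's setting this role is played by a clinician.
    expert = lambda x, guess: "benign" if sum(x) < 1.0 else "malignant"
    updated, log = human_in_the_loop(train, [[0.1, 0.0], [0.9, 1.0]], expert)
    print(log)
```

In the paper's Mutual Explanations setting this exchange would additionally carry explanations in both directions (model justifies its guess, expert justifies a correction); the sketch keeps only the label-feedback skeleton.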


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f798/7861251/ef680d69ca8d/frai-03-507973-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f798/7861251/e3b2dbf14901/frai-03-507973-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f798/7861251/1efd7adf323f/frai-03-507973-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f798/7861251/c4809870ccfa/frai-03-507973-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f798/7861251/4757955816b6/frai-03-507973-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f798/7861251/fb1011f7a8ee/frai-03-507973-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f798/7861251/1acbed3f96e5/frai-03-507973-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f798/7861251/09df9ad69bec/frai-03-507973-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f798/7861251/709ede32aa6b/frai-03-507973-g0009.jpg

Similar Articles

1. The Next Generation of Medical Decision Support: A Roadmap Toward Transparent Expert Companions.
Front Artif Intell. 2020 Sep 24;3:507973. doi: 10.3389/frai.2020.507973. eCollection 2020.
2. ExAID: A multimodal explanation framework for computer-aided diagnosis of skin lesions.
Comput Methods Programs Biomed. 2022 Mar;215:106620. doi: 10.1016/j.cmpb.2022.106620. Epub 2022 Jan 5.
3. Ada-WHIPS: explaining AdaBoost classification with applications in the health sciences.
BMC Med Inform Decis Mak. 2020 Oct 2;20(1):250. doi: 10.1186/s12911-020-01201-2.
4. A Mobile App That Addresses Interpretability Challenges in Machine Learning-Based Diabetes Predictions: Survey-Based User Study.
JMIR Form Res. 2023 Nov 13;7:e50328. doi: 10.2196/50328.
5. Opening the Black Box: The Promise and Limitations of Explainable Machine Learning in Cardiology.
Can J Cardiol. 2022 Feb;38(2):204-213. doi: 10.1016/j.cjca.2021.09.004. Epub 2021 Sep 14.
6. Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations.
Kunstliche Intell (Oldenbourg). 2020;34(2):193-198. doi: 10.1007/s13218-020-00636-z. Epub 2020 Jan 21.
7. CLARUS: An interactive explainable AI platform for manual counterfactuals in graph neural networks.
J Biomed Inform. 2024 Feb;150:104600. doi: 10.1016/j.jbi.2024.104600. Epub 2024 Jan 30.
8. Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction.
J Imaging. 2020 May 28;6(6):37. doi: 10.3390/jimaging6060037.
9. A boosting framework for visuality-preserving distance metric learning and its application to medical image retrieval.
IEEE Trans Pattern Anal Mach Intell. 2010 Jan;32(1):30-44. doi: 10.1109/TPAMI.2008.273.
10. In Search of Trustworthy and Transparent Intelligent Systems With Human-Like Cognitive and Reasoning Capabilities.
Front Robot AI. 2020 Jun 19;7:76. doi: 10.3389/frobt.2020.00076. eCollection 2020.

Cited By

1. Current methods in explainable artificial intelligence and future prospects for integrative physiology.
Pflugers Arch. 2025 Apr;477(4):513-529. doi: 10.1007/s00424-025-03067-7. Epub 2025 Feb 25.
2. Integrating Explainable Machine Learning in Clinical Decision Support Systems: Study Involving a Modified Design Thinking Approach.
JMIR Form Res. 2024 Apr 16;8:e50475. doi: 10.2196/50475.
3. Pattern recognition of hematological profiles of tumors of the digestive tract: an exploratory study.
Front Med (Lausanne). 2023 Aug 16;10:1208022. doi: 10.3389/fmed.2023.1208022. eCollection 2023.
4. A nested cohort 5-year Canadian surveillance of Gram-negative antimicrobial resistance for optimized antimicrobial therapy.
Sci Rep. 2023 Aug 29;13(1):14142. doi: 10.1038/s41598-023-40012-z.
5. When performance is not enough-A multidisciplinary view on clinical decision support.
PLoS One. 2023 Apr 24;18(4):e0282619. doi: 10.1371/journal.pone.0282619. eCollection 2023.
6. Time series clustering of T cell subsets dissects heterogeneity in immune reconstitution and clinical outcomes among MUD-HCT patients receiving ATG or PTCy.
Front Immunol. 2023 Mar 20;14:1082727. doi: 10.3389/fimmu.2023.1082727. eCollection 2023.
7. Artificial Intelligence for Dementia Research Methods Optimization.
ArXiv. 2023 Mar 2:arXiv:2303.01949v1.
8. Unassisted Clinicians Versus Deep Learning-Assisted Clinicians in Image-Based Cancer Diagnostics: Systematic Review With Meta-analysis.
J Med Internet Res. 2023 Mar 2;25:e43832. doi: 10.2196/43832.
9. The grammar of interactive explanatory model analysis.
Data Min Knowl Discov. 2023 Feb 14:1-37. doi: 10.1007/s10618-023-00924-w.
10. Hybrid feature engineering of medical data via variational autoencoders with triplet loss: a COVID-19 prognosis study.
Sci Rep. 2023 Feb 17;13(1):2827. doi: 10.1038/s41598-023-29334-0.

References

1. Resolving challenges in deep learning-based analyses of histopathological images using explanation methods.
Sci Rep. 2020 Apr 14;10(1):6423. doi: 10.1038/s41598-020-62724-2.
2. Causability and explainability of artificial intelligence in medicine.
Wiley Interdiscip Rev Data Min Knowl Discov. 2019 Jul-Aug;9(4):e1312. doi: 10.1002/widm.1312. Epub 2019 Apr 2.
3. Identifying Clinical Terms in Medical Text Using Ontology-Guided Machine Learning.
JMIR Med Inform. 2019 May 10;7(2):e12596. doi: 10.2196/12596.
4. Deep neural networks outperform human expert's capacity in characterizing bioleaching bacterial biofilm composition.
Biotechnol Rep (Amst). 2019 Mar 7;22:e00321. doi: 10.1016/j.btre.2019.e00321. eCollection 2019 Jun.
5. An Observational Study of Deep Learning and Automated Evaluation of Cervical Images for Cancer Screening.
J Natl Cancer Inst. 2019 Sep 1;111(9):923-932. doi: 10.1093/jnci/djy225.
6. Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data.
JAMA Intern Med. 2018 Nov 1;178(11):1544-1547. doi: 10.1001/jamainternmed.2018.3763.
7. Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists.
Ann Oncol. 2018 Aug 1;29(8):1836-1842. doi: 10.1093/annonc/mdy166.
8. Can machine-learning improve cardiovascular risk prediction using routine clinical data?
PLoS One. 2017 Apr 4;12(4):e0174944. doi: 10.1371/journal.pone.0174944. eCollection 2017.
9. Interactive machine learning for health informatics: when do we need the human-in-the-loop?
Brain Inform. 2016 Jun;3(2):119-131. doi: 10.1007/s40708-016-0042-6. Epub 2016 Mar 2.
10. A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Understanding Autonomy in Future Systems.
Hum Factors. 2016 May;58(3):377-400. doi: 10.1177/0018720816634228. Epub 2016 Mar 22.