


Proto-Caps: interpretable medical image classification using prototype learning and privileged information.

Author Information

Gallée Luisa, Lisson Catharina Silvia, Ropinski Timo, Beer Meinrad, Götz Michael

Affiliations

Experimental Radiology, Ulm University Medical Center, Ulm, Germany.

XAIRAD - Cooperation for Artificial Intelligence in Experimental Radiology, Ulm, Germany.

Publication Information

PeerJ Comput Sci. 2025 May 29;11:e2908. doi: 10.7717/peerj-cs.2908. eCollection 2025.

DOI: 10.7717/peerj-cs.2908
PMID: 40567722
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12192993/
Abstract

Explainable artificial intelligence (xAI) is becoming increasingly important as the need to understand a model's reasoning grows when applying it in high-risk areas. This is especially crucial in the field of medicine, where decision support systems are utilised to make diagnoses or to determine appropriate therapies. Here it is essential to provide intuitive and comprehensive explanations so that the system's correctness can be evaluated. To meet this need, we have developed Proto-Caps, an intrinsically explainable model for image classification. It explains its decisions by providing visual prototypes that resemble specific appearance features. These characteristics are predefined by humans, which on the one hand makes them understandable and on the other hand leads the model to base its decision on the same features as a human expert. On two public datasets, this method shows better performance than existing explainable approaches, despite the additional explainability modality provided by the visual prototypes. Beyond the performance evaluations, we analysed truthfulness by examining the joint information between the target prediction and its explanation output, in order to ensure that the explanation actually reflects the reasoning behind the target classification. Through extensive hyperparameter studies we also identified optimal model settings, providing a starting point for further research. Our work emphasises the prospects of combining xAI approaches for greater explainability and demonstrates that incorporating explainability does not necessarily lead to a loss of performance.
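The abstract describes the approach only at a high level. As a rough illustration of the idea, the sketch below shows a minimal prototype-based classifier whose final prediction is derived from human-interpretable attribute scores, with those attributes supervised by expert annotations that are available only at training time (the "privileged information"). This is not the authors' Proto-Caps implementation, which builds on capsule networks; it is a hypothetical PyTorch sketch, and all module names, dimensions, and loss weights are assumptions for illustration only.

```python
# Minimal sketch (NOT the authors' Proto-Caps implementation) of a
# prototype-based, attribute-supervised classifier. All names, sizes,
# and loss weights are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeClassifier(nn.Module):
    def __init__(self, feat_dim=128, n_prototypes=16, n_attributes=8, n_classes=2):
        super().__init__()
        # Placeholder backbone mapping an image to a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Learnable prototypes living in the same feature space as the encoder output.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, feat_dim))
        # Attribute head: predicts human-defined appearance characteristics
        # (the privileged information) from prototype similarities.
        self.attribute_head = nn.Linear(n_prototypes, n_attributes)
        # Target head: the class prediction is derived from the attribute scores,
        # so explanation and prediction share one pathway.
        self.target_head = nn.Linear(n_attributes, n_classes)

    def forward(self, x):
        z = self.encoder(x)                                # image features, (B, feat_dim)
        # Similarity of each sample to every prototype (negative squared distance).
        sim = -torch.cdist(z, self.prototypes) ** 2        # (B, n_prototypes)
        attributes = self.attribute_head(sim)              # attribute scores, (B, n_attributes)
        logits = self.target_head(attributes)              # class prediction, (B, n_classes)
        return logits, attributes, sim

def loss_fn(logits, attributes, y, attr_targets, lam=0.5):
    # Joint objective: classification loss plus attribute supervision from
    # expert annotations that are only available during training.
    return F.cross_entropy(logits, y) + lam * F.mse_loss(attributes, attr_targets)
```

At inference time the prototype similarities and attribute scores can be inspected alongside the class prediction, which is the sense in which the explanation and the decision are tied to the same features.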


Similar Articles

1
Proto-Caps: interpretable medical image classification using prototype learning and privileged information.
PeerJ Comput Sci. 2025 May 29;11:e2908. doi: 10.7717/peerj-cs.2908. eCollection 2025.
2
Cost-effectiveness of using prognostic information to select women with breast cancer for adjuvant systemic therapy.
Health Technol Assess. 2006 Sep;10(34):iii-iv, ix-xi, 1-204. doi: 10.3310/hta10340.
3
Are Artificial Intelligence Models Listening Like Cardiologists? Bridging the Gap Between Artificial Intelligence and Clinical Reasoning in Heart-Sound Classification Using Explainable Artificial Intelligence.
Bioengineering (Basel). 2025 May 22;12(6):558. doi: 10.3390/bioengineering12060558.
4
Adapting Safety Plans for Autistic Adults with Involvement from the Autism Community.
Autism Adulthood. 2025 May 28;7(3):293-302. doi: 10.1089/aut.2023.0124. eCollection 2025 Jun.
5
Signs and symptoms to determine if a patient presenting in primary care or hospital outpatient settings has COVID-19.
Cochrane Database Syst Rev. 2022 May 20;5(5):CD013665. doi: 10.1002/14651858.CD013665.pub3.
6
Antidepressants for pain management in adults with chronic pain: a network meta-analysis.
Health Technol Assess. 2024 Oct;28(62):1-155. doi: 10.3310/MKRT2948.
7
Home treatment for mental health problems: a systematic review.
Health Technol Assess. 2001;5(15):1-139. doi: 10.3310/hta5150.
8
Systemic pharmacological treatments for chronic plaque psoriasis: a network meta-analysis.
Cochrane Database Syst Rev. 2021 Apr 19;4(4):CD011535. doi: 10.1002/14651858.CD011535.pub4.
9
Systemic pharmacological treatments for chronic plaque psoriasis: a network meta-analysis.
Cochrane Database Syst Rev. 2020 Jan 9;1(1):CD011535. doi: 10.1002/14651858.CD011535.pub3.
10
Survivor, family and professional experiences of psychosocial interventions for sexual abuse and violence: a qualitative evidence synthesis.
Cochrane Database Syst Rev. 2022 Oct 4;10(10):CD013648. doi: 10.1002/14651858.CD013648.pub2.
