

DISCRET: Synthesizing Faithful Explanations For Treatment Effect Estimation.

Authors

Wu Yinjun, Keoliya Mayank, Chen Kan, Velingker Neelay, Li Ziyang, Getzen Emily J, Long Qi, Naik Mayur, Parikh Ravi B, Wong Eric

Affiliations

School of Computer Science, Peking University, Beijing, China.

Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, United States.

Publication

Proc Mach Learn Res. 2024 Jul;235:53597-53618.

PMID: 39205826
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11350397/
Abstract

Designing faithful yet accurate AI models is challenging, particularly in the field of individual treatment effect estimation (ITE). ITE prediction models deployed in critical settings such as healthcare should ideally be (i) accurate, and (ii) provide faithful explanations. However, current solutions are inadequate: state-of-the-art black-box models do not supply explanations, post-hoc explainers for black-box models lack faithfulness guarantees, and self-interpretable models greatly compromise accuracy. To address these issues, we propose DISCRET, a self-interpretable ITE framework that synthesizes faithful, rule-based explanations for each sample. A key insight behind DISCRET is that explanations can serve dually as queries to identify similar subgroups of samples. We provide a novel RL algorithm to efficiently synthesize these explanations from a large search space. We evaluate DISCRET on diverse tasks involving tabular, image, and text data. DISCRET outperforms the best self-interpretable models and has accuracy comparable to the best black-box models while providing faithful explanations. DISCRET is available at https://github.com/wuyinjun-1993/DISCRET-ICML2024.
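The abstract's central idea, a rule-based explanation that doubles as a query picking out a subgroup of similar samples from which a treatment effect is then estimated, can be sketched on toy tabular data. The sketch below is illustrative only: the rule format, the column names (`age`, `biomarker`, `T`, `Y`), and the simple difference-in-means estimator within the subgroup are assumptions for this example, not the paper's actual method (which synthesizes the rules with reinforcement learning).

```python
import numpy as np
import pandas as pd

def apply_rule(df, rule):
    """Select the subgroup of samples satisfying a conjunctive rule.

    A rule is a list of (feature, op, threshold) literals; the subgroup
    is the set of rows satisfying every literal."""
    mask = pd.Series(True, index=df.index)
    for feat, op, thr in rule:
        if op == "<=":
            mask &= df[feat] <= thr
        elif op == ">":
            mask &= df[feat] > thr
        elif op == "==":
            mask &= df[feat] == thr
    return df[mask]

def subgroup_ite(df, rule, treatment_col="T", outcome_col="Y"):
    """Estimate the treatment effect for the subgroup picked out by `rule`
    as the difference in mean outcomes between treated and control rows."""
    sub = apply_rule(df, rule)
    treated = sub[sub[treatment_col] == 1][outcome_col]
    control = sub[sub[treatment_col] == 0][outcome_col]
    return treated.mean() - control.mean()

# Toy tabular data: covariates, treatment indicator T, outcome Y.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age": rng.integers(20, 80, n),
    "biomarker": rng.normal(0.0, 1.0, n),
    "T": rng.integers(0, 2, n),
})
# Simulated ground truth: the treatment only helps older patients.
df["Y"] = 0.5 * df["T"] * (df["age"] > 60) + rng.normal(0.0, 0.1, n)

# The rule IS the explanation: it names the subgroup the estimate comes from.
rule = [("age", ">", 60)]
print(round(subgroup_ite(df, rule), 2))  # close to 0.5 for this subgroup
```

Because the estimate is computed directly from the subgroup the rule names, the explanation is faithful by construction: the rule cannot disagree with how the prediction was made.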


Similar Articles

1
DISCRET: Synthesizing Faithful Explanations For Treatment Effect Estimation.
Proc Mach Learn Res. 2024 Jul;235:53597-53618.
2
Explaining Black-Box Models for Biomedical Text Classification.
IEEE J Biomed Health Inform. 2021 Aug;25(8):3112-3120. doi: 10.1109/JBHI.2021.3056748. Epub 2021 Aug 5.
3
Concept-Based Lesion Aware Transformer for Interpretable Retinal Disease Diagnosis.
IEEE Trans Med Imaging. 2025 Jan;44(1):57-68. doi: 10.1109/TMI.2024.3429148. Epub 2025 Jan 2.
4
Improving Clinician Performance in Classifying EEG Patterns on the Ictal-Interictal Injury Continuum Using Interpretable Machine Learning.
NEJM AI. 2024 Jun;1(6). doi: 10.1056/aioa2300331. Epub 2024 May 23.
5
Training calibration-based counterfactual explainers for deep learning models in medical image analysis.
Sci Rep. 2022 Jan 12;12(1):597. doi: 10.1038/s41598-021-04529-5.
6
IHCP: interpretable hepatitis C prediction system based on black-box machine learning models.
BMC Bioinformatics. 2023 Sep 6;24(1):333. doi: 10.1186/s12859-023-05456-0.
7
Accurate, interpretable predictions of materials properties within transformer language models.
Patterns (N Y). 2023 Aug 2;4(10):100803. doi: 10.1016/j.patter.2023.100803. eCollection 2023 Oct 13.
8
Explainable AI for Bioinformatics: Methods, Tools and Applications.
Brief Bioinform. 2023 Sep 20;24(5). doi: 10.1093/bib/bbad236.
9
CI-GNN: A Granger causality-inspired graph neural network for interpretable brain network-based psychiatric diagnosis.
Neural Netw. 2024 Apr;172:106147. doi: 10.1016/j.neunet.2024.106147. Epub 2024 Jan 26.
10
Recurrence-Aware Long-Term Cognitive Network for Explainable Pattern Classification.
IEEE Trans Cybern. 2023 Oct;53(10):6083-6094. doi: 10.1109/TCYB.2022.3165104. Epub 2023 Sep 15.

References Cited in This Article

1
Using Machine Learning to Individualize Treatment Effect Estimation: Challenges and Opportunities.
Clin Pharmacol Ther. 2024 Apr;115(4):710-719. doi: 10.1002/cpt.3159. Epub 2024 Jan 12.
2
Testing Biased Randomization Assumptions and Quantifying Imperfect Matching and Residual Confounding in Matched Observational Studies.
J Comput Graph Stat. 2023;32(2):528-538. doi: 10.1080/10618600.2022.2116447. Epub 2022 Oct 19.
3
PsmPy: A Package for Retrospective Cohort Matching in Python.
Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:1354-1357. doi: 10.1109/EMBC48229.2022.9871333.
4
Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.
Nat Mach Intell. 2019 May;1(5):206-215. doi: 10.1038/s42256-019-0048-x. Epub 2019 May 13.
5
The Cancer Genome Atlas Pan-Cancer analysis project.
Nat Genet. 2013 Oct;45(10):1113-20. doi: 10.1038/ng.2764.
6
SLIC superpixels compared to state-of-the-art superpixel methods.
IEEE Trans Pattern Anal Mach Intell. 2012 Nov;34(11):2274-82. doi: 10.1109/TPAMI.2012.120.
7
Estimating Treatment Effects on Healthcare Costs Under Exogeneity: Is There a 'Magic Bullet'?
Health Serv Outcomes Res Methodol. 2011 Jul 1;11(1-2):1-26. doi: 10.1007/s10742-011-0072-8.