
Interpretable Decision Sets: A Joint Framework for Description and Prediction.

Authors

Himabindu Lakkaraju, Stephen H. Bach, Jure Leskovec

Affiliations

Stanford University

Publication

KDD. 2016 Aug;2016:1675-1684. doi: 10.1145/2939672.2939874.

Abstract

One of the most important obstacles to deploying predictive models is the fact that humans do not understand and trust them. Knowing which variables are important in a model's prediction and how they are combined can be very powerful in helping people understand and trust automatic decision making systems. Here we propose interpretable decision sets, a framework for building predictive models that are highly accurate, yet also highly interpretable. Decision sets are sets of independent if-then rules. Because each rule can be applied independently, decision sets are simple, concise, and easily interpretable. We formalize decision set learning through an objective function that simultaneously optimizes accuracy and interpretability of the rules. In particular, our approach learns short, accurate, and non-overlapping rules that cover the whole feature space and pay attention to small but important classes. Moreover, we prove that our objective is a non-monotone submodular function, which we efficiently optimize to find a near-optimal set of rules. Experiments show that interpretable decision sets are as accurate at classification as state-of-the-art machine learning techniques. They are also three times smaller on average than rule-based models learned by other methods. Finally, results of a user study show that people are able to answer multiple-choice questions about the decision boundaries of interpretable decision sets and write descriptions of classes based on them faster and more accurately than with other rule-based models that were designed for interpretability. Overall, our framework provides a new approach to interpretable machine learning that balances accuracy, interpretability, and computational efficiency.
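To make the abstract's central object concrete: a decision set is just a flat list of independent if-then rules plus a default class for uncovered points. A minimal Python sketch, with hypothetical rules and feature names (not the authors' implementation):

```python
# A decision set: independent if-then rules plus a default class.
# The rules and feature names below are hypothetical illustrations.

rules = [
    (lambda x: x["age"] < 30 and x["exercise"] == "high", "low-risk"),
    (lambda x: x["bmi"] >= 35, "high-risk"),
]
DEFAULT = "medium-risk"  # default class covers points no rule matches

def classify(x):
    # Each rule is evaluated independently of the others; the learned
    # objective discourages overlap, so at most one rule should fire.
    for predicate, label in rules:
        if predicate(x):
            return label
    return DEFAULT
```

Because each rule stands on its own, any single prediction can be explained by quoting just the one rule that fired, which is what makes the representation easy for people to read.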

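The learning step described in the abstract — choosing a subset of candidate rules to maximize a set-function objective — can be sketched with a plain greedy loop. The objective `f` below is a stand-in (coverage minus a size penalty); the paper's actual objective is non-monotone submodular and is optimized with an approximation algorithm for that setting, which this naive greedy does not reproduce:

```python
# Greedy selection of rules from a candidate pool to maximize a
# set-function objective f(selected). Illustrative sketch only.

def greedy_select(candidates, f, max_rules):
    selected = set()
    for _ in range(max_rules):
        best, best_gain = None, 0.0
        for c in candidates:
            if c in selected:
                continue
            # Marginal gain of adding rule c to the current set.
            gain = f(selected | {c}) - f(selected)
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:  # no remaining rule adds positive value
            break
        selected.add(best)
    return selected
```

Submodularity (diminishing marginal gains) is what makes subset selection like this tractable to approximate; for non-monotone objectives, stronger algorithms than greedy are needed for the guarantee the paper proves.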

Similar Articles

1
Learning Interpretable Rules for Scalable Data Representation and Classification.
IEEE Trans Pattern Anal Mach Intell. 2024 Feb;46(2):1121-1133. doi: 10.1109/TPAMI.2023.3328881. Epub 2024 Jan 8.
2
Interpretable machine learning methods for predictions in systems biology from omics data.
Front Mol Biosci. 2022 Oct 17;9:926623. doi: 10.3389/fmolb.2022.926623. eCollection 2022.
3
R.ROSETTA: an interpretable machine learning framework.
BMC Bioinformatics. 2021 Mar 6;22(1):110. doi: 10.1186/s12859-021-04049-z.
4
Multi-objective evolutionary algorithms for fuzzy classification in survival prediction.
Artif Intell Med. 2014 Mar;60(3):197-219. doi: 10.1016/j.artmed.2013.12.006. Epub 2014 Jan 9.
5
Efficient and interpretable prediction of protein functional classes by correspondence analysis and compact set relations.
PLoS One. 2013 Oct 11;8(10):e75542. doi: 10.1371/journal.pone.0075542. eCollection 2013.
6
SMILE: systems metabolomics using interpretable learning and evolution.
BMC Bioinformatics. 2021 May 28;22(1):284. doi: 10.1186/s12859-021-04209-1.
7
Interpretable gene expression classifier with an accurate and compact fuzzy rule base for microarray data analysis.
Biosystems. 2006 Sep;85(3):165-76. doi: 10.1016/j.biosystems.2006.01.002. Epub 2006 Feb 21.

Cited By

1
CRE: An R package for interpretable discovery and inference of heterogeneous treatment effects.
J Open Source Softw. 2023;8(92). doi: 10.21105/joss.05587. Epub 2023 Dec 15.
2
Two-step pragmatic subgroup discovery for heterogeneous treatment effects analyses: perspectives toward enhanced interpretability.
Eur J Epidemiol. 2025 Feb;40(2):141-150. doi: 10.1007/s10654-025-01215-y. Epub 2025 Mar 4.
3
Interpretable optimisation-based approach for hyper-box classification.
Mach Learn. 2025;114(3):51. doi: 10.1007/s10994-024-06643-7. Epub 2025 Feb 6.
4
Fast Interpretable Greedy-Tree Sums.
Proc Natl Acad Sci U S A. 2025 Feb 18;122(7):e2310151122. doi: 10.1073/pnas.2310151122. Epub 2025 Feb 14.
5
Adversarial Examples on XAI-Enabled DT for Smart Healthcare Systems.
Sensors (Basel). 2024 Oct 27;24(21):6891. doi: 10.3390/s24216891.
6
Explainable depression symptom detection in social media.
Health Inf Sci Syst. 2024 Sep 6;12(1):47. doi: 10.1007/s13755-024-00303-9. eCollection 2024 Dec.
7
Recommendations to promote fairness and inclusion in biomedical AI research and clinical use.
J Biomed Inform. 2024 Sep;157:104693. doi: 10.1016/j.jbi.2024.104693. Epub 2024 Jul 15.
8
Explainable Artificial Intelligence in Quantifying Breast Cancer Factors: Saudi Arabia Context.
Healthcare (Basel). 2024 May 15;12(10):1025. doi: 10.3390/healthcare12101025.
9
Roses Have Thorns: Understanding the Downside of Oncological Care Delivery Through Visual Analytics and Sequential Rule Mining.
IEEE Trans Vis Comput Graph. 2024 Jan;30(1):1227-1237. doi: 10.1109/TVCG.2023.3326939. Epub 2023 Dec 25.
10
Artificial Intelligence and Infectious Disease Imaging.
J Infect Dis. 2023 Oct 3;228(Suppl 4):S322-S336. doi: 10.1093/infdis/jiad158.

References Cited in This Article

1
Very Simple Structure: An Alternative Procedure For Estimating The Optimal Number Of Interpretable Factors.
Multivariate Behav Res. 1979 Oct 1;14(4):403-14. doi: 10.1207/s15327906mbr1404_2.
2
Bayesian reasoning with ifs and ands and ors.
Front Psychol. 2015 Feb 25;6:192. doi: 10.3389/fpsyg.2015.00192. eCollection 2015.
3
Obtaining interpretable fuzzy classification rules from medical data.
Artif Intell Med. 1999 Jun;16(2):149-69. doi: 10.1016/s0933-3657(98)00070-0.
