Lakkaraju Himabindu, Bach Stephen H, Leskovec Jure
Stanford University.
KDD. 2016 Aug;2016:1675-1684. doi: 10.1145/2939672.2939874.
One of the most important obstacles to deploying predictive models is that humans often neither understand nor trust them. Knowing which variables matter in a model's predictions, and how they are combined, can go a long way toward helping people understand and trust automatic decision-making systems. Here we propose interpretable decision sets, a framework for building predictive models that are highly accurate yet also highly interpretable. Decision sets are sets of independent if-then rules. Because each rule can be applied independently, decision sets are simple, concise, and easy to interpret. We formalize decision set learning through an objective function that simultaneously optimizes the accuracy and the interpretability of the rules. In particular, our approach learns short, accurate, and non-overlapping rules that cover the whole feature space and pay attention to small but important classes. Moreover, we prove that our objective is a non-monotone submodular function, which we optimize efficiently to find a near-optimal set of rules. Experiments show that interpretable decision sets are as accurate at classification as state-of-the-art machine learning techniques, while being on average three times smaller than rule-based models learned by other methods. Finally, a user study shows that people are able to answer multiple-choice questions about the decision boundaries of interpretable decision sets, and to write descriptions of classes based on them, faster and more accurately than with other rule-based models designed for interpretability. Overall, our framework offers a new approach to interpretable machine learning that balances accuracy, interpretability, and computational efficiency.
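To make the central idea concrete, the sketch below shows what a decision set looks like as a data structure: a set of independent if-then rules, each of which can be read and applied on its own. The class names, feature names, and the majority tie-break among fired rules are illustrative assumptions, not the paper's reference implementation; in a learned decision set, the objective penalizes overlap so that ideally at most one rule fires per input.

```python
# Minimal sketch of a decision set: independent if-then rules plus a default.
# Rule, DecisionSet, and the example features/labels are hypothetical names.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass(frozen=True)
class Rule:
    predicate: Callable[[Dict], bool]  # condition over feature values
    label: str                         # class predicted when the rule fires
    description: str                   # human-readable "IF ... THEN ..." text

class DecisionSet:
    def __init__(self, rules: List[Rule], default_label: str):
        self.rules = rules
        self.default_label = default_label  # fallback when no rule fires

    def predict(self, x: Dict) -> str:
        # Each rule applies independently of the others.
        fired = [r.label for r in self.rules if r.predicate(x)]
        if not fired:
            return self.default_label
        # Majority vote among fired rules (an assumption of this sketch;
        # the learned objective discourages overlap in the first place).
        return max(set(fired), key=fired.count)

rules = [
    Rule(lambda x: x["age"] < 30 and x["bmi"] >= 30, "diabetes-risk",
         "IF age < 30 AND bmi >= 30 THEN diabetes-risk"),
    Rule(lambda x: x["age"] >= 50, "screen-annually",
         "IF age >= 50 THEN screen-annually"),
]
model = DecisionSet(rules, default_label="no-action")
print(model.predict({"age": 27, "bmi": 32}))  # -> diabetes-risk
```

Because each rule stands alone, a user can verify or explain any single prediction by reading just the rule that fired, which is what makes this representation easy to reason about.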
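The learning problem itself selects a subset of candidate rules to maximize an objective that rewards accuracy and penalizes size, length, and overlap; the paper proves this objective is non-monotone submodular, for which local-search methods are the standard tool with constant-factor approximation guarantees. The sketch below, reusing `Rule` and `rules` from the previous snippet, illustrates the flavor with a stand-in objective (correct coverage minus size and overlap penalties) and a simple deterministic add/remove local search; it is not the paper's exact multi-term objective or its optimization procedure.

```python
# Stand-in submodular-style objective: reward points covered by a rule with
# the correct label, penalize the number of rules and overlapping coverage.
def objective(selected, data, lam_size=0.2, lam_overlap=0.5):
    correct, overlap = 0, 0
    for x, y in data:
        fired = [r for r in selected if r.predicate(x)]
        overlap += max(0, len(fired) - 1)          # overlapping rules cost
        if any(r.label == y for r in fired):
            correct += 1                           # correct coverage pays
    return correct - lam_size * len(selected) - lam_overlap * overlap

# Deterministic 1-swap local search: repeatedly apply the single rule
# addition or removal that most improves the objective, until none helps.
def local_search(candidates, data, max_iters=100):
    selected = frozenset()
    for _ in range(max_iters):
        current = objective(selected, data)
        best_gain, best_set = 0.0, None
        for r in candidates:
            trial = selected ^ {r}                 # add if absent, else remove
            gain = objective(trial, data) - current
            if gain > best_gain:
                best_gain, best_set = gain, trial
        if best_set is None:
            break                                  # local optimum reached
        selected = best_set
    return selected

# Tiny hypothetical labeled dataset for illustration only.
data = [({"age": 27, "bmi": 32}, "diabetes-risk"),
        ({"age": 55, "bmi": 24}, "screen-annually"),
        ({"age": 40, "bmi": 22}, "no-action")]
for r in local_search(rules, data):
    print(r.description)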