

SHAP and LIME: An Evaluation of Discriminative Power in Credit Risk.

Authors

Gramegna Alex, Giudici Paolo

Affiliation

Department of Economics and Management, University of Pavia, Pavia, Italy.

Publication

Front Artif Intell. 2021 Sep 17;4:752558. doi: 10.3389/frai.2021.752558. eCollection 2021.

Abstract

In credit risk estimation, the most important element is obtaining a probability of default as close as possible to the effective risk. This effort quickly prompted new, powerful algorithms, such as Gradient Boosting and ensemble methods, that reach far higher accuracy, but at the cost of intelligibility. These models are usually referred to as "black boxes": the inputs and the output are known, but there is little way to understand what is going on under the hood. In response, several Explainable AI models have flourished in recent years, with the aim of letting the user see why the black box gave a certain output. In this context, we evaluate two very popular eXplainable AI (XAI) models in their ability to discriminate observations into groups, by applying both unsupervised and predictive modeling to the weights these XAI models assign to features locally. The evaluation is carried out on real Small and Medium Enterprise data, obtained from official Italian repositories, and may form the basis for employing such XAI models for post-processing feature extraction.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0b67/8484963/ed72826fec50/frai-04-752558-g001.jpg
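The pipeline the abstract describes — compute local explanation weights for every observation, then cluster those weight vectors to see whether they separate observations into groups — can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: it uses synthetic data, a hand-rolled LIME-style linear surrogate in place of the SHAP/LIME libraries, and K-means for the unsupervised step; all names and parameters are assumptions.

```python
# Sketch: local feature weights per observation, then clustering of those
# weights, to probe whether the explanations themselves discriminate groups.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

def local_weights(x, n_perturb=200, scale=0.5):
    """LIME-style weights: fit a proximity-weighted linear surrogate around x."""
    Z = x + rng.normal(0.0, scale, size=(n_perturb, x.size))   # perturb around x
    p = black_box.predict_proba(Z)[:, 1]                       # black-box output
    d = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2)            # proximity kernel
    return Ridge(alpha=1.0).fit(Z, p, sample_weight=d).coef_

# One local weight vector per observation, stacked into a matrix.
W = np.vstack([local_weights(x) for x in X])

# Unsupervised step: do the explanation weights recover the two groups?
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(W)
print("agreement with true labels (ARI):", round(adjusted_rand_score(y, clusters), 3))
```

In the paper's setting, the weight matrix `W` would instead come from SHAP or LIME applied to the credit-risk model, and the recovered groups would be compared against default status.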
