

Recurrence-Aware Long-Term Cognitive Network for Explainable Pattern Classification.

Authors

Napoles Gonzalo, Salgueiro Yamisleydi, Grau Isel, Espinosa Maikel Leon

Publication

IEEE Trans Cybern. 2023 Oct;53(10):6083-6094. doi: 10.1109/TCYB.2022.3165104. Epub 2023 Sep 15.

DOI: 10.1109/TCYB.2022.3165104
PMID: 35476562
Abstract

Machine-learning solutions for pattern classification problems are nowadays widely deployed in society and industry. However, the lack of transparency and accountability of most accurate models often hinders their safe use. Thus, there is a clear need for developing explainable artificial intelligence mechanisms. There exist model-agnostic methods that summarize feature contributions, but their interpretability is limited to predictions made by black-box models. An open challenge is to develop models that have intrinsic interpretability and produce their own explanations, even for classes of models that are traditionally considered black boxes like (recurrent) neural networks. In this article, we propose a long-term cognitive network (LTCN) for interpretable pattern classification of structured data. Our method brings its own mechanism for providing explanations by quantifying the relevance of each feature in the decision process. For supporting the interpretability without affecting the performance, the model incorporates more flexibility through a quasi-nonlinear reasoning rule that allows controlling nonlinearity. Besides, we propose a recurrence-aware decision model that evades the issues posed by the unique fixed point while introducing a deterministic learning algorithm to compute the tunable parameters. The simulations show that our interpretable model obtains competitive results when compared to state-of-the-art white and black-box models.
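The quasi-nonlinear reasoning rule described in the abstract can be illustrated with a minimal sketch. This is a hypothetical two-concept example, not the authors' exact formulation: the weight matrix `W`, the sigmoid transfer function, and the convex-combination form of the update are all assumptions chosen to show how a single parameter can control the degree of nonlinearity in a recurrent update.

```python
import math

def sigmoid(x):
    """Standard logistic transfer function."""
    return 1.0 / (1.0 + math.exp(-x))

def quasi_nonlinear_step(a, W, phi):
    """One quasi-nonlinear recurrent reasoning step (illustrative sketch).

    a   : list of current neuron activations
    W   : weight matrix, W[j][i] is the influence of neuron j on neuron i
    phi : value in [0, 1] controlling nonlinearity; phi = 1 gives a purely
          nonlinear update, phi = 0 leaves activations unchanged, and
          intermediate values blend the two.
    """
    n = len(a)
    nxt = []
    for i in range(n):
        raw = sum(W[j][i] * a[j] for j in range(n))
        nxt.append(phi * sigmoid(raw) + (1.0 - phi) * a[i])
    return nxt

# Hypothetical two-concept network: with phi = 0 the state is a fixed
# point of the update; raising phi injects nonlinearity gradually.
a = [0.4, 0.6]
W = [[0.5, -0.2], [0.1, 0.3]]
print(quasi_nonlinear_step(a, W, 0.0))  # identical to a
print(quasi_nonlinear_step(a, W, 1.0))  # fully nonlinear update
```

The interpolation also hints at why such a model can evade a unique fixed point: because the previous state enters the update directly, the network's trajectory retains information about its input rather than collapsing to one attractor regardless of the initial activation.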


Similar Articles

1. Recurrence-Aware Long-Term Cognitive Network for Explainable Pattern Classification.
IEEE Trans Cybern. 2023 Oct;53(10):6083-6094. doi: 10.1109/TCYB.2022.3165104. Epub 2023 Sep 15.

2. Opening the Black Box: The Promise and Limitations of Explainable Machine Learning in Cardiology.
Can J Cardiol. 2022 Feb;38(2):204-213. doi: 10.1016/j.cjca.2021.09.004. Epub 2021 Sep 14.

3. Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction.
J Imaging. 2020 May 28;6(6):37. doi: 10.3390/jimaging6060037.

4. Ada-WHIPS: explaining AdaBoost classification with applications in the health sciences.
BMC Med Inform Decis Mak. 2020 Oct 2;20(1):250. doi: 10.1186/s12911-020-01201-2.

5. Understanding the black-box: towards interpretable and reliable deep learning models.
PeerJ Comput Sci. 2023 Nov 29;9:e1629. doi: 10.7717/peerj-cs.1629. eCollection 2023.

6. Utilization of model-agnostic explainable artificial intelligence frameworks in oncology: a narrative review.
Transl Cancer Res. 2022 Oct;11(10):3853-3868. doi: 10.21037/tcr-22-1626.

7. Thermodynamics-inspired explanations of artificial intelligence.
Nat Commun. 2024 Sep 9;15(1):7859. doi: 10.1038/s41467-024-51970-x.

8. An explainable self-attention deep neural network for detecting mild cognitive impairment using multi-input digital drawing tasks.
Alzheimers Res Ther. 2022 Aug 9;14(1):111. doi: 10.1186/s13195-022-01043-2.

9. Development of prediction models for one-year brain tumour survival using machine learning: a comparison of accuracy and interpretability.
Comput Methods Programs Biomed. 2023 May;233:107482. doi: 10.1016/j.cmpb.2023.107482. Epub 2023 Mar 13.

10. An Explainable Artificial Intelligence Framework for the Deterioration Risk Prediction of Hepatitis Patients.
J Med Syst. 2021 Apr 13;45(5):61. doi: 10.1007/s10916-021-01736-5.