


A Critical Review of Inductive Logic Programming Techniques for Explainable AI.

Authors

Zhang Zheng, Yilmaz Levent, Liu Bo

Publication

IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):10220-10236. doi: 10.1109/TNNLS.2023.3246980. Epub 2024 Aug 5.

DOI:10.1109/TNNLS.2023.3246980
PMID:37018093
Abstract

Despite recent advances in modern machine learning algorithms, the opaqueness of their underlying mechanisms continues to be an obstacle in adoption. To instill confidence and trust in artificial intelligence (AI) systems, explainable AI (XAI) has emerged as a response to improve modern machine learning algorithms' explainability. Inductive logic programming (ILP), a subfield of symbolic AI, plays a promising role in generating interpretable explanations because of its intuitive logic-driven framework. ILP effectively leverages abductive reasoning to generate explainable first-order clausal theories from examples and background knowledge. However, several challenges in developing methods inspired by ILP need to be addressed for their successful application in practice. For example, the existing ILP systems often have a vast solution space, and the induced solutions are very sensitive to noise and disturbances. This survey paper summarizes the recent advances in ILP and discusses statistical relational learning (SRL) and neural-symbolic algorithms, which offer synergistic views to ILP. Following a critical review of the recent advances, we delineate observed challenges and highlight potential avenues of further ILP-motivated research toward developing self-explanatory AI systems.
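The abstract describes ILP as inducing first-order clausal theories from examples and background knowledge. As a rough illustration of that idea only (a toy brute-force search over hypothetical family-relation data invented here, not the paper's method or any particular ILP system), the following sketch learns a one-clause definition of `grandparent/2`:

```python
# Toy sketch of ILP-style rule induction (hypothetical example, not from the
# paper): learn grandparent(X, Y) :- body from background knowledge and
# labeled examples by brute-force search over two-literal clause bodies.
from itertools import product

# Background knowledge: parent(A, B) facts (invented data).
parent = {("ann", "bob"), ("bob", "cal"), ("bob", "dee"), ("eve", "fay")}

# Positive and negative examples of the target predicate grandparent/2.
pos = {("ann", "cal"), ("ann", "dee")}
neg = {("ann", "bob"), ("bob", "cal"), ("eve", "fay")}

people = {p for pair in parent for p in pair}

def covers(body, example):
    """True if the clause grandparent(X, Y) :- body holds for this example.
    body is a list of ('parent', v1, v2) literals over variables X, Y, Z;
    Z is existentially quantified over known individuals."""
    x, y = example
    for z in people:
        env = {"X": x, "Y": y, "Z": z}
        if all((env[a], env[b]) in parent for _, a, b in body):
            return True
    return False

def induce():
    """Return the first two-literal body that covers every positive
    example and no negative example, or None if the search fails."""
    vars_ = ["X", "Y", "Z"]
    literals = [("parent", a, b) for a, b in product(vars_, vars_) if a != b]
    for l1, l2 in product(literals, repeat=2):
        body = [l1, l2]
        if all(covers(body, e) for e in pos) and \
           not any(covers(body, e) for e in neg):
            return body
    return None

clause = induce()  # learns: grandparent(X,Y) :- parent(X,Z), parent(Z,Y)
```

Real ILP systems search far larger hypothesis spaces with pruning and noise handling; the "vast solution space" and noise sensitivity the abstract mentions are exactly what this naive enumeration does not address.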


Similar Articles

1. A Critical Review of Inductive Logic Programming Techniques for Explainable AI.
   IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):10220-10236. doi: 10.1109/TNNLS.2023.3246980. Epub 2024 Aug 5.
2. Generating Explanations for Conceptual Validation of Graph Neural Networks: An Investigation of Symbolic Predicates Learned on Relevance-Ranked Sub-Graphs.
   Kunstliche Intell (Oldenbourg). 2022;36(3-4):271-285. doi: 10.1007/s13218-022-00781-7. Epub 2022 Nov 7.
3. Explainable AI for Bioinformatics: Methods, Tools and Applications.
   Brief Bioinform. 2023 Sep 20;24(5). doi: 10.1093/bib/bbad236.
4. Relational machine learning for electronic health record-driven phenotyping.
   J Biomed Inform. 2014 Dec;52:260-70. doi: 10.1016/j.jbi.2014.07.007. Epub 2014 Jul 15.
5. Explanatory pragmatism: a context-sensitive framework for explainable medical AI.
   Ethics Inf Technol. 2022;24(1):13. doi: 10.1007/s10676-022-09632-3. Epub 2022 Feb 28.
6. Explainability and white box in drug discovery.
   Chem Biol Drug Des. 2023 Jul;102(1):217-233. doi: 10.1111/cbdd.14262. Epub 2023 Apr 27.
7. Explainable artificial intelligence in emergency medicine: an overview.
   Clin Exp Emerg Med. 2023 Dec;10(4):354-362. doi: 10.15441/ceem.23.145. Epub 2023 Nov 28.
8. A Survey on Medical Explainable AI (XAI): Recent Progress, Explainability Approach, Human Interaction and Scoring System.
   Sensors (Basel). 2022 Oct 21;22(20):8068. doi: 10.3390/s22208068.
9. Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data.
   Front Artif Intell. 2023 Dec 4;6:1272506. doi: 10.3389/frai.2023.1272506. eCollection 2023.
10. Explainable artificial intelligence approaches for brain-computer interfaces: a review and design space.
    J Neural Eng. 2024 Aug 8;21(4). doi: 10.1088/1741-2552/ad6593.