Zhang Zheng, Yilmaz Levent, Liu Bo
IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):10220-10236. doi: 10.1109/TNNLS.2023.3246980. Epub 2024 Aug 5.
Despite recent advances in modern machine learning algorithms, the opaqueness of their underlying mechanisms continues to be an obstacle to adoption. To instill confidence and trust in artificial intelligence (AI) systems, explainable AI (XAI) has emerged as a response aimed at improving the explainability of modern machine learning algorithms. Inductive logic programming (ILP), a subfield of symbolic AI, plays a promising role in generating interpretable explanations because of its intuitive logic-driven framework. ILP effectively leverages abductive reasoning to generate explainable first-order clausal theories from examples and background knowledge. However, several challenges in developing methods inspired by ILP need to be addressed for their successful application in practice. For example, existing ILP systems often have a vast solution space, and the induced solutions are very sensitive to noise and disturbances. This survey paper summarizes recent advances in ILP and discusses statistical relational learning (SRL) and neural-symbolic algorithms, which offer synergistic perspectives on ILP. Following a critical review of the recent advances, we delineate observed challenges and highlight potential avenues of further ILP-motivated research toward developing self-explanatory AI systems.
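The abstract's description of ILP, learning a first-order clausal theory from examples and background knowledge, can be made concrete with a toy sketch. The Python snippet below is an illustrative assumption rather than code from any of the surveyed systems: the parent/grandparent predicates, the hand-written two-clause hypothesis space, and the helper names are all hypothetical, and the "search" simply keeps a clause that covers every positive example and no negative one.

```python
# Toy sketch of the basic ILP setting (illustrative only, not from the survey):
# given background facts and labeled examples for a target predicate, test a
# tiny hypothesis space of clauses and keep one consistent with the examples.

# Background knowledge: ground facts for the parent/2 relation.
background = {
    ("parent", "alice", "bob"),
    ("parent", "bob", "carol"),
    ("parent", "bob", "dave"),
    ("parent", "eve", "frank"),
}

# Labeled examples for the hypothetical target predicate grandparent/2.
positives = {("alice", "carol"), ("alice", "dave")}
negatives = {("bob", "carol"), ("eve", "frank")}

def covers_chain(x, y):
    """Body 'parent(X, Z), parent(Z, Y)': some Z links X to Y."""
    return any(("parent", x, z) in background and ("parent", z, y) in background
               for (_, _, z) in background)

def covers_direct(x, y):
    """Body 'parent(X, Y)': X is directly a parent of Y."""
    return ("parent", x, y) in background

# Hand-written hypothesis space: (readable clause, coverage test).
hypotheses = [
    ("grandparent(X, Y) :- parent(X, Z), parent(Z, Y).", covers_chain),
    ("grandparent(X, Y) :- parent(X, Y).", covers_direct),
]

def consistent(test):
    """Accept a clause only if it entails all positives and no negatives."""
    return (all(test(x, y) for x, y in positives) and
            not any(test(x, y) for x, y in negatives))

for clause, test in hypotheses:
    if consistent(test):
        print("Induced clause:", clause)
        break
```

Running this prints the chained clause, since the direct-parent clause fails to cover the positive examples. Real ILP systems search a far larger, automatically constructed clause space, which is precisely why the abstract notes that the solution space is vast and sensitive to noise.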