BIC-ESAT, ERE, and SKLTCS, College of Engineering, Peking University, Beijing, 100871, P. R. China.
Eastern Institute for Advanced Study, Yongriver Institute of Technology, Ningbo, Zhejiang, 315200, P. R. China.
Adv Sci (Weinh). 2022 Dec;9(35):e2204723. doi: 10.1002/advs.202204723. Epub 2022 Oct 10.
The interpretability of deep neural networks has attracted increasing attention in recent years, and several methods have been developed to interpret "black box" models. Fundamental limitations remain, however, that impede understanding of these networks, especially the extraction of understandable semantic spaces. In this work, the framework of semantic explainable artificial intelligence (S-XAI) is introduced. It uses a sample compression method based on a distinctive row-centered principal component analysis (PCA), different from the conventional column-centered PCA, to obtain common traits of samples from a convolutional neural network (CNN), and it extracts understandable semantic spaces on the basis of discovered semantically sensitive neurons and visualization techniques. A statistical interpretation of the semantic space is also provided, and the concept of semantic probability is proposed. The experimental results demonstrate that S-XAI is effective in providing a semantic interpretation for the CNN and offers broad applications, including trustworthiness assessment and semantic sample searching.
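The row-centered PCA at the heart of the sample compression step can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the input shapes, variable names, and use of SVD are illustrative choices, and in S-XAI the rows would correspond to flattened CNN feature representations of individual samples rather than random data.

```python
import numpy as np

def row_centered_pca(X, n_components):
    """Row-centered PCA: each sample (row) is centered by its own mean
    across features, unlike conventional PCA, which centers each
    feature (column) by its mean across samples."""
    Xc = X - X.mean(axis=1, keepdims=True)   # row-wise centering
    # SVD of the row-centered matrix; rows of Vt are principal axes
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]           # principal directions
    scores = Xc @ components.T               # compressed representation
    return scores, components

def column_centered_pca(X, n_components):
    """Conventional (column-centered) PCA, shown for comparison."""
    Xc = X - X.mean(axis=0, keepdims=True)   # feature-wise centering
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, Vt[:n_components]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical input: 50 flattened feature vectors of 4096 activations
    X = rng.normal(size=(50, 4096))
    scores, axes = row_centered_pca(X, n_components=5)
    print(scores.shape, axes.shape)  # (50, 5) (5, 4096)
```

The only difference between the two routines is the centering axis; the row-wise variant preserves cross-sample structure that the paper uses to extract common traits shared by samples of the same class.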