Liu Zhuoyang, Xu Feng
Key Lab of Information Science of Electromagnetic Waves, Fudan University, Shanghai, China.
Faculty of Math and Computer Science, Weizmann Institute of Science, Rehovot, Israel.
Front Artif Intell. 2023 Oct 13;6:974295. doi: 10.3389/frai.2023.974295. eCollection 2023.
In recent years, the rapid development of deep learning has driven great progress in computer vision, image recognition, pattern recognition, and speech signal processing. However, because deep neural networks (DNNs) are black boxes, their parameters cannot be explained, nor can one explain why they perform their assigned tasks so well. The interpretability of neural networks has therefore become a research hotspot in deep learning, covering topics in speech and text signal processing, image processing, differential equation solving, and other fields, with subtle differences in how interpretability is defined across fields. This paper divides interpretable neural network (INN) methods into two directions: model-decomposition neural networks and semantic INNs. The former constructs an INN by converting the analytical model of a conventional method into different layers of a neural network, combining the interpretability of the conventional model-based method with the powerful learning capability of the neural network. These INNs are further classified into subtypes according to the models from which they are derived, i.e., mathematical models, physical models, and other models. The second direction comprises interpretable networks that provide visual semantic information for user understanding. Their basic idea is to visualize all or part of the network structure and assign semantic information to it, which includes convolutional layer output visualization, decision tree extraction, semantic graphs, etc. These methods mainly use human visual logic to explain the structure of a black-box neural network, so they are post-network-design methods that assign interpretability to a black-box network afterward, as opposed to the pre-network-design approach of model-based INNs, which designs interpretable network structures beforehand. This paper reviews recent progress in these areas as well as various application scenarios of INNs and discusses existing problems and future development directions.
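To make the model-decomposition idea concrete, below is a minimal sketch (not taken from the paper) of the classic LISTA-style unrolling, in which each iteration of the ISTA sparse-coding algorithm becomes one network layer with learnable weights; the layer names, sizes, and number of layers here are illustrative assumptions.

```python
# Sketch of "model decomposition": unrolling the ISTA sparse-coding iteration
# x <- soft(S x + W_e y, theta) into network layers (LISTA-style), so each
# layer has the interpretable role of one iteration of the analytical model.
# Shapes and layer count are illustrative assumptions, not from the paper.
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    def __init__(self, n_measurements, n_atoms, n_layers=5):
        super().__init__()
        # W_e plays the role of (1/L) * D^T in ISTA, and each S_k the role of
        # I - (1/L) * D^T D; both are learned instead of fixed.
        self.W_e = nn.Linear(n_measurements, n_atoms, bias=False)
        self.S = nn.ModuleList(
            [nn.Linear(n_atoms, n_atoms, bias=False) for _ in range(n_layers)]
        )
        # theta is the learnable soft-threshold (lambda / L in ISTA), per layer.
        self.theta = nn.Parameter(torch.full((n_layers,), 0.1))

    @staticmethod
    def soft_threshold(x, theta):
        # Proximal operator of the l1 norm: the nonlinearity of each layer.
        return torch.sign(x) * torch.relu(torch.abs(x) - theta)

    def forward(self, y):
        b = self.W_e(y)            # data term, shared across all layers
        x = torch.zeros_like(b)    # start from the zero sparse code
        for k, S_k in enumerate(self.S):
            # One layer == one ISTA iteration: x <- soft(S_k x + b, theta_k)
            x = self.soft_threshold(S_k(x) + b, self.theta[k])
        return x

# Usage: recover sparse codes from a batch of compressed measurements.
net = UnrolledISTA(n_measurements=64, n_atoms=256, n_layers=5)
y = torch.randn(8, 64)
x_hat = net(y)                     # each layer's output is interpretable as
                                   # one iterate of the underlying model
```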
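For the semantic direction, a minimal sketch of convolutional layer output visualization is shown below, assuming a pretrained torchvision ResNet-18 purely as an example backbone; the choice of layer and the ranking heuristic are illustrative assumptions, not the paper's method.

```python
# Sketch of "convolutional layer output visualization": register a forward
# hook on a convolutional stage and inspect its feature maps, the activations
# a practitioner would render as heatmaps to assign semantic meaning to units.
# The backbone, hooked layer, and ranking heuristic are assumptions.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

feature_maps = {}

def save_output(name):
    def hook(module, inputs, output):
        feature_maps[name] = output.detach()  # keep activations for inspection
    return hook

# Hook the last conv stage; any layer of interest can be hooked the same way.
model.layer4.register_forward_hook(save_output("layer4"))

with torch.no_grad():
    x = torch.randn(1, 3, 224, 224)           # stand-in for a preprocessed image
    model(x)

fmap = feature_maps["layer4"][0]              # (channels, H, W)
# Rank channels by mean activation: the strongly responding channels are the
# ones typically visualized over the input image for human interpretation.
strongest = fmap.mean(dim=(1, 2)).topk(5).indices
print("most active layer4 channels:", strongest.tolist())
```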