
Interpretable neural networks: principles and applications.

Author Information

Liu Zhuoyang, Xu Feng

Affiliations

Key Lab of Information Science of Electromagnetic Waves, Fudan University, Shanghai, China.

Faculty of Math and Computer Science, Weizmann Institute of Science, Rehovot, Israel.

Publication Information

Front Artif Intell. 2023 Oct 13;6:974295. doi: 10.3389/frai.2023.974295. eCollection 2023.


DOI: 10.3389/frai.2023.974295
PMID: 37899962
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10606258/
Abstract

In recent years, with the rapid development of deep learning, great progress has been made in computer vision, image recognition, pattern recognition, and speech signal processing. However, because of the black-box nature of deep neural networks (DNNs), it is difficult to explain what the parameters of a deep network represent or why the network performs its assigned tasks so well. The interpretability of neural networks has therefore become a research hotspot in deep learning, covering topics in speech and text signal processing, image processing, differential-equation solving, and other fields, with subtle differences in how interpretability is defined across them. This paper divides interpretable neural network (INN) methods into two directions: model-decomposition neural networks and semantic INNs. The former constructs an INN by converting the analytical model of a conventional method into the layers of a neural network, combining the interpretability of the conventional model-based method with the powerful learning capability of the network. INNs of this type are further classified by the kind of model they are derived from: mathematical models, physical models, or other models. The second type is the interpretable network with visual semantic information for user understanding. Its basic idea is to visualize the whole or part of the network structure and thereby assign semantic information to it; techniques include convolutional-layer output visualization, decision-tree extraction, and semantic graphs. Methods of this type mainly use human visual logic to explain the structure of a black-box network, so they are post-network-design methods that assign interpretability to a black-box structure after the fact, as opposed to the pre-network-design approach of model-based INNs, which designs an interpretable structure beforehand. This paper reviews recent progress in both areas as well as various application scenarios of INNs, and discusses open problems and future development directions.
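To make the model-decomposition direction concrete, the sketch below unrolls ISTA, a classical iterative sparse-coding solver, into network layers in the style of LISTA, a canonical example of this approach. It is a minimal illustration assuming PyTorch; the dimensions, layer count, initial threshold, and all names are illustrative choices, not taken from the paper.

import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    """Each layer mirrors one ISTA iteration, so every weight has an
    analytical counterpart in the classical solver (illustrative sketch)."""
    def __init__(self, m: int, n: int, n_layers: int = 5):
        super().__init__()
        # W_e plays the role of (1/L) * A^T and S the role of
        # I - (1/L) * A^T A in classical ISTA; here both are learned.
        self.W_e = nn.Linear(m, n, bias=False)
        self.S = nn.Linear(n, n, bias=False)
        # One learnable soft-threshold per layer (the ISTA shrinkage step).
        self.theta = nn.Parameter(torch.full((n_layers,), 0.1))
        self.n_layers = n_layers

    @staticmethod
    def soft_threshold(x, theta):
        return torch.sign(x) * torch.relu(torch.abs(x) - theta)

    def forward(self, y):
        # y: (batch, m) measurements; z: (batch, n) sparse-code estimate.
        b = self.W_e(y)
        z = self.soft_threshold(b, self.theta[0])
        for k in range(1, self.n_layers):
            z = self.soft_threshold(b + self.S(z), self.theta[k])
        return z

# Usage: recover 256-dim sparse codes from 64-dim measurements, 10 unrolled steps.
model = UnrolledISTA(m=64, n=256, n_layers=10)
z_hat = model(torch.randn(8, 64))

Because each layer corresponds to one solver iteration, the learned W_e, S, and theta can be inspected against their analytical counterparts, which is precisely the kind of interpretability the model-based direction provides.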

[Figures 1–20 of the article (frai-06-974295-g0001 through frai-06-974295-g0020) are available at the PMC link above.]
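As a concrete illustration of the second, semantic direction, specifically the convolutional-layer output visualization that several of the article's figures depict, the sketch below captures intermediate feature maps with a forward hook and saves them as an image grid. It is a minimal example assuming PyTorch and torchvision; the model, the choice of layer1, and the output file name are illustrative.

import torch
import torchvision
from torchvision.utils import save_image

# Random weights here keep the example self-contained; for meaningful maps,
# load pretrained weights and feed a real image.
model = torchvision.models.resnet18(weights=None).eval()
captured = {}

def hook(module, inputs, output):
    # Stash the layer's activation tensor during the forward pass.
    captured["fmap"] = output.detach()

# Register the hook on an early residual block (layer choice is arbitrary).
model.layer1.register_forward_hook(hook)

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))  # stand-in for a real image

# captured["fmap"]: (1, C, H, W). Render each channel as a grayscale tile.
fmap = captured["fmap"][0].unsqueeze(1)                    # (C, 1, H, W)
fmap = (fmap - fmap.min()) / (fmap.max() - fmap.min() + 1e-8)
save_image(fmap, "layer1_feature_maps.png", nrow=8)

With pretrained weights (for example, weights="IMAGENET1K_V1") and a real input image instead of random noise, the saved grid shows which spatial patterns each channel responds to, the raw material for assigning semantic meaning to network units.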

Similar Articles

[1]
Interpretable neural networks: principles and applications.

Front Artif Intell. 2023-10-13

[2]
Research and Application of Ancient Chinese Pattern Restoration Based on Deep Convolutional Neural Network.

Comput Intell Neurosci. 2021

[3]
An Interpretable Deep Hierarchical Semantic Convolutional Neural Network for Lung Nodule Malignancy Classification.

Expert Syst Appl. 2019-08-15

[4]
On Interpretability of Artificial Neural Networks: A Survey.

IEEE Trans Radiat Plasma Med Sci. 2021-11

[5]
Semantic Interpretation for Convolutional Neural Networks: What Makes a Cat a Cat?

Adv Sci (Weinh). 2022-12

[6]
An attribution graph-based interpretable method for CNNs.

Neural Netw. 2024-11

[7]
An Interactive Visualization for Feature Localization in Deep Neural Networks.

Front Artif Intell. 2020-07-23

[8]
CiwGAN and fiwGAN: Encoding information in acoustic data to model lexical learning with Generative Adversarial Networks.

Neural Netw. 2021-07

[9]
Explaining decisions of graph convolutional neural networks: patient-specific molecular subnetworks responsible for metastasis prediction in breast cancer.

Genome Med. 2021-03-11

[10]
Illuminating the Black Box: Interpreting Deep Neural Network Models for Psychiatric Research.

Front Psychiatry. 2020-10-29

Cited By

[1]
Machine Learning and Artificial Intelligence for Infectious Disease Surveillance, Diagnosis, and Prognosis.

Viruses. 2025-06-23

[2]
Data-based regression models for predicting remifentanil pharmacokinetics.

Indian J Anaesth. 2024-12

[3]
Frontiers in artificial intelligence-directed light-sheet microscopy for uncovering biological phenomena and multi-organ imaging.

View (Beijing). 2024-10

[4]
A modular framework for multi-scale tissue imaging and neuronal segmentation.

Nat Commun. 2024-05-22

References Cited in This Article

[1]
On Interpretability of Artificial Neural Networks: A Survey.

IEEE Trans Radiat Plasma Med Sci. 2021-11

[2]
A graph placement methodology for fast chip design.

Nature. 2021-06

[3]
Machine learning-accelerated computational fluid dynamics.

Proc Natl Acad Sci U S A. 2021-05-25

[4]
Unsupervised content-preserving transformation for optical microscopy.

Light Sci Appl. 2021-03-01

[5]
Wave physics as an analog recurrent neural network.

Sci Adv. 2019-12-20

[6]
Deep Unfolded Robust PCA With Application to Clutter Suppression in Ultrasound.

IEEE Trans Med Imaging. 2020-04

[7]
Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction.

Nat Methods. 2019-07-08

[8]
Unmasking Clever Hans predictors and assessing what machines really learn.

Nat Commun. 2019-03-11

[9]
Visualizing deep neural network by alternately image blurring and deblurring.

Neural Netw. 2017-10-10

[10]
A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs.

Science. 2017-10-26
