
Knowledge Transfer-Based Sparse Deep Belief Network.

Author information

Yu Jianbo, Liu Guoliang

Publication information

IEEE Trans Cybern. 2023 Dec;53(12):7572-7583. doi: 10.1109/TCYB.2022.3173632. Epub 2023 Nov 29.

Abstract

Deep learning has made remarkable achievements in various applications in recent years. However, with the increasing demand for computing power and the "black box" problem of neural networks, the development of deep neural networks (DNNs) has entered a bottleneck period. This article proposes a novel deep belief network (DBN) based on knowledge transfer and optimization of the network structure. First, a neural-symbolic model is proposed to extract rules that describe the dynamic operation mechanism of the deep network. Second, knowledge fusion is performed by merging and deleting the rules extracted from the DBN model. Finally, a new DNN, the knowledge transfer-based sparse DBN (KT-SDBN), is constructed to generate a sparse network without excessive information loss. Compared with the DBN, KT-SDBN has a sparser network structure and better learning performance on the existing knowledge and data. Experimental results on benchmark data indicate that KT-SDBN not only achieves effective feature learning with 30% of the original network parameters but also attains a compression rate far larger than that of other structure-optimization algorithms.
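The abstract reports that KT-SDBN retains feature-learning performance with only 30% of the original network parameters. The paper's sparsification is driven by extracted rules and knowledge fusion, which the abstract does not detail; as a generic illustration only (not the KT-SDBN algorithm), the sketch below shows how a weight matrix can be pruned to roughly 30% of its parameters by magnitude, a common baseline for network structure compression. The function name and keep ratio are assumptions for illustration.

```python
import random

def prune_weights(W, keep_ratio=0.3):
    """Generic magnitude pruning: zero out all but the largest-magnitude
    weights, keeping roughly keep_ratio of the parameters."""
    flat = sorted((abs(w) for row in W for w in row), reverse=True)
    n_keep = max(1, int(len(flat) * keep_ratio))
    threshold = flat[n_keep - 1]  # smallest magnitude that survives
    return [[w if abs(w) >= threshold else 0.0 for w in row] for row in W]

random.seed(0)
# A toy 4x8 weight matrix standing in for one DBN layer's weights.
W = [[random.gauss(0, 1) for _ in range(8)] for _ in range(4)]
P = prune_weights(W, keep_ratio=0.3)
kept = sum(1 for row in P for w in row if w != 0.0)
total = len(P) * len(P[0])
print(kept, total)  # roughly 30% of the 32 weights survive
```

Unlike this one-shot magnitude heuristic, the paper's approach decides what to keep from symbolic rules extracted from the trained DBN, which is what lets it compress without excessive information loss.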

