Spatially Arranged Sparse Recurrent Neural Networks for Energy Efficient Associative Memory.

Publication Information

IEEE Trans Neural Netw Learn Syst. 2020 Jan;31(1):24-38. doi: 10.1109/TNNLS.2019.2899344. Epub 2019 Mar 15.

Abstract

The development of hardware neural networks, including neuromorphic hardware, has accelerated over the past few years. However, it is challenging to operate very large-scale neural networks with low-power hardware devices, partly because of signal transmission through a massive number of interconnections. Our aim is to address the issue of communication cost from an algorithmic viewpoint and to study learning algorithms for energy-efficient information processing. Here, we consider two approaches to finding spatially arranged sparse recurrent neural networks with a high cost-performance ratio for associative memory. In the first approach, following classical methods, we focus on sparse modular network structures inspired by biological brain networks and examine their storage capacity under an iterative learning rule. We show that incorporating long-range intermodule connections into purely modular networks can enhance the cost-performance ratio. In the second approach, we formulate for the first time an optimization problem in which network sparsity is maximized under the constraints imposed by a pattern embedding condition. We show that there is a tradeoff between interconnection cost and computational performance in the optimized networks. We demonstrate that the optimized networks achieve a better cost-performance ratio than those considered in the first approach. We show the effectiveness of the optimization approach mainly using binary patterns and also apply it to gray-scale image restoration. Our results suggest that the presented approaches are useful for seeking sparser and less costly connectivity in neural networks to enhance the energy efficiency of hardware neural networks.
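The abstract does not spell out the "pattern embedding condition"; in the associative-memory literature it usually means that each stored pattern must be a stable fixed point of the network dynamics, often with a stability margin. Under that assumption, the sparsity-maximization problem of the second approach can be sketched per neuron as follows (the paper's exact formulation, including any interconnection-cost terms, may differ):

```latex
\min_{\mathbf{w}_i}\; \|\mathbf{w}_i\|_0
\quad \text{subject to} \quad
\xi_i^{\mu} \sum_{j \neq i} w_{ij}\, \xi_j^{\mu} \;\ge\; \kappa
\quad \text{for } \mu = 1, \dots, P,
```

where ξ^μ ∈ {−1, +1}^N are the P stored patterns, w_i is the incoming weight vector of neuron i, and κ > 0 is a margin; minimizing the number of nonzero weights maximizes sparsity while the constraints embed every pattern as a fixed point.

The first approach, spatially arranged sparse connectivity for associative memory, can likewise be illustrated with a minimal sketch: a Hopfield-style network whose weights are pruned by a modular mask that is dense within modules and sparse between them. All parameters below (module counts, connection probabilities, and the simple Hebbian rule standing in for the paper's iterative learning rule) are illustrative assumptions, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not taken from the paper).
N_MODULES = 4      # number of spatial modules
MODULE_SIZE = 100  # neurons per module
N = N_MODULES * MODULE_SIZE
P_INTRA = 0.5      # intra-module connection probability (short, cheap wires)
P_INTER = 0.02     # long-range intermodule probability (sparse, costly wires)
N_PATTERNS = 5

def modular_mask(n_modules, module_size, p_intra, p_inter, rng):
    """Symmetric binary mask: dense within modules, sparse between them."""
    n = n_modules * module_size
    module = np.repeat(np.arange(n_modules), module_size)
    probs = np.where(module[:, None] == module[None, :], p_intra, p_inter)
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    return upper | upper.T  # symmetric with a zero diagonal

# Random +/-1 patterns to store.
patterns = rng.choice([-1, 1], size=(N_PATTERNS, N))

# Hebbian outer-product weights, pruned by the spatial mask.
# (The paper uses an iterative learning rule; Hebbian learning is a stand-in.)
mask = modular_mask(N_MODULES, MODULE_SIZE, P_INTRA, P_INTER, rng)
W = (patterns.T @ patterns) / N * mask

def recall(W, cue, steps=20):
    """Synchronous sign-activation dynamics starting from a noisy cue."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1  # break ties toward +1
    return s

# Flip 10% of one stored pattern's bits and try to recover it.
cue = patterns[0] * np.where(rng.random(N) < 0.1, -1, 1)
overlap = recall(W, cue) @ patterns[0] / N  # 1.0 means perfect retrieval
print(f"overlap with stored pattern: {overlap:.3f}")
```

Varying P_INTER in such a sketch is one way to probe the tradeoff the abstract describes: long-range intermodule connections raise the interconnection (wiring) cost but, as the first approach reports, can improve storage performance over a purely modular layout.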
