
Constructing an Associative Memory System Using Spiking Neural Network.

Authors

He Hu, Shang Yingjie, Yang Xu, Di Yingze, Lin Jiajun, Zhu Yimeng, Zheng Wenhao, Zhao Jinfeng, Ji Mengyao, Dong Liya, Deng Ning, Lei Yunlin, Chai Zenghao

Affiliations

Institute of Microelectronics, Tsinghua University, Beijing, China.

School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China.

Publication

Front Neurosci. 2019 Jul 3;13:650. doi: 10.3389/fnins.2019.00650. eCollection 2019.

Abstract

Development of computer science has led to the blooming of artificial intelligence (AI), and neural networks are the core of AI research. Although mainstream neural networks perform well in image processing and speech recognition, they do poorly in models aimed at understanding contextual information. In our opinion, the reason is that the essence of building a neural network through parameter training is to fit the data to statistical regularities. Since a neural network built this way does not possess memory ability, it cannot reflect causal relationships between data. Biological memory is fundamentally different from current mainstream digital memory in its storage method. Information stored in digital memory is converted to binary code and written into separate storage units. This physical isolation destroys the correlations within the information. Therefore, information stored in digital memory lacks the recall and association functions of biological memory, which can represent causality. In this paper, we present the results of our preliminary effort at constructing an associative memory system based on a spiking neural network. We broke the neural network building process into two phases: the Structure Formation Phase and the Parameter Training Phase. The Structure Formation Phase applies a learning method based on Hebb's rule to make neurons in the memory layer grow new synapses to neighboring neurons in response to the specific input spiking sequences fed to the network. The aim of this phase is to train the neural network to memorize those specific input spiking sequences. During the Parameter Training Phase, spike-timing-dependent plasticity (STDP) and reinforcement learning are employed to optimize synapse weights, so that the network can recall the memorized input spiking sequences.
The results show that our memory neural network could memorize different targets and could recall the images it had memorized.
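The two phases described above can be sketched in simplified form: a Hebbian structure-formation step that grows synapses between neurons that repeatedly co-fire, followed by a pair-based STDP weight update. This is a minimal illustrative sketch, not the paper's implementation; all function names, thresholds, and constants (`growth_threshold`, `a_plus`, `a_minus`, `tau`) are assumptions chosen for demonstration.

```python
import numpy as np

n_neurons = 16
# Synapse weight matrix; 0 means "no synapse grown yet".
weights = np.zeros((n_neurons, n_neurons))

def structure_formation(spike_trains, growth_threshold=3):
    """Hebbian structure formation (illustrative): grow a synapse
    between neurons that repeatedly fire in the same time step.

    spike_trains: binary array of shape (n_neurons, n_timesteps).
    Returns the number of newly grown synapses.
    """
    co_fire = np.zeros((n_neurons, n_neurons), dtype=int)
    for t in range(spike_trains.shape[1]):
        active = np.flatnonzero(spike_trains[:, t])
        for i in active:
            for j in active:
                if i != j:
                    co_fire[i, j] += 1
    # Grow synapses (small initial weight) where co-firing is frequent.
    grown = (co_fire >= growth_threshold) & (weights == 0)
    weights[grown] = 0.1
    return int(grown.sum())

def stdp_update(pre_time, post_time, w, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic spike, depress otherwise."""
    dt = post_time - pre_time
    if dt > 0:
        w += a_plus * np.exp(-dt / tau)   # causal pairing -> strengthen
    else:
        w -= a_minus * np.exp(dt / tau)   # anti-causal pairing -> weaken
    return float(np.clip(w, 0.0, 1.0))
```

In this sketch, repeatedly presenting a spiking sequence first wires up the co-active neurons (structure formation), after which STDP adjusts the grown weights according to spike timing; the paper additionally uses reinforcement learning in the second phase, which is omitted here.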


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1306/6615473/9600305bc3f9/fnins-13-00650-g0001.jpg
