Department of Electrical and Computer Engineering, University of Massachusetts, Amherst, MA, USA.
Nat Mater. 2019 Apr;18(4):309-323. doi: 10.1038/s41563-019-0291-x. Epub 2019 Mar 20.
With their working mechanisms based on ion migration, the switching dynamics and electrical behaviour of memristive devices resemble those of synapses and neurons, making these devices promising candidates for brain-inspired computing. Built into large-scale crossbar arrays to form neural networks, they perform efficient in-memory computing with massive parallelism by directly using physical laws. The dynamical interactions between artificial synapses and neurons equip the networks with both supervised and unsupervised learning capabilities. Moreover, their ability to interface with analogue signals from sensors without analogue/digital conversions reduces the processing time and energy overhead. Although numerous simulations have indicated the potential of these networks for brain-inspired computing, experimental implementation of large-scale memristive arrays is still in its infancy. This Review looks at the progress, challenges and possible solutions for efficient brain-inspired computation with memristive implementations, both as accelerators for deep learning and as building blocks for spiking neural networks.
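The "physical laws" behind crossbar in-memory computing are Ohm's law and Kirchhoff's current law: a weight matrix is stored as device conductances, and applying input voltages to the rows produces column currents that are exactly a vector-matrix product, computed in a single analogue read. A minimal NumPy sketch of this idealized operation (all device values are illustrative assumptions, not measured data, and non-idealities such as wire resistance and device variation are ignored):

```python
import numpy as np

# A memristive crossbar stores a weight matrix as conductances G (siemens).
# Driving the rows with voltages V gives, for each column j, a current
#   I_j = sum_i V_i * G_ij        (Ohm's law + Kirchhoff's current law)
# so one read of the array performs a full vector-matrix multiplication.

rng = np.random.default_rng(0)

n_rows, n_cols = 4, 3
G = rng.uniform(1e-6, 1e-4, size=(n_rows, n_cols))  # conductances in S (assumed range)
V = rng.uniform(0.0, 0.2, size=n_rows)              # read voltages in V (assumed range)

# Ideal crossbar output: the column currents, as a matrix-vector product.
I = V @ G  # currents in A, one per column

# Equivalent explicit per-column summation (Kirchhoff's current law):
I_explicit = np.array([sum(V[i] * G[i, j] for i in range(n_rows))
                       for j in range(n_cols)])

assert np.allclose(I, I_explicit)
```

Because every row-column product happens simultaneously in the analogue domain, the multiply-accumulate cost is independent of the matrix size seen by the electronics, which is the source of the parallelism the abstract refers to.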