Jin Boyan, Wang Zhenlong, Wang Tianyu, Meng Jialin
School of Integrated Circuits, Shandong University, Jinan 250101, China.
Shenzhen Research Institute of Shandong University, Shenzhen 518100, China.
Research (Wash D C). 2025 Jul 4;8:0758. doi: 10.34133/research.0758. eCollection 2025.
Artificial neural networks have long been studied to emulate the cognitive capabilities of the human brain for artificial intelligence (AI) computing. However, as computational demands intensify, conventional hardware based on transistor and complementary metal oxide semiconductor (CMOS) technology faces substantial limitations due to the separation of memory and processing, a challenge commonly known as the von Neumann bottleneck. In this review, we examine how memristors, which are novel nonvolatile memory devices whose resistance depends on the history of applied stimuli, can be harnessed to build more efficient and scalable neural networks. We provide a comprehensive background on the evolution of neural network models and memristors, as well as introduce the principles of memristive devices, which mimic the dynamic behavior of biological synapses. Various neural network architectures, including convolutional, recurrent, and spiking models, are discussed, highlighting the advantages of integrating memristors for in-memory computing and parallel processing. Our review further examines key mechanisms such as synaptic plasticity, encompassing both long-term potentiation and depression, as well as emerging learning algorithms that leverage memristive behavior. Finally, we identify current challenges, such as achieving ultra-low power consumption, high device uniformity, and seamless system integration, and propose future directions in materials science, device engineering, system integration, and industrialization. These advances suggest that memristor-based neural networks may pave the way for next-generation AI systems that combine low power consumption with high computational performance, ultimately bridging the gap between biological and electronic information processing.
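The in-memory computing and synaptic-plasticity mechanisms the abstract summarizes can be illustrated with a minimal NumPy sketch: a crossbar of memristor conductances performs a vector-matrix multiplication in one analog step (Ohm's law per device, Kirchhoff's current law per column), and a bounded pulse-update rule mimics long-term potentiation (LTP) and depression (LTD). The conductance ranges and the saturating update model below are illustrative assumptions, not taken from the review.

```python
import numpy as np

# A memristive crossbar stores a weight matrix as device conductances G (siemens).
# Driving the row lines with read voltages V produces column currents I = V @ G,
# i.e., the analog in-memory vector-matrix multiplication described in the review.
rng = np.random.default_rng(0)

G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # 4x3 crossbar conductances (assumed range)
V = np.array([0.2, 0.0, 0.1, 0.3])        # read voltages on the 4 row lines (volts)

I = V @ G                                  # column currents = matrix-vector product

# Synaptic plasticity sketch: potentiating (LTP) pulses nudge a device's
# conductance toward G_max; depressing (LTD) pulses nudge it toward G_min.
# This saturating, bounded update is a common simplified device model.
def apply_pulses(g, n_pulses, g_min=1e-6, g_max=1e-4, alpha=0.1):
    """Return the conductance after |n_pulses| pulses (positive = LTP, negative = LTD)."""
    for _ in range(abs(n_pulses)):
        if n_pulses > 0:
            g += alpha * (g_max - g)       # LTP: move toward the upper bound
        else:
            g -= alpha * (g - g_min)       # LTD: move toward the lower bound
    return g

g0 = 5e-5
g_ltp = apply_pulses(g0, +10)              # potentiation raises conductance
g_ltd = apply_pulses(g0, -10)              # depression lowers conductance
```

Because every column current is accumulated in parallel by wiring alone, the multiply-accumulate step needs no data movement between memory and a processor, which is the efficiency argument the review makes against the von Neumann bottleneck.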