Shi Tuo, Gao Lili, Tian Yang, Tang Shuangzhu, Liu Jinchang, Li Yiqi, Zhou Ruixi, Cui Shiyu, Zhang Hui, Li Yu, Wu Zuheng, Zhang Xumeng, Li Taihao, Yan Xiaobing, Liu Qi
Zhejiang Laboratory, Hangzhou, 311122, China.
Frontier Institute of Chip and System, Fudan University, Shanghai, 200433, China.
Nat Commun. 2025 Jan 21;16(1):913. doi: 10.1038/s41467-025-56286-y.
Inspired by biological processes, feature learning techniques such as deep learning have achieved great success in various fields. However, because biological organs may operate differently from semiconductor devices, deep models usually require dedicated hardware and are computationally complex. High energy consumption has made the continued growth of deep models unsustainable. We present an approach that implements feature learning directly in semiconductor physics, minimizing the disparity between model and hardware. Following this approach, we propose a feature learning technique based on memristor drift-diffusion kinetics, which leverages the dynamic response of a single memristor to learn features. Compared with deep models, the kinetics-based network reduces model parameters and computational operations by up to two and four orders of magnitude, respectively. We experimentally implement the proposed network on 180 nm memristor chips for pattern classification tasks of various dimensionalities. Compared with memristor-based deep learning hardware, the memristor kinetics-based hardware further reduces energy and area consumption significantly. We propose that innovations in hardware physics could offer a compelling route to intelligent models by balancing model complexity and performance.
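To give a flavor of the idea, the following is a minimal toy sketch of how a single memristor's drift-diffusion dynamics could act as a temporal feature extractor. The state-update equation, its coefficients (`alpha`, `tau`), and the pulse patterns are illustrative assumptions, not the device equations or parameters from the paper: the point is only that the device state depends on the *order* of input pulses, not just their sum, so its dynamic response carries features a static weighted sum would miss.

```python
import numpy as np

def memristor_response(pulses, w0=0.1, alpha=0.8, tau=0.9):
    """Toy drift-diffusion memristor state (NOT the paper's model).

    Each input pulse v drives the internal state w toward 1 (drift),
    while w relaxes back toward 0 between pulses (diffusion-like decay).
    Returns the state trace, which serves as a temporal feature vector.
    """
    w = w0
    trace = []
    for v in pulses:
        # drift toward 1 under stimulus, exponential relaxation otherwise
        w = tau * w + alpha * v * (1.0 - w)
        w = float(np.clip(w, 0.0, 1.0))
        trace.append(w)
    return np.array(trace)

# Two patterns with the SAME total input energy but different timing:
# a static sum cannot distinguish them, but the kinetic response can.
trace_a = memristor_response([1, 0, 1, 0])
trace_b = memristor_response([0, 1, 0, 1])
```

In this sketch, a downstream classifier would only need a lightweight linear readout on the state trace, which is one intuition for why a kinetics-based network can use far fewer trained parameters and operations than a deep feature hierarchy.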