School of Electrical Engineering, Graphene/2D Materials Research Center, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea.
Department of Nanotechnology and Advanced Materials Engineering, Sejong University, 209 Neungdong-ro, Gwangjin-gu, Seoul, 05006, Republic of Korea.
Adv Mater. 2023 Jun;35(24):e2300023. doi: 10.1002/adma.202300023. Epub 2023 Apr 27.
With advances in artificial intelligence services, brain-inspired neuromorphic systems with synaptic devices have recently attracted significant interest as a way to circumvent the von Neumann bottleneck. However, the growing number of deep-neural-network parameters causes huge power consumption and a large area overhead in nonlinear neuron electronic circuits, and it also incurs the vanishing gradient problem. Here, a compact and energy-efficient memristor-based neuron device is presented that implements a rectified linear unit (ReLU) activation function. To emulate the volatile and gradual switching of the ReLU function, a copolymer memristor with a hybrid copolymer/inorganic bilayer structure is proposed. The functional copolymer film, developed by introducing imidazole functional groups, enables the formation of nanocluster-type pseudo-conductive filaments by promoting the nucleation of Cu nanoclusters, which yields gradual switching. The ReLU neuron device is successfully demonstrated by integrating the memristor with amorphous InGaZnO thin-film transistors, and it achieves 0.5 pJ energy consumption with a sub-10 µA operating current and high-speed switching of 650 ns. Furthermore, device-to-system-level simulation of the neuron devices on the MNIST dataset shows that the vanishing gradient problem is effectively resolved in five-layer deep neural networks. The proposed neuron device will enable high-density, energy-efficient hardware neuromorphic systems.
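The abstract contrasts the ReLU activation with conventional saturating nonlinear neurons in terms of the vanishing gradient. A minimal Python sketch (illustrative only, not from the paper) of why ReLU's unit gradient avoids the layer-by-layer attenuation that saturating activations such as the sigmoid cause in deep networks:

```python
import math

def relu(x):
    # ReLU passes positive inputs unchanged and clamps negatives to zero
    return x if x > 0.0 else 0.0

def relu_grad(x):
    # Gradient is 1 for any positive input, so backpropagated error is not attenuated
    return 1.0 if x > 0.0 else 0.0

def sigmoid_grad(x):
    # Sigmoid gradient never exceeds 0.25, so it shrinks the error signal at every layer
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)

# Backpropagating through a chain of 5 active units (matching the paper's
# five-layer network) multiplies the per-layer gradients together:
print(relu_grad(2.0) ** 5)     # stays 1.0
print(sigmoid_grad(2.0) ** 5)  # collapses toward zero
```

The chained product is the core of the vanishing-gradient argument: with a saturating activation the product of per-layer derivatives decays geometrically with depth, whereas ReLU's derivative of 1 on its active region leaves the gradient magnitude intact.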