IEEE Trans Cybern. 2021 Apr;51(4):1875-1887. doi: 10.1109/TCYB.2019.2912205. Epub 2021 Mar 17.
Existing deep neural networks (DNNs) are computationally expensive and memory intensive, which hinders their deployment in novel nanoscale devices and in applications with limited memory resources or strict latency requirements. In this paper, a novel approach to accelerating on-chip learning systems using memristive quantized neural networks (M-QNNs) is presented. A practical problem of multilevel memristive synaptic weights caused by device-to-device (D2D) and cycle-to-cycle (C2C) variations is considered: different levels of Gaussian noise are added to the memristive model during each weight adjustment. An alternative method of building M-QNNs from memristors with binary states is also presented, which suffers less from D2D and C2C variations than multilevel memristors. Furthermore, methods for mitigating sneak-path issues in memristive crossbar arrays are proposed. The M-QNN approach is evaluated on two image classification datasets, that is, a ten-digit number dataset and the Modified National Institute of Standards and Technology (MNIST) dataset of handwritten images. In addition, input images corrupted by different levels of zero-mean Gaussian noise are tested to verify the robustness of the proposed method. Another highlight of the proposed method is that it significantly reduces computation time and memory usage during image recognition.
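The abstract describes programming quantized synaptic weights onto memristors whose conductance is perturbed by D2D and C2C variations, modeled as Gaussian noise added at each weight adjustment. The following is a minimal sketch of that simulation idea; the function names, the weight range, and the noise magnitudes are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_weights(w, levels=4, w_min=-1.0, w_max=1.0):
    """Map continuous weights to the nearest of `levels` discrete
    conductance states (hypothetical range [w_min, w_max])."""
    states = np.linspace(w_min, w_max, levels)
    idx = np.abs(w[..., None] - states).argmin(axis=-1)
    return states[idx]

def memristive_write(w_target, levels=4, sigma_d2d=0.02, sigma_c2c=0.01):
    """Simulate programming quantized weights onto memristors.

    Each write lands on the nearest discrete level, perturbed by a
    fixed per-device Gaussian offset (D2D) plus fresh Gaussian noise
    drawn on every write cycle (C2C). Noise scales are assumptions.
    """
    q = quantize_weights(w_target, levels)
    d2d = rng.normal(0.0, sigma_d2d, size=q.shape)  # per-device spread
    c2c = rng.normal(0.0, sigma_c2c, size=q.shape)  # per-cycle noise
    return q + d2d + c2c

w = rng.uniform(-1, 1, size=(4, 4))
w_multi = memristive_write(w, levels=8)    # multilevel memristor
w_binary = memristive_write(w, levels=2)   # binary-state memristor
```

With `levels=2` the quantizer reproduces the binary-state variant the abstract contrasts against multilevel devices; only the two extreme conductance states are used, so the impact of level-to-level variation is reduced.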