Carvajal Gonzalo, Figueroa Miguel, Sbarbaro Daniel, Valenzuela Waldo
Department of Electrical Engineering, Universidad de Concepción, Concepción, Chile.
IEEE Trans Neural Netw. 2011 Jul;22(7):1046-60. doi: 10.1109/TNN.2011.2136358. Epub 2011 May 27.
Analog very large scale integration implementations of neural networks can compute using a fraction of the size and power required by their digital counterparts. However, intrinsic limitations of analog hardware, such as device mismatch, charge leakage, and noise, reduce the accuracy of analog arithmetic circuits, degrading the performance of large-scale adaptive systems. In this paper, we present a detailed mathematical analysis that relates different parameters of the hardware limitations to specific effects on the convergence properties of linear perceptrons trained with the least-mean-square (LMS) algorithm. Using this analysis, we derive design guidelines and introduce simple on-chip calibration techniques to improve the accuracy of analog neural networks with a small cost in die area and power dissipation. We validate our analysis by evaluating the performance of a mixed-signal complementary metal-oxide-semiconductor implementation of a 32-input perceptron trained with LMS.
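The abstract centers on linear perceptrons trained with the least-mean-square (LMS) algorithm. As background for readers unfamiliar with it, the following is a minimal sketch of the standard LMS update rule (w ← w + μ·e·x, with error e = d − w·x); the function name, learning rate, and data are illustrative and not taken from the paper.

```python
import random

def lms_train(samples, n_inputs, mu=0.05, epochs=50):
    """Train a linear perceptron with the least-mean-square (LMS) rule.

    samples:  list of (x, d) pairs, where x is a list of inputs and
              d is the desired scalar output.
    Each update applies w <- w + mu * e * x, with error e = d - w.x
    """
    w = [0.0] * n_inputs
    for _ in range(epochs):
        for x, d in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))  # perceptron output
            e = d - y                                 # instantaneous error
            w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    return w

# Illustrative usage: recover known weights from noiseless linear data.
random.seed(0)
true_w = [0.5, -1.0, 2.0, 0.25]
data = []
for _ in range(200):
    x = [random.uniform(-1.0, 1.0) for _ in true_w]
    d = sum(wi * xi for wi, xi in zip(true_w, x))
    data.append((x, d))

learned_w = lms_train(data, n_inputs=4)
```

For noiseless linear targets and a small step size μ (relative to the input power), the learned weights converge toward the true ones; the paper's analysis concerns how analog non-idealities such as mismatch and leakage perturb exactly this convergence.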