Esmali Nojehdeh Mohammadreza, Altun Mustafa
Electronics and Communication Engineering, Istanbul Technical University, Istanbul, Turkey.
Circuits Syst Signal Process. 2023 Apr 24:1-25. doi: 10.1007/s00034-023-02363-w.
In this paper, we explore efficient hardware implementation of feedforward artificial neural networks (ANNs) using approximate adders and multipliers. Because a fully parallel architecture requires a large area, the ANNs are implemented under a time-multiplexed architecture, where computing resources are reused in the multiply-accumulate (MAC) blocks. Efficient hardware implementation is achieved by replacing the exact adders and multipliers in the MAC blocks with approximate ones, taking hardware accuracy into account. Additionally, an algorithm is proposed to determine the approximation level of the multipliers and adders based on the expected accuracy. As an application, the MNIST and SVHN databases are considered. To examine the efficiency of the proposed method, various ANN architectures and structures are realized. Experimental results show that ANNs designed using the proposed approximate multiplier occupy a smaller area and consume less energy than those designed using prominent previously proposed approximate multipliers. It is also observed that using both approximate adders and multipliers yields up to 50% and 10% reductions in energy consumption and area of the ANN design, respectively, with a small deviation in, or even better, hardware accuracy compared to exact adders and multipliers.
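To make the core idea concrete, the following is a minimal sketch of a time-multiplexed MAC that reuses a single approximate multiplier across all products. The truncation-based approximation and the bit widths used here are illustrative assumptions, not the specific approximate-arithmetic designs proposed in the paper:

```python
# Illustrative sketch only: a truncation-based approximate multiplier inside
# a time-multiplexed MAC loop. The paper's actual approximate adder/multiplier
# circuits are not reproduced here; this merely demonstrates the area/accuracy
# trade-off that approximation exploits.

def approx_multiply(a: int, b: int, trunc_bits: int = 4) -> int:
    """Approximate unsigned multiply: drop the low `trunc_bits` bits of each
    operand before multiplying, then shift the product back. A hardware
    analogue would omit the corresponding partial-product rows/columns."""
    return ((a >> trunc_bits) * (b >> trunc_bits)) << (2 * trunc_bits)

def mac(weights, inputs, trunc_bits: int = 4) -> int:
    """Time-multiplexed MAC: one (approximate) multiplier serves every
    weight-input product in sequence, accumulating into a single register."""
    acc = 0
    for w, x in zip(weights, inputs):
        acc += approx_multiply(w, x, trunc_bits)
    return acc

# Compare against the exact dot product to gauge the relative error.
weights = [200, 57, 133, 90]
inputs = [45, 210, 12, 77]
exact = sum(w * x for w, x in zip(weights, inputs))
approx = mac(weights, inputs)
rel_err = abs(exact - approx) / exact
```

For unsigned operands this truncation always underestimates the product, so the error is one-sided; choosing `trunc_bits` per layer is analogous to the accuracy-driven approximation-level selection the paper describes.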