Xu Shaofu, Wang Jing, Wang Rui, Chen Jiangping, Zou Weiwen
Opt Express. 2019 Jul 8;27(14):19778-19787. doi: 10.1364/OE.27.019778.
Optical neural networks (ONNs) have become competitive candidates for the next generation of high-performance neural network accelerators because of their low power consumption and high speed. Beyond the fully-connected neural networks demonstrated in pioneering works, optical computing hardware can also implement convolutional neural networks (CNNs) through hardware reuse. Following this concept, we propose an optical convolution unit (OCU) architecture. By reusing the OCU architecture with different inputs and weights, convolutions with arbitrary input sizes can be performed. A proof-of-concept experiment is carried out with cascaded acousto-optic modulator arrays. When the neural network parameters are trained ex situ, the OCU conducts convolutions with an SDR of up to 28.22 dBc and performs well on inference for typical CNN tasks. Furthermore, we conduct in situ training and obtain a higher SDR of 36.27 dBc, verifying that the OCU can be further refined by in situ training. Beyond its effectiveness and high accuracy, the simplified OCU architecture, serving as a building block, can be easily duplicated and integrated into future chip-scale optical CNNs.
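The hardware-reuse idea in the abstract can be illustrated in software: a single fixed-size multiply-accumulate unit (a stand-in for one OCU pass) is invoked repeatedly over sliding windows of the input, so convolutions of arbitrary input size need only one small unit. This is a minimal sketch under that assumption; the function names `ocu` and `conv2d_by_reuse` are hypothetical and do not model the optical implementation itself.

```python
import numpy as np

def ocu(patch, weights):
    """One pass of a hypothetical convolution unit: a fixed-size
    multiply-accumulate over a flattened input patch."""
    return float(np.dot(patch.ravel(), weights.ravel()))

def conv2d_by_reuse(image, kernel):
    """'Valid' 2D convolution (CNN-style cross-correlation) of an
    arbitrary-size input, computed by reusing the same unit with
    different input patches -- one reuse per output pixel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = ocu(image[i:i + kh, j:j + kw], kernel)
    return out

# Example: a 4x4 input convolved with a 2x2 kernel yields a 3x3 output,
# produced by nine reuses of the same fixed-size unit.
image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((2, 2))
result = conv2d_by_reuse(image, kernel)
```

Note that, as in most CNN frameworks, the sketch computes cross-correlation (no kernel flip); the reuse principle is identical for true convolution.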