
High-accuracy optical convolution unit architecture for convolutional neural networks by cascaded acousto-optical modulator arrays.

Author Information

Xu Shaofu, Wang Jing, Wang Rui, Chen Jiangping, Zou Weiwen

Publication Information

Opt Express. 2019 Jul 8;27(14):19778-19787. doi: 10.1364/OE.27.019778.

Abstract

Optical neural networks (ONNs) have become competitive candidates for the next generation of high-performance neural network accelerators because of their low power consumption and high speed. Beyond the fully-connected neural networks demonstrated in pioneering works, optical computing hardware can also implement convolutional neural networks (CNNs) through hardware reuse. Following this concept, we propose an optical convolution unit (OCU) architecture. By reusing the OCU with different inputs and weights, convolutions of arbitrary input size can be performed. A proof-of-concept experiment is carried out with cascaded acousto-optical modulator arrays. When the neural network parameters are trained ex situ, the OCU performs convolutions with an SDR of up to 28.22 dBc and performs well on inference for typical CNN tasks. Furthermore, we conduct in situ training and obtain a higher SDR of 36.27 dBc, verifying that the OCU can be further refined by in situ training. Besides its effectiveness and high accuracy, the simplified OCU architecture serves as a building block that can be easily duplicated and integrated into future chip-scale optical CNNs.
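To make the hardware-reuse idea concrete, below is a minimal numerical sketch (not the authors' implementation) in which a fixed-size weighted-sum pass, standing in for one invocation of the OCU, is repeated over sliding windows of the input to build up a full 2D convolution output. The function names `ocu_pass` and `conv2d_by_reuse` are illustrative, and the weighted sum is emulated in software rather than optically.

```python
import numpy as np

def ocu_pass(window, weights):
    """One pass of the (hypothetical) convolution unit:
    a fixed-length weighted sum, emulated here numerically."""
    return float(np.dot(window.ravel(), weights.ravel()))

def conv2d_by_reuse(image, kernel):
    """2D 'valid' convolution (CNN-style cross-correlation) computed by
    reusing the same fixed-size unit with different input windows."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output pixel is one reuse of the unit.
            out[i, j] = ocu_pass(image[i:i + kh, j:j + kw], kernel)
    return out

# Example: a 5x5 input and a 3x3 averaging kernel give a 3x3 output.
img = np.arange(25, dtype=float).reshape(5, 5)
k = np.ones((3, 3)) / 9.0
print(conv2d_by_reuse(img, k))
```

Because each output pixel needs only one fixed-size weighted sum, the same physical unit can serve inputs of any size; only the number of passes changes, which is the reuse property the abstract describes.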

