Digital Modeling on Large Kernel Metamaterial Neural Network

Author Information

Liu Quan, Zheng Hanyu, Swartz Brandon T, Lee Ho Hin, Asad Zuhayr, Kravchenko Ivan, Valentine Jason G, Huo Yuankai

Affiliations

Vanderbilt University, Nashville, TN 37212, USA.

Vanderbilt University, Nashville, TN 37212, USA; Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA.

Publication Information

J Imaging Sci Technol. 2023 Nov-Dec;67(6). doi: 10.2352/j.imagingsci.technol.2023.67.6.060404.

Abstract

Deep neural networks (DNNs) are currently deployed on physical computational units (e.g., CPUs and GPUs). Such a design can lead to a heavy computational burden, significant latency, and intensive power consumption, which are critical limitations in applications such as the Internet of Things (IoT), edge computing, and drones. Recent advances in optical computational units (e.g., metamaterials) have shed light on energy-free, light-speed neural networks. However, the digital design of a metamaterial neural network (MNN) is fundamentally constrained by physical limitations such as precision, noise, and bandwidth during fabrication. Moreover, the unique advantages of MNNs (e.g., light-speed computation) are not fully exploited by standard 3×3 convolution kernels. In this paper, we propose a novel large kernel metamaterial neural network (LMNN) that maximizes the digital capacity of the state-of-the-art (SOTA) MNN with model re-parametrization and network compression, while also considering the optical limitations explicitly. The new digital learning scheme maximizes the learning capacity of the MNN while modeling the physical restrictions of meta-optics. With the proposed LMNN, the computational cost of the convolutional front-end can be offloaded into fabricated optical hardware. Experimental results on two publicly available datasets demonstrate that the optimized hybrid design improves classification accuracy while reducing computational latency. The development of the proposed LMNN is a promising step towards the ultimate goal of energy-free, light-speed AI.
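The abstract's "model re-parametrization" relies on the linearity of convolution: parallel branches trained with different kernel sizes can be folded, at deployment time, into a single large kernel suitable for one optical element. The paper does not give code, so the sketch below is a minimal illustration of that general folding identity (the kernel sizes, random inputs, and plain NumPy convolution are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def conv2d(x, k):
    """Plain "valid" 2D cross-correlation (no stride, no dilation)."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
k_large = rng.standard_normal((7, 7))   # hypothetical large kernel
k_small = rng.standard_normal((3, 3))   # hypothetical parallel 3x3 branch

# Training time: two parallel branches with "same" padding, summed.
y_train = conv2d(np.pad(x, 3), k_large) + conv2d(np.pad(x, 1), k_small)

# Deployment time: fold the 3x3 branch into the 7x7 kernel by zero-padding
# it to 7x7 and adding; one large-kernel convolution remains (linearity).
k_merged = k_large + np.pad(k_small, 2)
y_deploy = conv2d(np.pad(x, 3), k_merged)

print(np.allclose(y_train, y_deploy))  # True: the two forms are equivalent
```

The single merged kernel is what could then be realized as one fabricated optical front-end, while the physical constraints the abstract mentions (precision, noise, bandwidth) would enter as additional terms in the digital model.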

Similar Articles

1
Digital Modeling on Large Kernel Metamaterial Neural Network.
J Imaging Sci Technol. 2023 Nov-Dec;67(6). doi: 10.2352/j.imagingsci.technol.2023.67.6.060404.
6
Meta-optic accelerators for object classifiers.
Sci Adv. 2022 Jul 29;8(30):eabo6410. doi: 10.1126/sciadv.abo6410. Epub 2022 Jul 27.
10
Cost-effective stochastic MAC circuits for deep neural networks.
Neural Netw. 2019 Sep;117:152-162. doi: 10.1016/j.neunet.2019.04.017. Epub 2019 May 20.

Cited By

1
Spatially varying nanophotonic neural networks.
Sci Adv. 2024 Nov 8;10(45):eadp0391. doi: 10.1126/sciadv.adp0391.
