Suppr 超能文献



Universal linear intensity transformations using spatially incoherent diffractive processors.

Author information

Rahman Md Sadman Sakib, Yang Xilin, Li Jingxi, Bai Bijie, Ozcan Aydogan

Affiliations

Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA.

Bioengineering Department, University of California, Los Angeles, CA, 90095, USA.

Publication information

Light Sci Appl. 2023 Aug 15;12(1):195. doi: 10.1038/s41377-023-01234-y.

DOI: 10.1038/s41377-023-01234-y
PMID: 37582771
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10427714/
Abstract

Under spatially coherent light, a diffractive optical network composed of structured surfaces can be designed to perform any arbitrary complex-valued linear transformation between its input and output fields-of-view (FOVs) if the total number (N) of optimizable phase-only diffractive features is N ≥ ~2NiNo, where Ni and No refer to the number of useful pixels at the input and the output FOVs, respectively. Here we report the design of a spatially incoherent diffractive optical processor that can approximate any arbitrary linear transformation in time-averaged intensity between its input and output FOVs. Under spatially incoherent monochromatic light, the spatially varying intensity point spread function (H) of a diffractive network, corresponding to a given, arbitrarily-selected linear intensity transformation, can be written as H(m, n; m', n') = |h(m, n; m', n')|², where h is the spatially coherent point spread function of the same diffractive network, and (m, n) and (m', n') define the coordinates of the output and input FOVs, respectively. Using numerical simulations and deep learning, supervised through examples of input-output profiles, we demonstrate that a spatially incoherent diffractive network can be trained to all-optically perform any arbitrary linear intensity transformation between its input and output if N ≥ ~2NiNo. We also report the design of spatially incoherent diffractive networks for linear processing of intensity information at multiple illumination wavelengths, operating simultaneously. Finally, we numerically demonstrate a diffractive network design that performs all-optical classification of handwritten digits under spatially incoherent illumination, achieving a test accuracy of >95%. Spatially incoherent diffractive networks will be broadly useful for designing all-optical visual processors that can work under natural light.
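The core relationship in the abstract — that under spatially incoherent illumination the time-averaged output intensity is linear in the input intensity, with kernel H = |h|² — can be illustrated numerically. The following NumPy sketch uses a random complex matrix as a stand-in for a diffractive network's coherent transform (it is not the paper's trained model): averaging the output intensity over many random input-phase realizations recovers H applied to the input intensities, because the cross terms between different input pixels average to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

Ni, No = 8, 6  # number of useful input / output pixels (toy sizes)

# Random complex matrix standing in for the coherent point spread function h.
h = rng.normal(size=(No, Ni)) + 1j * rng.normal(size=(No, Ni))
H = np.abs(h) ** 2              # incoherent intensity kernel, H = |h|^2

x = rng.uniform(0.5, 2.0, size=Ni)  # input intensity pattern

# Spatial incoherence model: each input pixel carries an independent,
# uniformly distributed random phase; output intensity is time-averaged.
T = 200_000
phases = rng.uniform(0.0, 2.0 * np.pi, size=(T, Ni))
fields_in = np.sqrt(x) * np.exp(1j * phases)      # (T, Ni) field realizations
fields_out = fields_in @ h.T                      # coherent propagation per realization
I_out = np.mean(np.abs(fields_out) ** 2, axis=0)  # time-averaged output intensity

# The Monte-Carlo average matches the linear intensity map H @ x.
print(np.max(np.abs(I_out - H @ x) / (H @ x)))
```

The residual between the time-averaged intensity and H @ x shrinks as the number of phase realizations grows (roughly as 1/√T), mirroring the time-averaging assumed in the paper.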


Figures 1–16 (PMC10427714):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf7b/10427714/4bd680eff417/41377_2023_1234_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf7b/10427714/9fdee567f319/41377_2023_1234_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf7b/10427714/1aa2cd4c2bd8/41377_2023_1234_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf7b/10427714/9a7465f3ae60/41377_2023_1234_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf7b/10427714/61959a5211b0/41377_2023_1234_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf7b/10427714/5eaa5ad3df19/41377_2023_1234_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf7b/10427714/e910bd268017/41377_2023_1234_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf7b/10427714/9a0541a48c40/41377_2023_1234_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf7b/10427714/44f8f08fd15d/41377_2023_1234_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf7b/10427714/e9c6a8d07050/41377_2023_1234_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf7b/10427714/3c0b21169195/41377_2023_1234_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf7b/10427714/9a2b93373b3b/41377_2023_1234_Fig12_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf7b/10427714/583b1871f9e0/41377_2023_1234_Fig13_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf7b/10427714/a2a7194c1ca5/41377_2023_1234_Fig14_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf7b/10427714/167446bfcbfb/41377_2023_1234_Fig15_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bf7b/10427714/fa59a9ff2854/41377_2023_1234_Fig16_HTML.jpg

Similar articles

1. Universal linear intensity transformations using spatially incoherent diffractive processors. Light Sci Appl. 2023 Aug 15;12(1):195. doi: 10.1038/s41377-023-01234-y.
2. All-optical synthesis of an arbitrary linear transformation using diffractive surfaces. Light Sci Appl. 2021 Sep 24;10(1):196. doi: 10.1038/s41377-021-00623-5.
3. Polarization multiplexed diffractive computing: all-optical implementation of a group of linear transformations through a polarization-encoded diffractive network. Light Sci Appl. 2022 May 26;11(1):153. doi: 10.1038/s41377-022-00849-x.
4. Universal Polarization Transformations: Spatial Programming of Polarization Scattering Matrices Using a Deep Learning-Designed Diffractive Polarization Transformer. Adv Mater. 2023 Dec;35(51):e2303395. doi: 10.1002/adma.202303395. Epub 2023 Nov 10.
5. All-optical information-processing capacity of diffractive surfaces. Light Sci Appl. 2021 Jan 28;10(1):25. doi: 10.1038/s41377-020-00439-9.
6. Data-Class-Specific All-Optical Transformations and Encryption. Adv Mater. 2023 Aug;35(31):e2212091. doi: 10.1002/adma.202212091. Epub 2023 Jun 20.
7. Classification and reconstruction of spatially overlapping phase images using diffractive optical networks. Sci Rep. 2022 May 19;12(1):8446. doi: 10.1038/s41598-022-12020-y.
8. Design of task-specific optical systems using broadband diffractive neural networks. Light Sci Appl. 2019 Dec 2;8:112. doi: 10.1038/s41377-019-0223-1. eCollection 2019.
9. Pyramid diffractive optical networks for unidirectional image magnification and demagnification. Light Sci Appl. 2024 Jul 31;13(1):178. doi: 10.1038/s41377-024-01543-w.
10. All-optical image classification through unknown random diffusers using a single-pixel diffractive network. Light Sci Appl. 2023 Mar 9;12(1):69. doi: 10.1038/s41377-023-01116-3.

Cited by

1. Broadband unidirectional visible imaging using wafer-scale nano-fabrication of multi-layer diffractive optical processors. Light Sci Appl. 2025 Aug 11;14(1):267. doi: 10.1038/s41377-025-01971-2.
2. Universal point spread function engineering for 3D optical information processing. Light Sci Appl. 2025 Jun 12;14(1):212. doi: 10.1038/s41377-025-01887-x.
3. Photonic diffractive generators through sampling noises from scattering media. Nat Commun. 2024 Dec 6;15(1):10643. doi: 10.1038/s41467-024-55058-4.

References

1. Mechanical-scan-free multicolor super-resolution imaging with diffractive spot array illumination. Nat Commun. 2024 May 16;15(1):4135. doi: 10.1038/s41467-024-48482-z.
2. Data-Class-Specific All-Optical Transformations and Encryption. Adv Mater. 2023 Aug;35(31):e2212091. doi: 10.1002/adma.202212091. Epub 2023 Jun 20.
3. Direct retrieval of Zernike-based pupil functions using integrated diffractive deep neural networks. Nat Commun. 2022 Dec 7;13(1):7531. doi: 10.1038/s41467-022-35349-4.
4. Opto-intelligence spectrometer using diffractive neural networks. Nanophotonics. 2024 Jul 2;13(20):3883-3893. doi: 10.1515/nanoph-2024-0233. eCollection 2024 Aug.
5. Information processing at the speed of light. Front Optoelectron. 2024 Sep 29;17(1):33. doi: 10.1007/s12200-024-00133-3.
6. Optical neural networks: progress and challenges. Light Sci Appl. 2024 Sep 20;13(1):263. doi: 10.1038/s41377-024-01590-3.
7. Pyramid diffractive optical networks for unidirectional image magnification and demagnification. Light Sci Appl. 2024 Jul 31;13(1):178. doi: 10.1038/s41377-024-01543-w.
8. Nonlinear encoding in diffractive information processing using linear optical materials. Light Sci Appl. 2024 Jul 23;13(1):173. doi: 10.1038/s41377-024-01529-8.
9. Nanowatt all-optical 3D perception for mobile robotics. Sci Adv. 2024 Jul 5;10(27):eadn2031. doi: 10.1126/sciadv.adn2031.
10. Information-hiding cameras: Optical concealment of object information into ordinary images. Sci Adv. 2024 Jun 14;10(24):eadn9420. doi: 10.1126/sciadv.adn9420. Epub 2024 Jun 12.
11. Metasurface-enabled on-chip multiplexed diffractive neural networks in the visible. Light Sci Appl. 2022 May 27;11(1):158. doi: 10.1038/s41377-022-00844-2.
12. Polarization multiplexed diffractive computing: all-optical implementation of a group of linear transformations through a polarization-encoded diffractive network. Light Sci Appl. 2022 May 26;11(1):153. doi: 10.1038/s41377-022-00849-x.
13. All-optical synthesis of an arbitrary linear transformation using diffractive surfaces. Light Sci Appl. 2021 Sep 24;10(1):196. doi: 10.1038/s41377-021-00623-5.
14. Spectrally encoded single-pixel machine vision using diffractive networks. Sci Adv. 2021 Mar 26;7(13). doi: 10.1126/sciadv.abd7690. Print 2021 Mar.
15. Ensemble learning of diffractive optical networks. Light Sci Appl. 2021 Jan 11;10(1):14. doi: 10.1038/s41377-020-00446-w.
16. Terahertz pulse shaping using diffractive surfaces. Nat Commun. 2021 Jan 4;12(1):37. doi: 10.1038/s41467-020-20268-z.
17. Analysis of Diffractive Optical Neural Networks and Their Integration with Electronic Neural Networks. IEEE J Sel Top Quantum Electron. 2020 Jan-Feb;26(1). doi: 10.1109/JSTQE.2019.2921376. Epub 2019 Jun 6.