Rahman Md Sadman Sakib, Yang Xilin, Li Jingxi, Bai Bijie, Ozcan Aydogan
Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA.
Bioengineering Department, University of California, Los Angeles, CA, 90095, USA.
Light Sci Appl. 2023 Aug 15;12(1):195. doi: 10.1038/s41377-023-01234-y.
Under spatially coherent light, a diffractive optical network composed of structured surfaces can be designed to perform any arbitrary complex-valued linear transformation between its input and output fields-of-view (FOVs) if the total number (N) of optimizable phase-only diffractive features is ≥ ~2NᵢNₒ, where Nᵢ and Nₒ refer to the number of useful pixels at the input and the output FOVs, respectively. Here we report the design of a spatially incoherent diffractive optical processor that can approximate any arbitrary linear transformation in time-averaged intensity between its input and output FOVs. Under spatially incoherent monochromatic light, the spatially varying intensity point spread function (H) of a diffractive network, corresponding to a given, arbitrarily selected linear intensity transformation, can be written as H(m, n; m', n') = |h(m, n; m', n')|², where h is the spatially coherent point spread function of the same diffractive network, and (m, n) and (m', n') define the coordinates of the output and input FOVs, respectively. Using numerical simulations and deep learning, supervised through examples of input-output profiles, we demonstrate that a spatially incoherent diffractive network can be trained to all-optically perform any arbitrary linear intensity transformation between its input and output FOVs if N ≥ ~2NᵢNₒ. We also report the design of spatially incoherent diffractive networks for linear processing of intensity information at multiple illumination wavelengths, operating simultaneously. Finally, we numerically demonstrate a diffractive network design that performs all-optical classification of handwritten digits under spatially incoherent illumination, achieving a test accuracy of >95%. Spatially incoherent diffractive networks will be broadly useful for designing all-optical visual processors that can work under natural light.
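The relation H(m, n; m', n') = |h(m, n; m', n')|² between the incoherent intensity point spread function and the coherent point spread function can be illustrated with a minimal NumPy sketch (not from the paper; the toy dimensions Ni and No, the random matrix h, and n_realizations are illustrative assumptions). Under spatially incoherent illumination, each input pixel carries a statistically independent random phase; time-averaging the output intensity cancels the cross terms between input pixels, leaving a linear intensity transformation whose matrix is the element-wise |h|².

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative, not from the paper): Ni input pixels, No output pixels.
Ni, No = 16, 12

# An arbitrary complex-valued coherent point spread function, flattened into a No x Ni
# matrix h, so that u_out = h @ u_in for a spatially coherent input field u_in.
h = (rng.standard_normal((No, Ni)) + 1j * rng.standard_normal((No, Ni))) / np.sqrt(Ni)

# Intensity point spread function under spatially incoherent light: H = |h|^2 (element-wise).
H = np.abs(h) ** 2

# A non-negative input intensity pattern.
I_in = rng.uniform(size=Ni)

# Emulate spatial incoherence: each input pixel carries an independent, uniformly
# distributed random phase, and the output intensity is averaged over many realizations.
n_realizations = 50_000
I_out_avg = np.zeros(No)
for _ in range(n_realizations):
    phases = np.exp(1j * 2 * np.pi * rng.uniform(size=Ni))
    u_out = h @ (np.sqrt(I_in) * phases)   # coherent propagation of one phase realization
    I_out_avg += np.abs(u_out) ** 2        # instantaneous output intensity
I_out_avg /= n_realizations

# The time-averaged output intensity converges to the linear intensity transformation
# H @ I_in, i.e., the cross terms between different input pixels average out.
print(np.max(np.abs(I_out_avg - H @ I_in)))   # small; shrinks as n_realizations grows
```

In this picture, the result reported in the abstract is that a phase-only diffractive network with N ≥ ~2NᵢNₒ optimizable features can be trained, through deep learning, so that its |h|² approximates an arbitrarily selected linear intensity transformation between the input and output FOVs.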