Mengu Deniz, Veli Muhammed, Rivenson Yair, Ozcan Aydogan
Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, 90095, USA.
Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, 90095, USA.
Sci Rep. 2022 May 19;12(1):8446. doi: 10.1038/s41598-022-12020-y.
Diffractive optical networks unify wave optics and deep learning to all-optically compute a given machine learning or computational imaging task as the light propagates from the input to the output plane. Here, we report the design of diffractive optical networks for the classification and reconstruction of spatially overlapping, phase-encoded objects. When two different phase-only objects spatially overlap, the individual object functions are perturbed since their phase patterns are summed up. The retrieval of the underlying phase images solely from the overlapping phase distribution presents a challenging problem, the solution of which is generally not unique. We show that, through a task-specific training process, passive diffractive optical networks composed of successive transmissive layers can all-optically and simultaneously classify two different randomly selected, spatially overlapping phase images at the input. After being trained with ~550 million unique combinations of phase-encoded handwritten digits from the MNIST dataset, our blind testing results reveal that the diffractive optical network achieves an accuracy of >85.8% for the all-optical classification of two overlapping phase images of new handwritten digits. In addition to the all-optical classification of overlapping phase objects, we also demonstrate the reconstruction of these phase images using a shallow electronic neural network that takes the highly compressed output of the diffractive optical network as its input (with, e.g., ~20-65 times fewer pixels) to rapidly reconstruct both phase images, despite their spatial overlap and the related phase ambiguity. The presented phase image classification and reconstruction framework might find applications in, e.g., computational imaging, microscopy, and quantitative phase imaging.
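The abstract describes a forward model in which two phase-only objects overlap (their phase patterns add), the resulting field passes through several trainable, passive phase-only layers, and class scores are read out as optical intensities at the output plane. The following is a minimal, illustrative sketch of that kind of simulation; it is not the authors' implementation, and the wavelength, pixel pitch, layer spacing, grid size, layer count, detector layout, and the simple 10-class readout (rather than the paper's two-digit classification scheme) are all assumptions made for illustration.

```python
# Illustrative sketch (not the authors' code) of a phase-only diffractive network:
# overlapping phase objects -> several trainable phase layers -> intensity readout.
import torch
import torch.nn as nn
import torch.fft as fft

# Assumed simulation parameters (grid size, wavelength [m], pixel pitch [m], layer gap [m]).
N, WAVELENGTH, PITCH, Z = 64, 750e-9, 400e-9, 40e-6

def angular_spectrum(field, z, wavelength=WAVELENGTH, pitch=PITCH):
    """Free-space propagation of a complex field over distance z (angular spectrum method)."""
    n = field.shape[-1]
    fx = fft.fftfreq(n, d=pitch)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * torch.pi * torch.sqrt(torch.clamp(arg, min=0.0))
    H = torch.exp(1j * kz * z) * (arg > 0)  # drop evanescent components
    return fft.ifft2(fft.fft2(field) * H)

class DiffractiveClassifier(nn.Module):
    """A stack of passive, phase-only transmissive layers followed by intensity readout."""
    def __init__(self, num_layers=5, n=N):
        super().__init__()
        self.phases = nn.ParameterList(
            [nn.Parameter(torch.zeros(n, n)) for _ in range(num_layers)]
        )

    def forward(self, phase_object):
        field = torch.exp(1j * phase_object)        # phase-encoded input, unit amplitude
        for phi in self.phases:
            field = angular_spectrum(field, Z)
            field = field * torch.exp(1j * phi)     # phase-only modulation at each layer
        field = angular_spectrum(field, Z)
        intensity = field.abs() ** 2
        # Integrate intensity over 10 hypothetical detector regions (a 2x5 grid of patches).
        n = intensity.shape[-1]
        ph, pw = n // 2, n // 5
        scores = [
            intensity[..., r * ph:(r + 1) * ph, c * pw:(c + 1) * pw].sum(dim=(-2, -1))
            for r in range(2) for c in range(5)
        ]
        return torch.stack(scores, dim=-1)

# Two overlapping phase-only objects: their phase patterns simply add at the input plane.
digit_a = torch.rand(N, N) * torch.pi  # placeholder for a phase-encoded MNIST digit
digit_b = torch.rand(N, N) * torch.pi
overlapping_input = digit_a + digit_b

model = DiffractiveClassifier()
scores = model(overlapping_input)
print(scores.shape)  # torch.Size([10])
```

In an actual training loop, the layer phases would be optimized (e.g., with a cross-entropy loss over the detector intensities) across many random pairs of overlapping digits, which is the task-specific training process the abstract refers to.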