Chang Chenliang, Wang Di, Zhu Dongchen, Li Jiamao, Xia Jun, Zhang Xiaolin
Opt Lett. 2022 Mar 15;47(6):1482-1485. doi: 10.1364/OL.453580.
We propose a deep-learning-based approach to generating computer-generated holograms (CGHs) of real-world scenes. We design an end-to-end convolutional neural network framework (the Stereo-to-Hologram Network, SHNet) that takes a stereo image pair as input and efficiently synthesizes a monochromatic 3D complex hologram as output. The network rapidly and directly computes CGHs from recorded images of real-world scenes, eliminating the need for time-consuming intermediate depth recovery and diffraction-based computations. We demonstrate 3D reconstructions with clear depth cues from the SHNet-based CGHs in both numerical simulations and optical holographic virtual-reality display experiments.
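The abstract specifies only the network's interface: a stereo image pair goes in, and a monochromatic complex hologram (real and imaginary parts) comes out, with no intermediate depth map or diffraction computation. A minimal sketch of that interface, assuming a toy single-layer convolution in place of the actual (unspecified) SHNet architecture; all weights and function names here are hypothetical illustrations, not the authors' implementation:

```python
import numpy as np

def conv2d(x, w):
    """Valid 2D convolution of a (C, H, W) input with (O, C, kh, kw) weights."""
    o, _, kh, kw = w.shape
    _, h, wd = x.shape
    out = np.zeros((o, h - kh + 1, wd - kw + 1))
    for i in range(out.shape[1]):
        for j in range(out.shape[2]):
            # Contract the kernel with the local input patch for every output channel.
            out[:, i, j] = np.tensordot(w, x[:, i:i + kh, j:j + kw], axes=3)
    return out

def stereo_to_hologram(left, right, weights):
    """Map a stereo pair (grayscale H x W each) straight to a complex hologram.

    The two views are stacked as input channels; the two output channels are
    read as the real and imaginary parts of the hologram, mirroring the
    'stereo pair in, complex hologram out' pipeline the abstract describes.
    No depth map is recovered and no diffraction integral is evaluated.
    """
    stereo = np.stack([left, right])          # (2, H, W) network input
    feat = conv2d(stereo, weights)            # (2, H', W') output channels
    return feat[0] + 1j * feat[1]             # complex-valued hologram

rng = np.random.default_rng(0)
left = rng.random((32, 32))
right = rng.random((32, 32))
w = rng.standard_normal((2, 2, 3, 3)) * 0.1   # hypothetical "learned" kernel
holo = stereo_to_hologram(left, right, w)
print(holo.shape)                             # (30, 30), complex-valued
```

In the paper this mapping is learned end to end by a full CNN; the sketch only makes the data flow concrete, showing why no per-pixel depth estimate or Fresnel propagation step appears anywhere in the forward pass.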