Chang Chenliang, Zhu Dongchen, Li Jiamao, Wang Di, Xia Jun, Zhang Xiaolin
Opt Lett. 2022 May 1;47(9):2202-2205. doi: 10.1364/OL.452488.
To compute a high-quality computer-generated hologram (CGH) for a true 3D real scene, a huge amount of 3D data must be physically acquired and provided, depending on specific devices or 3D rendering techniques. Here, we propose a computational framework for generating a CGH from a single image based on the idea of 2D-to-3D wavefront conversion. We devise a deep view-synthesis neural network to synthesize light-field contents from a single image and convert the light-field data to the diffractive wavefront of the hologram using a ray-wave algorithm. The method achieves extremely straightforward 3D CGH generation from readily accessible 2D image content and outperforms existing real-world-based CGH computation, which inevitably relies on a high-cost depth camera and cumbersome 3D data rendering. We experimentally demonstrate 3D reconstructions of indoor and outdoor scenes from single-image-enabled phase-only CGHs.
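The ray-wave idea mentioned in the abstract can be illustrated in simplified form: each synthesized light-field view is treated as a ray bundle propagating along one direction, carried by a correspondingly tilted plane wave, and the complex wavefronts of all views are summed into a single hologram field whose phase is then extracted for a phase-only CGH. The sketch below is not the authors' algorithm; the function name, parameter values, and the plane-wave-carrier encoding are illustrative assumptions.

```python
import numpy as np

def lightfield_to_hologram(views, angles, wavelength=532e-9, pitch=8e-6):
    """Simplified ray-wave-style conversion (illustrative, not the paper's method).

    views  : list of 2D intensity arrays, one per light-field view
    angles : list of (theta_x, theta_y) propagation angles in radians,
             one per view (assumed parameterization)
    Returns the summed complex field and its phase-only encoding.
    """
    H, W = views[0].shape
    y, x = np.mgrid[0:H, 0:W]
    x = (x - W / 2) * pitch          # pixel coordinates in meters
    y = (y - H / 2) * pitch
    k = 2 * np.pi / wavelength       # wavenumber
    field = np.zeros((H, W), dtype=np.complex128)
    for view, (tx, ty) in zip(views, angles):
        # Tilted plane-wave carrier assigns this view's rays a direction.
        carrier = np.exp(1j * k * (np.sin(tx) * x + np.sin(ty) * y))
        field += np.sqrt(np.clip(view, 0, None)) * carrier
    phase_only = np.angle(field)     # phase-only CGH in [-pi, pi]
    return field, phase_only
```

In practice the view amplitudes would come from the deep view-synthesis network and the carrier directions from the light-field parameterization; this sketch only shows how per-view wavefronts superpose into one diffractive field.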