Rizvi Saad, Cao Jie, Zhang Kaiyu, Hao Qun
School of Optics and Photonics, Beijing Institute of Technology, Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, Beijing, 100081, China.
Sci Rep. 2020 Jul 9;10(1):11400. doi: 10.1038/s41598-020-68401-8.
The potential of random-pattern-based computational ghost imaging (CGI) for real-time applications has been offset by its long image reconstruction time and inefficient reconstruction of complex, diverse scenes. To overcome these problems, we propose a fast image reconstruction framework for CGI, called "DeepGhost", which uses a deep convolutional autoencoder network to achieve real-time imaging at very low sampling rates (10-20%). By transferring prior knowledge from the STL-10 dataset to a physical-data-driven network, the proposed framework can reconstruct complex unseen targets with high accuracy. Experimental results show that the proposed method outperforms existing deep-learning and state-of-the-art compressed-sensing methods used for ghost imaging under similar conditions. The proposed method employs a deep architecture with fast computation and addresses the shortcomings of existing schemes, namely inappropriate architectures, training on limited data under controlled settings, and reliance on shallow networks for fast computation.
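To make the pipeline described in the abstract concrete, the following is a minimal sketch, not the authors' released code, of how a random-pattern CGI measurement at a low sampling rate can be paired with a convolutional autoencoder that denoises the coarse correlation estimate. The names DeepGhostSketch and simulate_cgi, the layer sizes, and the 96x96 image size (chosen to match STL-10) are illustrative assumptions rather than the published DeepGhost architecture.

```python
# Hypothetical sketch of CGI reconstruction followed by a convolutional
# autoencoder, assuming PyTorch. Not the authors' DeepGhost implementation.
import torch
import torch.nn as nn


def simulate_cgi(image, n_patterns):
    """Simulate CGI: correlate random binary patterns with the scene.

    image: (H, W) tensor in [0, 1]; returns a coarse correlation-based
    reconstruction from the simulated bucket-detector signals.
    """
    h, w = image.shape
    patterns = (torch.rand(n_patterns, h, w) > 0.5).float()
    buckets = (patterns * image).sum(dim=(1, 2))        # bucket detector values
    fluct_b = buckets - buckets.mean()                  # signal fluctuations
    fluct_p = patterns - patterns.mean(dim=0)           # pattern fluctuations
    recon = (fluct_b[:, None, None] * fluct_p).mean(dim=0)
    return (recon - recon.min()) / (recon.max() - recon.min() + 1e-8)


class DeepGhostSketch(nn.Module):
    """Encoder-decoder CNN: compress the noisy CGI estimate, then decode."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    # Stand-in 96x96 scene; a ~15% sampling rate means roughly 0.15 * 96 * 96 patterns.
    target = torch.rand(96, 96)
    coarse = simulate_cgi(target, n_patterns=int(0.15 * 96 * 96))
    model = DeepGhostSketch()
    output = model(coarse[None, None])                  # shape (1, 1, 96, 96)
    loss = nn.functional.mse_loss(output, target[None, None])
    loss.backward()
```

In the transfer-learning setting the abstract describes, such a network would first be pretrained on STL-10-style natural images with simulated measurements and then fine-tuned on physically acquired CGI data before being used for real-time reconstruction.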