Wang Huayi, Bian Liheng, Zhang Jun
Opt Express. 2021 Feb 15;29(4):4866-4874. doi: 10.1364/OE.416481.
Single-pixel imaging (SPI) has drawn wide attention due to its high signal-to-noise ratio and wide working spectrum, providing a feasible solution when array sensors are expensive or unavailable. In conventional SPI, the target's depth information is lost during acquisition because of the 3D-to-1D projection. In this work, we report an efficient depth acquisition method that enables existing SPI systems to obtain both reflectance and depth information without any additional hardware. The technique employs a multiplexed illumination strategy containing both random and sinusoidal codes, which simultaneously encode the target's spatial and depth information into a single measurement sequence. In the reconstruction phase, we build a convolutional neural network to decode both spatial and depth information from the 1D measurements. Compared to the conventional scene acquisition method, the end-to-end deep-learning reconstruction reduces both the sampling ratio (30%) and the computational complexity (two orders of magnitude). Both simulations and experiments validate the method's effectiveness and high efficiency for additional depth acquisition in single-pixel imaging without additional hardware.
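To make the measurement-and-decoding pipeline concrete, the following is a minimal sketch of the idea described in the abstract: multiplexed illumination patterns (random codes modulated by a sinusoidal fringe) produce a 1D sequence of single-pixel measurements, and a small convolutional network maps that sequence back to reflectance and depth maps. All sizes, the depth-to-phase mapping, the fringe frequency, and the network layout are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a multiplexed single-pixel measurement model and a
# CNN decoder; resolution, fringe frequency, and architecture are assumptions.
import numpy as np
import torch
import torch.nn as nn

H = W = 32               # assumed spatial resolution of the illumination patterns
M = int(0.30 * H * W)    # number of patterns at the 30% sampling ratio stated in the abstract

rng = np.random.default_rng(0)
random_codes = rng.integers(0, 2, size=(M, H, W)).astype(np.float32)  # binary random patterns
x = np.arange(W, dtype=np.float32)

def measure(reflectance, depth):
    """Simulate the 1D single-pixel measurement sequence.

    The depth map phase-shifts a sinusoidal fringe, so depth modulates the
    measurements alongside reflectance (a simplified stand-in for the optical
    encoding described in the paper).
    """
    phase = 2 * np.pi * depth                                        # assumed depth-to-phase mapping
    fringes = 0.5 + 0.5 * np.cos(2 * np.pi * 4 * x[None, :] / W + phase)  # fringe frequency of 4 is arbitrary
    scene = reflectance * fringes
    return (random_codes * scene[None]).sum(axis=(1, 2))             # one scalar per illumination pattern

class Decoder(nn.Module):
    """Small end-to-end decoder: 1D measurements -> reflectance + depth maps."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(M, H * W)                # lift 1D measurements to an image grid
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),          # two output channels: reflectance and depth
        )

    def forward(self, y):
        img = self.fc(y).view(-1, 1, H, W)
        return self.net(img)

# Example forward pass with a synthetic scene
reflectance = rng.random((H, W), dtype=np.float32)
depth = rng.random((H, W), dtype=np.float32)
y = torch.from_numpy(measure(reflectance, depth)).float().unsqueeze(0)
out = Decoder()(y)       # out[:, 0] ~ reflectance estimate, out[:, 1] ~ depth estimate
print(out.shape)         # torch.Size([1, 2, 32, 32])
```

In practice the decoder would be trained end to end on pairs of simulated measurements and ground-truth reflectance/depth maps; the sketch only shows the forward shapes involved.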