Wang Jier, Li Jie, Wu Yifan, Yu Hengwei, Cui Lebei, Sun Miao, Chiang Patrick Yin
State Key Laboratory of ASIC and System, Fudan University, Shanghai 201203, China.
College of Electronics and Information Engineering, Tongji University, Shanghai 201804, China.
Sensors (Basel). 2023 Aug 3;23(15):6927. doi: 10.3390/s23156927.
Light detection and ranging (LiDAR) technology, a cutting-edge advancement in mobile applications, enables a range of compelling use cases, including enhancing low-light photography, capturing and sharing 3D images of interesting objects, and elevating the overall augmented reality (AR) experience. However, its widespread adoption has been hindered by the prohibitive cost and substantial power consumption of its implementation in mobile devices. To surmount these obstacles, this paper proposes a low-power, low-cost, single-photon avalanche diode (SPAD)-based system-on-chip (SoC), packaged with microlens arrays (MLAs) and paired with a lightweight RGB-guided sparse depth completion neural network, for 3D LiDAR imaging. The proposed SoC integrates an 8 × 8 SPAD macropixel array with time-to-digital converters (TDCs) and a charge pump, fabricated in a 180 nm bipolar-CMOS-DMOS (BCD) process. On its own, the SoC serves only as a ranging sensor. A random-MLA-based homogenizing diffuser efficiently transforms Gaussian beams into flat-topped beams with a 45° field of view (FOV), enabling flash projection at the transmitter. To further enhance resolution and broaden the range of applications, a lightweight RGB-guided sparse depth completion neural network is proposed, substantially expanding the image resolution from 8 × 8 to quarter video graphics array level (QVGA; 256 × 256). Experimental results demonstrate the effectiveness and stability of the hardware, encompassing the SoC and optical system, as well as the lightweight design and accuracy of the neural network. The state-of-the-art SoC-plus-network solution offers a promising and inspiring foundation for developing consumer-level 3D imaging applications on mobile devices.
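The core idea behind RGB-guided sparse depth completion can be illustrated with a classical, non-learned stand-in: joint bilateral upsampling, where each high-resolution pixel averages the low-resolution depth samples weighted by spatial distance and by similarity in the RGB guide image, so recovered depth edges follow image edges. This is a minimal sketch under assumed conventions, not the paper's network; the function name, parameters, and resolutions here are illustrative.

```python
import numpy as np

def guided_depth_upsample(depth_lr, rgb_hr, sigma_s=8.0, sigma_r=0.1):
    """Upsample a low-res depth map to the RGB image's resolution with a
    joint bilateral filter: each high-res pixel takes a weighted average
    of all low-res depth samples, weighted by spatial distance (sigma_s,
    in pixels) and by grayscale guide similarity (sigma_r)."""
    h_lr, w_lr = depth_lr.shape
    h_hr, w_hr = rgb_hr.shape[:2]
    # Position of each low-res sample, mapped into high-res pixel space.
    ys = (np.arange(h_lr) + 0.5) * h_hr / h_lr - 0.5
    xs = (np.arange(w_lr) + 0.5) * w_hr / w_lr - 0.5
    gy, gx = np.meshgrid(ys, xs, indexing="ij")          # (h_lr, w_lr)
    guide = rgb_hr.mean(axis=2)                          # grayscale guide in [0, 1]
    # Guide intensity at each low-res sample location.
    g_lr = guide[np.clip(np.round(gy).astype(int), 0, h_hr - 1),
                 np.clip(np.round(gx).astype(int), 0, w_hr - 1)]
    yy, xx = np.meshgrid(np.arange(h_hr), np.arange(w_hr), indexing="ij")
    num = np.zeros((h_hr, w_hr))
    den = np.zeros((h_hr, w_hr))
    for i in range(h_lr):
        for j in range(w_lr):
            d2 = (yy - gy[i, j]) ** 2 + (xx - gx[i, j]) ** 2
            w_spatial = np.exp(-d2 / (2.0 * sigma_s ** 2))
            w_range = np.exp(-(guide - g_lr[i, j]) ** 2 / (2.0 * sigma_r ** 2))
            w = w_spatial * w_range
            num += w * depth_lr[i, j]
            den += w
    return num / (den + 1e-12)

# Toy usage: an 8 x 8 depth map upsampled against a 64 x 64 RGB image.
depth_lr = np.full((8, 8), 2.0)                          # flat scene 2 m away
rgb_hr = np.random.default_rng(0).random((64, 64, 3))
depth_hr = guided_depth_upsample(depth_lr, rgb_hr)
print(depth_hr.shape)  # (64, 64)
```

A learned network like the one proposed in the paper replaces these hand-set Gaussian weights with features trained on RGB-depth pairs, which handles occlusions and textureless regions far better than this fixed filter.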