Physics to the Rescue: Deep Non-Line-of-Sight Reconstruction for High-Speed Imaging.

Author Information

Mu Fangzhou, Mo Sicheng, Peng Jiayong, Liu Xiaochun, Nam Ji Hyun, Raghavan Siddeshwar, Velten Andreas, Li Yin

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2025 Aug;47(8):6146-6158. doi: 10.1109/TPAMI.2022.3203383.

Abstract

A computational approach to imaging around corners, or non-line-of-sight (NLOS) imaging, is becoming a reality thanks to major advances in imaging hardware and reconstruction algorithms. In a recent development towards practical NLOS imaging, Nam et al. (2021) demonstrated a high-speed non-confocal imaging system that operates at 5 Hz, 100x faster than the prior art. This enormous gain in acquisition rate, however, necessitates numerous approximations in light transport, breaking many existing NLOS reconstruction methods that assume an idealized image formation model. To bridge the gap, we present a novel deep model that incorporates the complementary physics priors of wave propagation and volume rendering into a neural network for high-quality and robust NLOS reconstruction. This orchestrated design regularizes the solution space by relaxing the image formation model, resulting in a deep model that generalizes well on real captures despite being trained exclusively on synthetic data. Further, we devise a unified learning framework that enables our model to be flexibly trained using diverse supervision signals, including target intensity images or even raw NLOS transient measurements. Once trained, our model renders both intensity and depth images at inference time in a single forward pass, capable of processing more than 5 captures per second on a high-end GPU. Through extensive qualitative and quantitative experiments, we show that our method outperforms prior physics- and learning-based approaches on both synthetic and real measurements. We anticipate that our method, along with the fast capturing system, will accelerate future development of NLOS imaging for real-world applications that require high-speed imaging.
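
The abstract describes the model only at a high level: physics priors for wave propagation and volume rendering are embedded in a neural network that outputs intensity and depth images in a single forward pass. As a rough illustration of that general data flow (not the authors' architecture), the PyTorch sketch below encodes a transient measurement, applies a toy frequency-domain filter as a stand-in for a wave-propagation operator, and volume-renders per-pixel intensity and expected depth. All module names, shapes, and the filter itself are assumptions made for illustration.

```python
# Minimal, hypothetical sketch of the pipeline outlined in the abstract:
# encode the raw transient, apply a physics-inspired propagation step,
# then volume-render intensity and depth in one forward pass.
import torch
import torch.nn as nn


class ToyNLOSReconstructor(nn.Module):
    def __init__(self, feat: int = 8):
        super().__init__()
        # Lightweight encoder over the transient measurement (time x scan grid).
        self.encoder = nn.Conv3d(1, feat, kernel_size=3, padding=1)
        # Maps the propagated feature volume to per-voxel density and radiance.
        self.head = nn.Conv3d(feat, 2, kernel_size=1)

    def propagate(self, feats: torch.Tensor) -> torch.Tensor:
        # Stand-in "wave propagation" prior: a fixed low-pass filter along the
        # temporal axis in the frequency domain (placeholder for a diffraction operator).
        spec = torch.fft.rfft(feats, dim=2)
        freqs = torch.arange(spec.shape[2], device=feats.device, dtype=feats.dtype)
        filt = torch.exp(-0.01 * freqs).view(1, 1, -1, 1, 1)
        return torch.fft.irfft(spec * filt, n=feats.shape[2], dim=2)

    def forward(self, transient: torch.Tensor):
        # transient: (B, 1, T, H, W) histogram of photon arrival times per scan point.
        feats = self.propagate(self.encoder(transient))
        vol = self.head(feats)                          # (B, 2, T, H, W)
        density, radiance = vol[:, 0], vol[:, 1]        # (B, T, H, W) each
        # Volume rendering along the temporal/depth axis: softmax weights locate surfaces.
        weights = torch.softmax(density, dim=1)
        intensity = (weights * radiance).sum(dim=1)     # (B, H, W)
        z = torch.linspace(0.0, 1.0, transient.shape[2], device=transient.device)
        depth = (weights * z.view(1, -1, 1, 1)).sum(dim=1)  # expected depth per pixel
        return intensity, depth


if __name__ == "__main__":
    model = ToyNLOSReconstructor()
    x = torch.randn(2, 1, 128, 32, 32)  # synthetic transient stand-in
    intensity, depth = model(x)
    print(intensity.shape, depth.shape)  # torch.Size([2, 32, 32]) twice
```

In the paper's setting, the propagation prior is presumably a physically grounded wave-propagation operator rather than this toy filter, and the whole network is trained on synthetic transients with the flexible supervision described above; the sketch only mirrors the overall structure of propagating features and then volume-rendering intensity and depth.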

