

Aberration-robust monocular passive depth sensing using a meta-imaging camera.

Author Information

Cao Zhexuan, Li Ning, Zhu Laiyu, Wu Jiamin, Dai Qionghai, Qiao Hui

Affiliations

Department of Automation, Tsinghua University, Beijing, 100084, China.

Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, 100084, China.

Publication Information

Light Sci Appl. 2024 Sep 5;13(1):236. doi: 10.1038/s41377-024-01609-9.

Abstract

Depth sensing plays a crucial role in various applications, including robotics, augmented reality, and autonomous driving. Monocular passive depth sensing techniques have come into their own for their cost-effectiveness and compact design, offering an alternative to expensive and bulky active depth sensors and stereo vision systems. While the light-field camera can address the defocus ambiguity inherent in 2D cameras and achieve unambiguous depth perception, it compromises spatial resolution and usually struggles with the effects of optical aberrations. In contrast, our previously proposed meta-imaging sensor overcomes these hurdles by reconciling the spatial-angular resolution trade-off and achieving multi-site aberration correction for high-resolution imaging. Here, we present a compact meta-imaging camera and an analytical framework for quantifying monocular depth sensing precision by calculating the Cramér-Rao lower bound of depth estimation. Quantitative evaluations reveal that the meta-imaging camera exhibits not only higher precision over a broader depth range than the light-field camera but also superior robustness against changes in signal-background ratio. Moreover, both simulation and experimental results demonstrate that the meta-imaging camera maintains the capability to provide precise depth information even in the presence of aberrations. Given its promising compatibility with other point-spread-function engineering methods, we anticipate that the meta-imaging camera may facilitate the advancement of monocular passive depth sensing in various applications.
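The precision analysis described above rests on the Cramér-Rao lower bound (CRLB) of depth estimation. As a rough illustration of that idea only, and not the paper's actual framework, the sketch below computes a generic CRLB for depth under Poisson shot-noise statistics: the Fisher information is summed over pixels as (dμ_i/dz)² / μ_i, and its inverse bounds the variance of any unbiased depth estimator. The function names, the Gaussian toy defocus model, the signal level, and the background term are all hypothetical placeholders.

```python
# Hedged sketch: a generic CRLB calculation for depth estimation under Poisson
# (shot-noise) statistics. Illustrative only; the PSF model, sampling, and
# background term do not reproduce the paper's meta-imaging framework.
import numpy as np


def crlb_depth(mu_of_z, z, dz=1e-3, background=0.0):
    """Return the CRLB (variance bound) on depth z for a Poisson image model.

    mu_of_z   : callable mapping depth z -> expected photon counts per pixel
                (1D numpy array), e.g. a PSF model scaled by the signal level.
    z         : depth at which to evaluate the bound.
    dz        : finite-difference step for d(mu)/dz.
    background: constant background counts per pixel (hypothetical term).
    """
    mu = mu_of_z(z) + background
    dmu_dz = (mu_of_z(z + dz) - mu_of_z(z - dz)) / (2.0 * dz)
    # Fisher information for Poisson data: I(z) = sum_i (d mu_i / dz)^2 / mu_i
    fisher = np.sum(dmu_dz**2 / mu)
    return 1.0 / fisher  # variance bound; its square root is the precision limit


if __name__ == "__main__":
    # Toy defocus model: a 1D Gaussian spot whose width grows away from focus.
    pixels = np.arange(-32, 33)

    def toy_psf(z, signal=5000.0):
        sigma = 1.5 + 0.8 * abs(z)           # blur grows with |z| (illustrative)
        psf = np.exp(-pixels**2 / (2 * sigma**2))
        return signal * psf / psf.sum()      # expected counts per pixel

    for depth in (0.5, 1.0, 2.0):
        bound = crlb_depth(toy_psf, depth, background=2.0)
        print(f"z = {depth:.1f}: depth precision limit ~ {np.sqrt(bound):.4f}")
```

Under such a model, the square root of the returned variance bound is the best depth precision any unbiased estimator can achieve; comparing this quantity across camera models (2D, light-field, meta-imaging) is the kind of quantitative evaluation the abstract refers to.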


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ff64/11377717/389516955b23/41377_2024_1609_Fig1_HTML.jpg
