Mohan M R Mahesh, Nithin G K, Rajagopalan A N
IEEE Trans Image Process. 2021;30:4479-4491. doi: 10.1109/TIP.2021.3072856. Epub 2021 Apr 22.
Dual-lens (DL) cameras capture depth information and hence enable several important vision applications. Most present-day DL cameras employ unconstrained settings in the two views in order to support extended functionalities. A natural hindrance to their working, however, is the ubiquitous motion blur caused by camera motion, object motion, or both. Yet no existing work addresses this problem (so-called dynamic scene deblurring) for unconstrained DL cameras. Due to the unconstrained settings, the degradations in the two views need not be the same; consequently, naive deblurring approaches produce inconsistent left-right views and disrupt scene-consistent disparities. In this paper, we address this problem using deep learning and make three important contributions. First, we address the root cause of view inconsistency in standard deblurring architectures using a Coherent Fusion Module. Second, we address an inherent problem in unconstrained DL deblurring that disrupts scene-consistent disparities by introducing a memory-efficient Adaptive Scale-space Approach. This signal-processing formulation accommodates different image scales in the same network without increasing the number of parameters. Finally, we propose a module to address the space-variant and image-dependent nature of dynamic scene blur. We experimentally show that our proposed techniques have substantial practical merit.