Quan Yuhui, Wu Zicong, Xu Ruotao, Ji Hui
IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):11361-11377. doi: 10.1109/TPAMI.2024.3457856. Epub 2024 Nov 6.
This paper proposes an end-to-end deep learning approach for removing defocus blur from a single defocused image. Defocus blur is a common issue in digital photography that poses a challenge due to its spatially-varying and large blurring effect. The proposed approach addresses this challenge by employing a pixel-wise Gaussian kernel mixture (GKM) model to accurately yet compactly parameterize spatially-varying defocus point spread functions (PSFs), motivated by the isotropy of defocus PSFs. We further propose a grouped GKM (GGKM) model that decouples the coefficients in GKM, so as to improve modeling accuracy in an economical manner. A deep neural network called GGKMNet is then developed by unrolling a fixed-point iteration process of GGKM-based image deblurring, which avoids the efficiency issues of existing unrolling DNNs. Using a lightweight scale-recurrent architecture with a coarse-to-fine estimation scheme to predict the coefficients in GGKM, GGKMNet can efficiently recover an all-in-focus image from a defocused one. These advantages are demonstrated by extensive experiments on five benchmark datasets, where GGKMNet outperforms existing defocus deblurring methods in restoration quality while also showing advantages in model complexity and computational efficiency.
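The pixel-wise GKM forward model can be illustrated with a short sketch. The idea, as described in the abstract, is that the defocus PSF at each pixel is approximated by a weighted mixture of shared isotropic Gaussian bases, so spatially-varying blur reduces to a few global convolutions followed by a pixel-wise mix. The function names, the kernel size, and the specific basis sigmas below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def gaussian_kernel(sigma, size=7):
    """Isotropic 2-D Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def conv2d_same(img, kernel):
    """'Same'-size 2-D convolution with edge padding (kernel is symmetric,
    so correlation and convolution coincide)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def gkm_blur(image, weights, sigmas, size=7):
    """Spatially-varying defocus blur under a Gaussian kernel mixture:
    the PSF at pixel (i, j) is sum_k weights[k, i, j] * Gaussian(sigmas[k]).

    image:   (H, W) array
    weights: (K, H, W) per-pixel mixture coefficients (a DNN like GGKMNet
             would predict these; here they are given)
    sigmas:  K standard deviations of the shared isotropic Gaussian bases
    """
    # Convolve the image once with each shared Gaussian basis...
    blurred = np.stack([conv2d_same(image, gaussian_kernel(s, size))
                        for s in sigmas])
    # ...then mix the K blurred copies pixel-wise. This is what makes the
    # parameterization compact: K global convolutions instead of one
    # distinct kernel per pixel.
    return (weights * blurred).sum(axis=0)
```

Since each basis kernel is normalized, per-pixel weights that sum to one preserve the local brightness of a constant region, which is one reason a normalized mixture is a natural parameterization for defocus PSFs.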