Robust Face Image Super-Resolution via Joint Learning of Subdivided Contextual Model.

Authors

Chen Liang, Pan Jinshan, Li Qing

Publication

IEEE Trans Image Process. 2019 Dec;28(12):5897-5909. doi: 10.1109/TIP.2019.2920510. Epub 2019 Jun 10.

Abstract

In this paper, we focus on restoring high-resolution facial images from noisy low-resolution inputs. This is a challenging problem because the most important structures and details of the captured facial images are missing. To address it, we propose a novel local patch-based face super-resolution (FSR) method built on joint learning of a contextual model. The contextual model is based on a topology of contextual sub-patches, which, owing to their finer patch size, provide more useful structural information than the commonly used local contextual structures. In this way, the contextual models are able to recover the missing local structures in the target patches. To further strengthen the structural compensation provided by the contextual topology, we introduce a recognition feature as an additional regularization term. Based on the contextual model, we formulate the super-resolution procedure as a contextual joint representation with respect to the target patch and its adjacent patches. The high-resolution image is obtained by weighting the contextual estimates. Both quantitative and qualitative evaluations show that the proposed method performs favorably against state-of-the-art algorithms.
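To make the patch-wise procedure described in the abstract concrete, the following is a minimal sketch, not the authors' implementation: a target low-resolution patch is coded jointly with its adjacent contextual sub-patches over paired training dictionaries, and the coefficients are transferred to the high-resolution dictionary. The ridge-regularized closed-form coding, the function names, and the parameters lam and gamma are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch of a contextual joint representation for
# position-patch face super-resolution (illustration only).
import numpy as np


def joint_coefficients(lr_target, lr_context, D_lr_target, D_lr_context,
                       lam=0.01, gamma=0.5):
    """Ridge-regularized joint coding of a target patch and its context.

    lr_target    : (d,)   vectorized noisy LR target patch
    lr_context   : (c,)   stacked vector of adjacent contextual sub-patches
    D_lr_target  : (d, K) LR training dictionary at the target position
    D_lr_context : (c, K) LR training dictionary at the contextual positions
    lam          : ridge regularization weight (assumed)
    gamma        : relative weight of the contextual term (assumed)
    """
    # Stack the target and down-weighted contextual observations so one
    # least-squares problem couples both constraints.
    A = np.vstack([D_lr_target, np.sqrt(gamma) * D_lr_context])    # (d+c, K)
    b = np.concatenate([lr_target, np.sqrt(gamma) * lr_context])   # (d+c,)
    K = A.shape[1]
    # Closed-form ridge solution: (A^T A + lam I)^{-1} A^T b
    return np.linalg.solve(A.T @ A + lam * np.eye(K), A.T @ b)


def reconstruct_hr_patch(w, D_hr_target):
    """Transfer the joint coefficients to the paired HR dictionary."""
    return D_hr_target @ w


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, c, dh, K = 25, 100, 100, 50           # toy patch/dictionary sizes
    D_lr_t = rng.standard_normal((d, K))
    D_lr_c = rng.standard_normal((c, K))
    D_hr_t = rng.standard_normal((dh, K))
    y_t = rng.standard_normal(d)              # noisy LR target patch
    y_c = rng.standard_normal(c)              # its contextual sub-patches

    w = joint_coefficients(y_t, y_c, D_lr_t, D_lr_c)
    x_hr = reconstruct_hr_patch(w, D_hr_t)
    print(x_hr.shape)                         # (100,) HR patch estimate
```

In the full method, such per-patch estimates would be computed at every (overlapping) patch position and blended by weighted averaging to assemble the high-resolution face; the sketch above codes only a single patch.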

