IEEE Trans Pattern Anal Mach Intell. 2020 Nov;42(11):2809-2824. doi: 10.1109/TPAMI.2019.2915301. Epub 2019 May 7.
Face hallucination is a domain-specific super-resolution problem that aims to generate a high-resolution (HR) face image from a low-resolution (LR) input. In contrast to existing patch-wise super-resolution models, which divide a face image into regular patches and independently apply the LR-to-HR mapping to each patch, we employ deep reinforcement learning and develop a novel attention-aware face hallucination (Attention-FH) framework, which recurrently learns to attend to a sequence of patches and performs facial part enhancement by fully exploiting the global interdependency of the image. Specifically, our proposed framework incorporates two components: a recurrent policy network that dynamically specifies a new attended region at each time step based on the status of the super-resolved image and the sequence of previously attended regions, and a local enhancement network that hallucinates the selected patch and updates the global state. The Attention-FH model jointly learns the recurrent policy network and the local enhancement network by maximizing a long-term reward that reflects the hallucination quality of the whole HR image. Extensive experiments demonstrate that our Attention-FH significantly outperforms state-of-the-art methods on in-the-wild face images with large pose and illumination variations.
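To make the described pipeline concrete, the following is a minimal sketch in PyTorch of the attend-then-enhance loop: a recurrent policy network samples the next patch location from the current image state, a local enhancement network refines that patch and writes it back into the global state, and a REINFORCE-style objective maximizes a long-term reward (here, negative MSE to the ground-truth HR image). All module names, network sizes, the patch grid, and the reward definition are illustrative assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

PATCH, GRID = 16, 4          # 16x16 patches on a 4x4 grid of a 64x64 image
IMG = PATCH * GRID

class PolicyNet(nn.Module):
    """Recurrent policy: encodes the current image state plus attention history
    and outputs a categorical distribution over candidate patch locations."""
    def __init__(self, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 16, hidden))
        self.rnn = nn.GRUCell(hidden, hidden)   # carries the attended-region history
        self.head = nn.Linear(hidden, GRID * GRID)

    def forward(self, state, h):
        h = self.rnn(self.enc(state), h)
        return torch.distributions.Categorical(logits=self.head(h)), h

class EnhanceNet(nn.Module):
    """Local enhancement: a small residual CNN stands in for the patch
    hallucination network that refines the attended region."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, patch):
        return patch + self.body(patch)          # residual refinement

def run_episode(lr_up, hr, policy, enhancer, steps=8):
    """One hallucination episode on a batch of bicubically upsampled LR inputs."""
    state, log_probs = lr_up.clone(), []
    h = torch.zeros(lr_up.size(0), 128)
    for _ in range(steps):
        dist, h = policy(state, h)
        a = dist.sample()                        # index of the attended patch
        log_probs.append(dist.log_prob(a))
        r, c = (a // GRID) * PATCH, (a % GRID) * PATCH
        patches = torch.stack([state[b, :, r[b]:r[b]+PATCH, c[b]:c[b]+PATCH]
                               for b in range(state.size(0))])
        refined = enhancer(patches)
        new_state = state.clone()
        for b in range(state.size(0)):           # write refined patch back into the global state
            new_state[b, :, r[b]:r[b]+PATCH, c[b]:c[b]+PATCH] = refined[b]
        state = new_state
    # Long-term reward on the whole HR image; REINFORCE term for the policy,
    # plus a reconstruction loss for the enhancement network.
    reward = -F.mse_loss(state, hr, reduction='none').mean(dim=(1, 2, 3))
    policy_loss = -(torch.stack(log_probs, 1).sum(1) * reward.detach()).mean()
    recon_loss = F.mse_loss(state, hr)
    return policy_loss + recon_loss

# Toy usage with random images in place of real LR/HR face pairs.
policy, enhancer = PolicyNet(), EnhanceNet()
opt = torch.optim.Adam(list(policy.parameters()) + list(enhancer.parameters()), lr=1e-4)
lr_up, hr = torch.rand(2, 1, IMG, IMG), torch.rand(2, 1, IMG, IMG)
loss = run_episode(lr_up, hr, policy, enhancer)
opt.zero_grad(); loss.backward(); opt.step()
```

The sketch illustrates the joint training idea from the abstract: the policy and enhancement networks share a single long-term reward computed on the full image, so patch selection is driven by the global hallucination outcome rather than per-patch error alone.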