Wang Shuozhi, Mei Jianqiang, Yang Lichao, Zhao Yifan
School of Aerospace, Transport and Manufacturing, Cranfield University, Bedford MK43 0AL, UK.
School of Electronic Engineering, Tianjin University of Technology and Education, Tianjin 300222, China.
Sensors (Basel). 2021 Nov 10;21(22):7471. doi: 10.3390/s21227471.
The measurement accuracy and reliability of thermography is largely limited by the relatively low spatial resolution of infrared (IR) cameras in comparison to digital cameras. Using a high-end IR camera to achieve high spatial resolution can be costly or sometimes infeasible due to the high sample rate required. There is therefore a strong demand, in the context of surveillance and industrial inspection systems, to improve the quality of IR images, particularly at edges, without upgrading the hardware. This paper proposes a novel Conditional Generative Adversarial Network (CGAN)-based framework to enhance IR edges by learning high-frequency features from corresponding visual images. A dual discriminator, focusing on edge and content/background respectively, is introduced to guide the cross-imaging-modality learning of the U-Net generator in the high and low frequencies. Results demonstrate that the proposed framework can effectively enhance barely visible edges in IR images without introducing artefacts, while content information is well preserved. Unlike most similar studies, this method requires only IR images at test time, which broadens its applicability to scenarios where only one imaging modality is available, such as active thermography.
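The dual-discriminator idea described above can be sketched as follows. This is a hypothetical minimal illustration in PyTorch, not the paper's implementation: the tiny U-Net-style generator, the PatchGAN-style discriminators, the Sobel operator as the edge (high-frequency) proxy, and the loss weights are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy encoder-decoder standing in for the U-Net generator (assumed sizes)."""
    def __init__(self):
        super().__init__()
        self.down = nn.Conv2d(1, 8, 4, stride=2, padding=1)            # encoder
        self.up = nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1)     # decoder
    def forward(self, x):
        return torch.tanh(self.up(torch.relu(self.down(x))))

def patch_disc():
    # PatchGAN-style conditional discriminator: input and output are
    # concatenated on the channel axis; it emits a grid of real/fake scores.
    return nn.Sequential(
        nn.Conv2d(2, 8, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(8, 1, 4, stride=2, padding=1),
    )

def sobel_edges(img):
    # High-frequency proxy: Sobel gradient magnitude of a 1-channel image.
    kx = torch.tensor([[[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]]])
    gx = nn.functional.conv2d(img, kx, padding=1)
    gy = nn.functional.conv2d(img, kx.transpose(2, 3), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

G, D_edge, D_content = TinyUNet(), patch_disc(), patch_disc()
bce = nn.BCEWithLogitsLoss()

ir = torch.rand(2, 1, 16, 16)      # low-detail IR input
visual = torch.rand(2, 1, 16, 16)  # paired visual image (training only)

fake = G(ir)
# The edge discriminator judges edge maps (high frequency); the content
# discriminator judges the images themselves (low frequency / background).
d_edge_out = D_edge(torch.cat([sobel_edges(ir), sobel_edges(fake)], dim=1))
d_cont_out = D_content(torch.cat([ir, fake], dim=1))
g_loss = (bce(d_edge_out, torch.ones_like(d_edge_out))
          + bce(d_cont_out, torch.ones_like(d_cont_out))
          + 10.0 * nn.functional.l1_loss(fake, visual))  # L1 weight assumed
```

At test time only `ir` would be needed, matching the abstract's claim: the visual image enters the pipeline solely through the training losses.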