Zhao Ming, Yang Rui, Hu Min, Liu Botao
School of Computer Science, Yangtze University, Jingzhou 434023, China.
Sensors (Basel). 2024 Jan 21;24(2):673. doi: 10.3390/s24020673.
The present study proposes a novel deep-learning model for remote sensing image enhancement that enhances brightness in the feature extraction module while preserving image details. An improved hierarchical model based on U-Net, the Global Spatial Attention Network (GSA-Net), is proposed to improve enhancement performance. To circumvent the issue of insufficient sample data, gamma correction is applied to create low-light images, which are then used as training examples. A loss function is constructed using the Structural Similarity (SSIM) and Peak Signal-to-Noise Ratio (PSNR) indices. GSA-Net and this loss function are used to restore low-light remote sensing images. The proposed method was tested on the Northwestern Polytechnical University Very-High-Resolution 10 (NWPU VHR-10) dataset, and its overall superiority over other state-of-the-art algorithms was demonstrated with objective assessment indicators such as SSIM, PSNR, and Learned Perceptual Image Patch Similarity (LPIPS). Furthermore, in high-level vision tasks such as object detection, the method provides remote sensing images with more distinct details and higher contrast than the competing methods.
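The abstract does not give implementation details, but the two ingredients it names, gamma-corrected low-light synthesis and a loss built from the SSIM and PSNR indices, can be sketched as follows. This is a minimal PyTorch illustration under stated assumptions, not the authors' implementation: images are assumed normalized to [0, 1], the SSIM term uses the third-party pytorch_msssim package, and the gamma value and weighting alpha are placeholders not specified in the abstract.

```python
import torch
import torch.nn.functional as F
from pytorch_msssim import ssim  # third-party SSIM implementation (assumed dependency)


def synthesize_low_light(img: torch.Tensor, gamma: float = 2.5) -> torch.Tensor:
    """Darken a normalized image in [0, 1] via gamma correction.

    A gamma greater than 1 pushes pixel values toward zero, imitating a
    low-light acquisition; 2.5 is an illustrative value, not the paper's setting.
    """
    return img.clamp(0.0, 1.0) ** gamma


def ssim_psnr_loss(pred: torch.Tensor, target: torch.Tensor,
                   alpha: float = 0.5, eps: float = 1e-8) -> torch.Tensor:
    """Hypothetical combination of an SSIM term and a PSNR-derived term.

    (1 - SSIM) shrinks as structural similarity improves, and
    10 * log10(MSE) equals the negative PSNR for images in [0, 1],
    so minimizing this loss raises both indices. The weight alpha is a
    placeholder, not the coefficient used in the paper.
    """
    ssim_term = 1.0 - ssim(pred, target, data_range=1.0)
    neg_psnr = 10.0 * torch.log10(F.mse_loss(pred, target) + eps)
    return alpha * ssim_term + (1.0 - alpha) * neg_psnr


# Usage sketch: a clean RGB remote sensing patch is darkened to build a
# (low-light input, ground-truth) training pair for the enhancement network.
clean = torch.rand(1, 3, 256, 256)          # stand-in for a normalized image patch
low_light = synthesize_low_light(clean)     # synthetic training input
restored = low_light                        # placeholder for GSA-Net(low_light)
loss = ssim_psnr_loss(restored, clean)
```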