He Min, Wang Rugang, Zhang Mingyang, Lv Feiyang, Wang Yuanyuan, Zhou Feng, Bian Xuesheng
School of Information Engineering, Yancheng Institute of Technology, Yancheng, 224051, China.
Sci Rep. 2025 Apr 9;15(1):12151. doi: 10.1038/s41598-025-95329-8.
Contemporary algorithms for enhancing images in low-light conditions prioritize improving brightness and contrast but often neglect image detail. This study introduces the Swin Transformer-based Light-enhancing Generative Adversarial Network (SwinLightGAN), a novel generative adversarial network (GAN) that effectively enhances image detail under low-light conditions. The network couples a generator built on a Residual Jumping U-shaped Network (U-Net) architecture, which extracts precise local detail, with an illumination network based on the Shifted Window Transformer (Swin Transformer), which captures multi-scale spatial features and global context. This combination produces high-quality images that resemble those captured under normal lighting while retaining intricate detail. Through adversarial training that employs discriminators operating at multiple scales and a blend of loss functions, SwinLightGAN drives its outputs toward being indistinguishable from authentic normally-lit images, yielding superior enhancement quality. Extensive experiments on multiple unpaired datasets demonstrate SwinLightGAN's strong performance: it achieves Naturalness Image Quality Evaluator (NIQE) scores of 5.193 to 5.397, Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) scores of 28.879 to 32.040, and Perception-based Image Quality Evaluator (PIQE) scores of 38.280 to 44.479, confirming high-quality enhancement across diverse no-reference metrics.
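The abstract describes a two-branch generator (a residual U-Net for local detail plus a Swin Transformer-based illumination network) trained against multi-scale discriminators. Below is a minimal PyTorch sketch of that layout; module names, channel widths, depths, the Retinex-style fusion, and the use of plain convolutions as a stand-in for real Swin Transformer blocks are illustrative assumptions, not the authors' implementation.

```python
# Conceptual sketch of a SwinLightGAN-style layout (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    """3x3 conv block with an identity skip (a 'residual jump')."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)


class DetailUNet(nn.Module):
    """Residual skip-connection U-Net branch for local detail (assumed depth)."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), ResidualBlock(ch))
        self.enc2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), ResidualBlock(ch * 2))
        self.dec1 = nn.Sequential(nn.Conv2d(ch * 2, ch, 3, padding=1), ResidualBlock(ch))
        self.out = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d1 = F.interpolate(e2, scale_factor=2, mode="bilinear", align_corners=False)
        d1 = self.dec1(d1) + e1          # U-Net skip connection with residual add
        return self.out(d1)


class IlluminationBranch(nn.Module):
    """Stand-in for the Swin Transformer illumination network: predicts a
    low-resolution brightness map and upsamples it to full resolution."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=4, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        illum = self.body(x)
        return F.interpolate(illum, size=x.shape[-2:], mode="bilinear", align_corners=False)


class Generator(nn.Module):
    """Fuse the detail branch and the illumination map into the enhanced image."""
    def __init__(self):
        super().__init__()
        self.detail = DetailUNet()
        self.illumination = IlluminationBranch()

    def forward(self, x):
        detail = self.detail(x)
        illum = self.illumination(x).clamp(min=1e-3)
        return torch.clamp(x / illum + detail, 0.0, 1.0)   # Retinex-style fusion (assumption)


class MultiScaleDiscriminator(nn.Module):
    """PatchGAN-style critics applied at two image scales."""
    def __init__(self, ch=32):
        super().__init__()
        def critic():
            return nn.Sequential(
                nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(ch * 2, 1, 4, padding=1),
            )
        self.critics = nn.ModuleList([critic(), critic()])

    def forward(self, x):
        outs = []
        for i, critic in enumerate(self.critics):
            scaled = x if i == 0 else F.avg_pool2d(x, 2)   # full and half resolution
            outs.append(critic(scaled))
        return outs


if __name__ == "__main__":
    low_light = torch.rand(1, 3, 128, 128)       # dummy low-light image in [0, 1]
    enhanced = Generator()(low_light)
    scores = MultiScaleDiscriminator()(enhanced)
    print(enhanced.shape, [s.shape for s in scores])
```

In this sketch the adversarial term from the multi-scale critics would be blended with reconstruction or perceptual losses, matching the abstract's mention of "a blend of loss functions", though the exact mixture is not specified there.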
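The reported NIQE, BRISQUE, and PIQE figures are no-reference quality scores where lower values are better. The short sketch below shows one way such scores can be computed, assuming the third-party pyiqa package and that it registers these metrics under the names "niqe", "brisque", and "piqe"; the authors' exact evaluation pipeline is not described in the abstract.

```python
# Hypothetical no-reference evaluation of enhanced images with pyiqa.
import torch
import pyiqa

device = "cuda" if torch.cuda.is_available() else "cpu"
metrics = {name: pyiqa.create_metric(name, device=device)
           for name in ("niqe", "brisque", "piqe")}

# Stand-in batch of enhanced RGB images in [0, 1]; in practice these would be
# the network outputs on an unpaired low-light test set.
enhanced = torch.rand(4, 3, 256, 256, device=device)

for name, metric in metrics.items():
    # Lower is better for all three metrics; scores are averaged over the batch.
    print(f"{name}: {metric(enhanced).mean().item():.3f}")
```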