Huang Yiyao, Zhu Xiaobao, Yuan Fenglian, Shi Jing, U Kintak, Qin Junshuo, Kong Xiangjie, Peng Yiran
Faculty of Innovation Engineering, Macau University of Science and Technology, Macau 999078, China.
School of Information Engineering, Nanchang Hangkong University, Nanchang 330063, China.
Sensors (Basel). 2025 Apr 14;25(8):2464. doi: 10.3390/s25082464.
Currently, most images captured by high-pixel devices such as mobile phones, camcorders, and drones are stored in the RGGB (Bayer) format. However, image quality in extremely dark scenes is often poor. Traditional methods for processing these dark RGGB images typically rely on end-to-end U-Net networks and their enhancement techniques, which require substantial resources and processing time. To tackle this issue, we first converted RGGB images into three-channel RGB images by subtracting the black level and applying linear interpolation. During the training stage, we leveraged the computational efficiency of the state-space model (SSM) and developed a Mamba U-Net end-to-end model to enhance the restoration of extremely dark RGGB images. We used the See-in-the-Dark (SID) dataset for training and for assessing the effectiveness of our approach. Experimental results indicate that our method significantly reduces resource consumption compared with existing single-step and prior multi-step training techniques, while achieving improved peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) scores.
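The preprocessing step described above (black-level subtraction followed by linear interpolation of the RGGB mosaic into three RGB channels) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the black and white levels (512 and 16383, typical of Sony 14-bit RAW as used in the SID dataset) are assumed, and the "linear interpolation" is simplified to averaging the two green samples of each 2x2 block, yielding a half-resolution RGB image.

```python
import numpy as np

def rggb_to_rgb(raw, black_level=512, white_level=16383):
    """Convert an RGGB Bayer mosaic to a half-resolution RGB image.

    Hypothetical sketch: each 2x2 RGGB block becomes one RGB pixel,
    with the two green samples averaged (a simple linear interpolation).
    black_level/white_level are assumed values, not from the paper.
    """
    # Subtract the black level and normalize intensities to [0, 1].
    raw = raw.astype(np.float32)
    raw = np.clip(raw - black_level, 0.0, None) / (white_level - black_level)

    r  = raw[0::2, 0::2]   # top-left of each 2x2 block: red
    g1 = raw[0::2, 1::2]   # top-right: green
    g2 = raw[1::2, 0::2]   # bottom-left: green
    b  = raw[1::2, 1::2]   # bottom-right: blue

    # Stack into an (H/2, W/2, 3) RGB array, averaging the greens.
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)
```

For example, a 4x4 RGGB frame produces a 2x2x3 RGB array; pixels at the black level map to 0 and pixels at the white level map to 1.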