Suppr 超能文献



A Mamba U-Net Model for Reconstruction of Extremely Dark RGGB Images.

Author Information

Huang Yiyao, Zhu Xiaobao, Yuan Fenglian, Shi Jing, U Kintak, Qin Junshuo, Kong Xiangjie, Peng Yiran

Affiliations

Faculty of Innovation Engineering, Macau University of Science and Technology, Macau 999078, China.

School of Information Engineering, Nanchang Hangkong University, Nanchang 330063, China.

Publication Information

Sensors (Basel). 2025 Apr 14;25(8):2464. doi: 10.3390/s25082464.

DOI: 10.3390/s25082464
PMID: 40285153
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12030951/
Abstract

Currently, most images captured by high-pixel devices such as mobile phones, camcorders, and drones are in RGGB format. However, image quality in extremely dark scenes often needs improvement. Traditional methods for processing these dark RGGB images typically rely on end-to-end U-Net networks and their enhancement techniques, which require substantial resources and processing time. To tackle this issue, we first converted RGGB images into RGB three-channel images by subtracting the black level and applying linear interpolation. During the training stage, we leveraged the computational efficiency of the state-space model (SSM) and developed a Mamba U-Net end-to-end model to enhance the restoration of extremely dark RGGB images. We utilized the see-in-the-dark (SID) dataset for training, assessing the effectiveness of our approach. Experimental results indicate that our method significantly reduces resource consumption compared to existing single-step training and prior multi-step training techniques, while achieving improved peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) outcomes.
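The preprocessing step the abstract describes — subtracting the black level from the RGGB mosaic and linearly interpolating it into a three-channel RGB image — can be sketched roughly as follows. This is an illustrative sketch, not the authors' pipeline: the black/white levels shown are typical sensor defaults chosen for the example, and the 3x3 bilinear kernel is one simple way to realize "linear interpolation" over a Bayer pattern.

```python
import numpy as np

def rggb_to_rgb(raw, black_level=512, white_level=16383):
    """Black-level subtraction + bilinear demosaicing of an RGGB mosaic.

    `raw` is a 2-D array whose even/even pixels are R, even/odd and
    odd/even pixels are G, and odd/odd pixels are B (RGGB Bayer layout).
    """
    # Subtract the sensor black level and normalize to [0, 1].
    img = np.clip(raw.astype(np.float32) - black_level, 0.0, None)
    img /= float(white_level - black_level)

    h, w = img.shape
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    # Bilinear interpolation expressed as a normalized 3x3 filter.
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5],
                       [0.25, 0.5, 0.25]], dtype=np.float32)

    def filt(x):
        # 'same'-size filtering with zero padding; the kernel is
        # symmetric, so correlation equals convolution here.
        xp = np.pad(x, 1)
        out = np.zeros_like(x)
        for di in range(3):
            for dj in range(3):
                out += kernel[di, dj] * xp[di:di + h, dj:dj + w]
        return out

    rgb = np.zeros((h, w, 3), dtype=np.float32)
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        sampled = np.where(mask, img, 0.0)
        weight = mask.astype(np.float32)
        # Weighted average of the known samples around each pixel.
        rgb[..., c] = filt(sampled) / np.maximum(filt(weight), 1e-8)
    return rgb
```

On a constant exposure the interpolation reproduces the constant exactly, which makes the routine easy to sanity-check; a real camera pipeline would additionally apply white balance and a sensor-specific color matrix before handing the image to the network.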

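The PSNR metric the abstract reports has a one-line definition over the mean squared error. A minimal reference implementation (not the authors' evaluation code; `max_val` assumes images normalized to a unit range):

```python
import numpy as np

def psnr(reference, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB between images in [0, max_val]."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    mse = np.mean(diff ** 2)  # mean squared error
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a uniform error of 0.1 on a unit-range image gives 20 dB; SSIM, the other metric reported, additionally compares local luminance, contrast, and structure windows rather than raw pixel error.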

Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2453/12030951/43b321d5a184/sensors-25-02464-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2453/12030951/c2c4268692b9/sensors-25-02464-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2453/12030951/8281b9c41bf2/sensors-25-02464-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2453/12030951/c734112e888a/sensors-25-02464-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2453/12030951/af4482dae81e/sensors-25-02464-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2453/12030951/e3aaad2a067a/sensors-25-02464-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2453/12030951/edb287e9571a/sensors-25-02464-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2453/12030951/2e76fbeb1973/sensors-25-02464-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2453/12030951/be03617bdeae/sensors-25-02464-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2453/12030951/a204a3e3cf0d/sensors-25-02464-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2453/12030951/e2839c3b0de3/sensors-25-02464-g011.jpg

Similar Articles

1
A Mamba U-Net Model for Reconstruction of Extremely Dark RGGB Images.
Sensors (Basel). 2025 Apr 14;25(8):2464. doi: 10.3390/s25082464.
2
A two-stage HDR reconstruction pipeline for extreme dark-light RGGB images.
Sci Rep. 2025 Jan 22;15(1):2847. doi: 10.1038/s41598-025-87412-x.
3
Sinogram-characteristic-informed network for efficient restoration of low-dose SPECT projection data.
Med Phys. 2025 Jan;52(1):414-432. doi: 10.1002/mp.17459. Epub 2024 Oct 14.
4
[Mitigating metal artifacts from cobalt-chromium alloy crowns in cone-beam CT images through deep learning techniques].
Zhonghua Kou Qiang Yi Xue Za Zhi. 2024 Jan 9;59(1):71-79. doi: 10.3760/cma.j.cn112144-20231030-00228.
5
[Mitigating metal artifacts in cone-beam CT images through deep learning techniques].
Zhonghua Kou Qiang Yi Xue Za Zhi. 2023 Dec 29;59(1):71-79. doi: 10.3760/cma.j.cn112144-20231030-00233.
6
Bidirectional Copy-Paste Mamba for Enhanced Semi-Supervised Segmentation of Transvaginal Uterine Ultrasound Images.
Diagnostics (Basel). 2024 Jul 3;14(13):1423. doi: 10.3390/diagnostics14131423.
7
Efficient Haze Removal from a Single Image Using a DCP-Based Lightweight U-Net Neural Network Model.
Sensors (Basel). 2024 Jun 9;24(12):3746. doi: 10.3390/s24123746.
8
A Deep Learning Framework for Cardiac MR Under-Sampled Image Reconstruction with a Hybrid Spatial and k-Space Loss Function.
Diagnostics (Basel). 2023 Mar 15;13(6):1120. doi: 10.3390/diagnostics13061120.
9
A novel denoising method for CT images based on U-net and multi-attention.
Comput Biol Med. 2023 Jan;152:106387. doi: 10.1016/j.compbiomed.2022.106387. Epub 2022 Dec 1.
10
Deep learning in computed tomography super resolution using multi-modality data training.
Med Phys. 2024 Apr;51(4):2846-2860. doi: 10.1002/mp.16825. Epub 2023 Nov 16.

References Cited in This Article

1
A two-stage HDR reconstruction pipeline for extreme dark-light RGGB images.
Sci Rep. 2025 Jan 22;15(1):2847. doi: 10.1038/s41598-025-87412-x.
2
Towards Low Light Enhancement With RAW Images.
IEEE Trans Image Process. 2022;31:1391-1405. doi: 10.1109/TIP.2022.3140610. Epub 2022 Jan 25.
3
Sharp U-Net: Depthwise convolutional network for biomedical image segmentation.
Comput Biol Med. 2021 Sep;136:104699. doi: 10.1016/j.compbiomed.2021.104699. Epub 2021 Jul 29.
4
Adaptively partitioned block-based contrast enhancement and its application to low light-level video surveillance.
Springerplus. 2015 Aug 19;4:431. doi: 10.1186/s40064-015-1226-x. eCollection 2015.
5
A multiscale retinex for bridging the gap between color images and the human observation of scenes.
IEEE Trans Image Process. 1997;6(7):965-76. doi: 10.1109/83.597272.
6
Properties and performance of a center/surround retinex.
IEEE Trans Image Process. 1997;6(3):451-62. doi: 10.1109/83.557356.
7
Image quality assessment: from error visibility to structural similarity.
IEEE Trans Image Process. 2004 Apr;13(4):600-12. doi: 10.1109/tip.2003.819861.
8
Lightness and retinex theory.
J Opt Soc Am. 1971 Jan;61(1):1-11. doi: 10.1364/josa.61.000001.