
Modumer: Modulating Transformer for Image Restoration.

Authors

Cui Yuning, Liu Mingyu, Ren Wenqi, Knoll Alois

Publication

IEEE Trans Neural Netw Learn Syst. 2025 May 1;PP. doi: 10.1109/TNNLS.2025.3561924.

DOI: 10.1109/TNNLS.2025.3561924
PMID: 40310737
Abstract

Image restoration aims to recover clean images from degraded counterparts. While Transformer-based approaches have achieved significant advancements in this field, they are limited by high complexity and their inability to capture omni-range dependencies, hindering their overall performance. In this work, we develop Modumer for effective and efficient image restoration by revisiting the Transformer block and modulation design, which processes input through a convolutional block and projection layers and fuses features via elementwise multiplication. Specifically, within each unit of Modumer, we integrate the cascaded modulation design with the downsampled Transformer block to build the attention layers, enabling omni-kernel modulation and mapping inputs into high-dimensional feature spaces. Moreover, we introduce a bioinspired parameter-sharing mechanism to attention layers, which not only enhances efficiency but also improves performance. In addition, a dual-domain feed-forward network (DFFN) strengthens the representational power of the model. Extensive experimental evaluations demonstrate that the proposed Modumer achieves state-of-the-art performance across ten datasets in five single-degradation image restoration tasks, including image motion deblurring, deraining, dehazing, desnowing, and low-light enhancement. Moreover, the model exhibits strong generalization capabilities in all-in-one image restoration tasks. Additionally, it demonstrates competitive performance in composite-degradation image restoration.
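The fusion step the abstract describes (input processed through a convolutional block and projection layers, with features fused by elementwise multiplication) can be sketched in a minimal, framework-free form. This is an illustrative NumPy sketch, not the authors' implementation: the function names (`project`, `depthwise_avg`, `modulation_block`), the weight shapes, and the use of a 3×3 mean filter as a stand-in for the convolutional branch are all assumptions made for clarity.

```python
import numpy as np

def project(x, w):
    # 1x1 projection: a linear map over the channel axis of a (C, H, W) tensor
    return np.einsum('chw,cd->dhw', x, w)

def depthwise_avg(x):
    # stand-in for the convolutional branch: per-channel 3x3 mean filter, zero padding
    p = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    h, w = x.shape[1], x.shape[2]
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += p[:, 1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out / 9.0

def modulation_block(x, w_a, w_b, w_out):
    # context branch (projection + conv) gates the value branch (projection)
    # via elementwise multiplication, then a final projection maps back out
    context = depthwise_avg(project(x, w_a))
    value = project(x, w_b)
    return project(context * value, w_out)

rng = np.random.default_rng(0)
c, h, w, d = 4, 8, 8, 16          # channels, height, width, hidden dim (toy sizes)
x = rng.standard_normal((c, h, w))
w_a = rng.standard_normal((c, d))
w_b = rng.standard_normal((c, d))
w_out = rng.standard_normal((d, c))
y = modulation_block(x, w_a, w_b, w_out)
print(y.shape)  # (4, 8, 8)
```

The elementwise product is the defining trait of modulation designs: one branch acts as a spatially varying gate on the other, which is cheaper than full attention while still mixing context into every position.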


Similar Articles

1. Modumer: Modulating Transformer for Image Restoration.
   IEEE Trans Neural Netw Learn Syst. 2025 May 1;PP. doi: 10.1109/TNNLS.2025.3561924.
2. Prescription of Controlled Substances: Benefits and Risks.
3. Image dehazing algorithm based on deep transfer learning and local mean adaptation.
   Sci Rep. 2025 Jul 31;15(1):27956. doi: 10.1038/s41598-025-13613-z.
4. ETU-Net: edge enhancement-guided U-Net with transformer for skin lesion segmentation.
   Phys Med Biol. 2023 Dec 22;69(1). doi: 10.1088/1361-6560/ad13d2.
5. Short-Term Memory Impairment.
6. CAN: Cascade Augmentations Against Noise for Image Restoration.
   IEEE Trans Image Process. 2025;34:5131-5146. doi: 10.1109/TIP.2025.3595374.
7. Progressive decomposition of infrared and visible image fusion network with joint transformer and Resnet.
   PLoS One. 2025 Aug 22;20(8):e0330328. doi: 10.1371/journal.pone.0330328. eCollection 2025.
8. Multi-level channel-spatial attention and light-weight scale-fusion network (MCSLF-Net): multi-level channel-spatial attention and light-weight scale-fusion transformer for 3D brain tumor segmentation.
   Quant Imaging Med Surg. 2025 Jul 1;15(7):6301-6325. doi: 10.21037/qims-2025-354. Epub 2025 Jun 30.
9. MAFA-Uformer: Multi-attention and dual-branch feature aggregation U-shaped transformer for sparse-view CT reconstruction.
   J Xray Sci Technol. 2025 Jan;33(1):157-166. doi: 10.1177/08953996241300016. Epub 2025 Jan 8.
10. Sexual Harassment and Prevention Training.