

CSformer: Bridging Convolution and Transformer for Compressive Sensing.

Publication Info

IEEE Trans Image Process. 2023;32:2827-2842. doi: 10.1109/TIP.2023.3274988. Epub 2023 May 22.

DOI: 10.1109/TIP.2023.3274988
PMID: 37186533
Abstract

Convolutional Neural Networks (CNNs) dominate image processing but suffer from local inductive bias, which is addressed by the transformer framework with its inherent ability to capture global context through self-attention mechanisms. However, how to inherit and integrate their advantages to improve compressed sensing is still an open issue. This paper proposes CSformer, a hybrid framework to explore the representation capacity of local and global features. The proposed approach is well-designed for end-to-end compressive image sensing, composed of adaptive sampling and recovery. In the sampling module, images are measured block-by-block by the learned sampling matrix. In the reconstruction stage, the measurements are projected into an initialization stem, a CNN stem, and a transformer stem. The initialization stem mimics the traditional reconstruction of compressive sensing but generates the initial reconstruction in a learnable and efficient manner. The CNN stem and transformer stem are concurrent, simultaneously calculating fine-grained and long-range features and efficiently aggregating them. Furthermore, we explore a progressive strategy and window-based transformer block to reduce the parameters and computational complexity. The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing, which achieves superior performance compared to state-of-the-art methods on different datasets. Our code is available at: https://github.com/Lineves7/CSformer.

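The abstract's sampling module (block-by-block measurement with a learned sampling matrix) and initialization stem (a learnable linear back-projection) can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the authors' implementation: the function names `block_sample` and `init_reconstruct` are invented here, a random matrix stands in for the learned sampling matrix, and the pseudoinverse stands in for the learned initialization weights; the CNN and transformer stems are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_sample(image, phi, block=4):
    """Sampling module sketch: measure each flattened block as y = Phi x."""
    h, w = image.shape
    ys = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            x = image[i:i + block, j:j + block].reshape(-1)  # flatten block
            ys.append(phi @ x)                               # project to m measurements
    return np.stack(ys)                                      # (num_blocks, m)

def init_reconstruct(ys, phi_init, image_shape, block=4):
    """Initialization stem sketch: linearly back-project each measurement
    vector to a block and reassemble the image grid. In CSformer this
    projection is learned; here phi_init is supplied directly."""
    h, w = image_shape
    recon = np.zeros((h, w))
    k = 0
    for i in range(0, h, block):
        for j in range(0, w, block):
            recon[i:i + block, j:j + block] = (phi_init @ ys[k]).reshape(block, block)
            k += 1
    return recon

# 8x8 toy image, 4x4 blocks, m = 8 measurements per 16-pixel block (ratio 0.5)
img = rng.standard_normal((8, 8))
phi = rng.standard_normal((8, 16))   # stand-in for the learned sampling matrix
phi_init = np.linalg.pinv(phi)       # stand-in for the learned init weights
y = block_sample(img, phi)
x0 = init_reconstruct(y, phi_init, img.shape)
print(y.shape, x0.shape)             # (4, 8) (8, 8)
```

In the full model, `x0` would then be refined by the concurrent CNN and transformer stems, whose fine-grained and long-range features are aggregated to produce the final reconstruction.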

Similar Articles

1. CSformer: Bridging Convolution and Transformer for Compressive Sensing.
   IEEE Trans Image Process. 2023;32:2827-2842. doi: 10.1109/TIP.2023.3274988. Epub 2023 May 22.
2. VSmTrans: A hybrid paradigm integrating self-attention and convolution for 3D medical image segmentation.
   Med Image Anal. 2024 Dec;98:103295. doi: 10.1016/j.media.2024.103295. Epub 2024 Aug 24.
3. TransCS: A Transformer-Based Hybrid Architecture for Image Compressed Sensing.
   IEEE Trans Image Process. 2022;31:6991-7005. doi: 10.1109/TIP.2022.3217365. Epub 2022 Nov 14.
4. MTC-CSNet: Marrying Transformer and Convolution for Image Compressed Sensing.
   IEEE Trans Cybern. 2024 Sep;54(9):4949-4961. doi: 10.1109/TCYB.2024.3363748. Epub 2024 Aug 26.
5. Dual encoder network with transformer-CNN for multi-organ segmentation.
   Med Biol Eng Comput. 2023 Mar;61(3):661-671. doi: 10.1007/s11517-022-02723-9. Epub 2022 Dec 29.
6. ATTransUNet: An enhanced hybrid transformer architecture for ultrasound and histopathology image segmentation.
   Comput Biol Med. 2023 Jan;152:106365. doi: 10.1016/j.compbiomed.2022.106365. Epub 2022 Nov 28.
7. A novel pansharpening method based on cross stage partial network and transformer.
   Sci Rep. 2024 Jun 2;14(1):12631. doi: 10.1038/s41598-024-63336-w.
8. CoT: Contourlet Transformer for Hierarchical Semantic Segmentation.
   IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):132-146. doi: 10.1109/TNNLS.2024.3367901. Epub 2025 Jan 7.
9. Asymmetric Network Combining CNN and Transformer for Building Extraction from Remote Sensing Images.
   Sensors (Basel). 2024 Sep 25;24(19):6198. doi: 10.3390/s24196198.
10. Uncertainty-driven mixture convolution and transformer network for remote sensing image super-resolution.
    Sci Rep. 2024 Apr 24;14(1):9435. doi: 10.1038/s41598-024-59384-x.

Citing Articles

1. SBCS-Net: Sparse Bayesian and Deep Learning Framework for Compressed Sensing in Sensor Networks.
   Sensors (Basel). 2025 Jul 23;25(15):4559. doi: 10.3390/s25154559.
2. SwinTCS: A Swin Transformer Approach to Compressive Sensing with Non-Local Denoising.
   J Imaging. 2025 Apr 29;11(5):139. doi: 10.3390/jimaging11050139.
3. Dual-Ascent-Inspired Transformer for Compressed Sensing.
   Sensors (Basel). 2025 Mar 28;25(7):2157. doi: 10.3390/s25072157.
4. SSM-Net: Enhancing Compressed Sensing Image Reconstruction with Mamba Architecture and Fast Iterative Shrinking Threshold Algorithm Optimization.
   Sensors (Basel). 2025 Feb 9;25(4):1026. doi: 10.3390/s25041026.
5. FusionOpt-Net: A Transformer-Based Compressive Sensing Reconstruction Algorithm.
   Sensors (Basel). 2024 Sep 14;24(18):5976. doi: 10.3390/s24185976.
6. SALSA-Net: Explainable Deep Unrolling Networks for Compressed Sensing.
   Sensors (Basel). 2023 May 28;23(11):5142. doi: 10.3390/s23115142.
7. IEF-CSNET: Information Enhancement and Fusion Network for Compressed Sensing Reconstruction.
   Sensors (Basel). 2023 Feb 8;23(4):1886. doi: 10.3390/s23041886.