


WMSA-WBS: Efficient Wave Multi-Head Self-Attention with Wavelet Bottleneck.

Authors

Li Xiangyang, Li Yafeng, Fan Pan, Zhang Xueya

Affiliation

School of Computer, Baoji University of Arts and Science, Baoji 721016, China.

Publication

Sensors (Basel). 2025 Aug 14;25(16):5046. doi: 10.3390/s25165046.

DOI: 10.3390/s25165046
PMID: 40871908
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12389952/
Abstract

The critical component of the vision transformer (ViT) architecture is multi-head self-attention (MSA), which enables the encoding of long-range dependencies and heterogeneous interactions. However, MSA has two significant limitations: its limited ability to capture local features and its high computational costs. To address these challenges, this paper proposes an integrated multi-head self-attention approach with a bottleneck enhancement structure, named WMSA-WBS, which mitigates the aforementioned shortcomings of conventional MSA. Different from existing wavelet-enhanced ViT variants that mainly focus on the isolated wavelet decomposition in the attention layer, WMSA-WBS introduces a co-design of wavelet-based frequency processing and bottleneck optimization, achieving more efficient and comprehensive feature learning. Within WMSA-WBS, the proposed wavelet multi-head self-attention (WMSA) approach is combined with a novel wavelet bottleneck structure to capture both global and local information across the spatial, frequency, and channel domains. Specifically, this module achieves these capabilities while maintaining low computational complexity and memory consumption. Extensive experiments demonstrate that ViT models equipped with WMSA-WBS achieve superior trade-offs between accuracy and model complexity across various vision tasks, including image classification, object detection, and semantic segmentation.
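The abstract describes the general idea (wavelet decomposition combined with self-attention so that global context is modeled cheaply while high-frequency sub-bands retain local detail) but not the exact formulation. As a rough, hedged illustration only, the NumPy sketch below computes Haar sub-bands, runs multi-head self-attention on the low-frequency (LL) sub-band, whose token count is 4x smaller than the full map, and reconstructs the feature map. The function names, the identity Q/K/V projections, and the choice to leave the detail sub-bands untouched are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar decomposition of an (H, W, C) feature map."""
    a = (x[0::2] + x[1::2]) / 2.0          # row-wise average
    d = (x[0::2] - x[1::2]) / 2.0          # row-wise detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0   # low-low: coarse global content
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0   # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0   # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    H2, W2, C = ll.shape
    a = np.empty((H2, 2 * W2, C))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * H2, 2 * W2, C))
    x[0::2], x[1::2] = a + d, a - d
    return x

def mhsa(tokens, n_heads):
    """Plain multi-head self-attention with identity projections (illustrative)."""
    N, C = tokens.shape
    hd = C // n_heads
    out = np.empty_like(tokens)
    for h in range(n_heads):
        q = k = v = tokens[:, h * hd:(h + 1) * hd]
        s = q @ k.T / np.sqrt(hd)
        s = np.exp(s - s.max(axis=-1, keepdims=True))
        attn = s / s.sum(axis=-1, keepdims=True)
        out[:, h * hd:(h + 1) * hd] = attn @ v
    return out

def wmsa_sketch(x, n_heads=4):
    """Attention over the LL sub-band only: 4x fewer tokens than full MSA."""
    ll, lh, hl, hh = haar_dwt2(x)
    H2, W2, C = ll.shape
    ll_attn = mhsa(ll.reshape(H2 * W2, C), n_heads).reshape(H2, W2, C)
    # High-frequency sub-bands carry local detail and bypass attention here.
    return haar_idwt2(ll_attn, lh, hl, hh)
```

Since attention cost is quadratic in the number of tokens, attending over the LL sub-band alone reduces that term by roughly 16x relative to the full-resolution map, which is the flavor of complexity saving the abstract claims.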


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d278/12389952/ff7f8147d481/sensors-25-05046-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d278/12389952/eaf2c96539a7/sensors-25-05046-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d278/12389952/27fd661ac80c/sensors-25-05046-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d278/12389952/0809ae4a856d/sensors-25-05046-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d278/12389952/76a4716f3165/sensors-25-05046-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d278/12389952/fc761c1364ae/sensors-25-05046-g005.jpg

Similar articles

1. WMSA-WBS: Efficient Wave Multi-Head Self-Attention with Wavelet Bottleneck.
   Sensors (Basel). 2025 Aug 14;25(16):5046. doi: 10.3390/s25165046.
2. Short-Term Memory Impairment.
3. Multi-level channel-spatial attention and light-weight scale-fusion network (MCSLF-Net): multi-level channel-spatial attention and light-weight scale-fusion transformer for 3D brain tumor segmentation.
   Quant Imaging Med Surg. 2025 Jul 1;15(7):6301-6325. doi: 10.21037/qims-2025-354. Epub 2025 Jun 30.
4. DGCFNet: Dual Global Context Fusion Network for remote sensing image semantic segmentation.
   PeerJ Comput Sci. 2025 Mar 27;11:e2786. doi: 10.7717/peerj-cs.2786. eCollection 2025.
5. Prescription of Controlled Substances: Benefits and Risks.
6. 3D-WDA-PMorph: Efficient 3D MRI/TRUS Prostate Registration using Transformer-CNN Network and Wavelet-3D-Depthwise-Attention.
   J Imaging Inform Med. 2025 Jul 25. doi: 10.1007/s10278-025-01615-2.
7. Frequency-spatial feature fusion via a hierarchical framework for diabetic retinopathy classification in low-quality fundus images.
   Biomed Phys Eng Express. 2025 Aug 5;11(5). doi: 10.1088/2057-1976/adf3b5.
8. WSDC-ViT: a novel transformer network for pneumonia image classification based on windows scalable attention and dynamic rectified linear unit convolutional modules.
   Sci Rep. 2025 Jul 30;15(1):27868. doi: 10.1038/s41598-025-12117-0.
9. Video Coding Based on Ladder Subband Recovery and ResGroup Module.
   Entropy (Basel). 2025 Jul 8;27(7):734. doi: 10.3390/e27070734.
10. CDFAN: Cross-Domain Fusion Attention Network for Pansharpening.
   Entropy (Basel). 2025 May 27;27(6):567. doi: 10.3390/e27060567.
