
DGEAHorNet: high-order spatial interaction network with dual cross global efficient attention for medical image segmentation.

Authors

Peng Haixin, An Xinjun, Chen Xue, Chen Zhenxiang

Affiliations

College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, China.

Publication

Phys Eng Sci Med. 2025 Jul 24. doi: 10.1007/s13246-025-01583-5.

DOI: 10.1007/s13246-025-01583-5
PMID: 40707863
Abstract

Medical image segmentation is a complex and challenging task that aims to accurately segment various structures or abnormal regions in medical images. However, obtaining accurate segmentation results is difficult because of the great uncertainty in the shape, location, and scale of the target region. To address these challenges, we propose a high-order spatial interaction framework with dual cross global efficient attention (DGEAHorNet), which employs a neural network architecture based on recursive gated convolution to adequately extract multi-scale contextual information from images. Specifically, a Dual Cross-Attention (DCA) module is added to the skip connections to effectively blend multi-stage encoder features and narrow the semantic gap. In the bottleneck stage, a global channel spatial attention module (GCSAM) is used to extract global image information. To obtain a better feature representation, we feed the output of the GCSAM into a multi-branch dense layer (SENetV2) for excitation. Furthermore, we adopt the Depthwise Over-parameterized Convolutional layer (DO-Conv) to replace the common convolutional layers in the input and output parts of our network, and add Efficient Attention (EA) to diminish computational complexity and enhance the model's performance. To evaluate the effectiveness of the proposed DGEAHorNet, we conduct comprehensive experiments on four publicly available datasets, achieving Dice similarity coefficients of 0.9320, 0.9337, 0.9312 and 0.7799 on ISIC2018, ISIC2017, CVC-ClinicDB and HRF, respectively. Our results show that DGEAHorNet has better performance than advanced methods. The code is publicly available at https://github.com/penghaixin/mymodel.
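The Dice similarity coefficient used for evaluation measures the overlap between a predicted mask A and a ground-truth mask B as DSC = 2|A∩B| / (|A|+|B|). A minimal pure-Python sketch of the metric (the function name and the toy masks are illustrative, not taken from the paper's released code):

```python
def dice_coefficient(pred, target):
    """Dice similarity coefficient for two binary masks.

    pred, target: equal-length sequences of 0/1 labels (e.g. flattened
    segmentation maps). Returns 2*|A ∩ B| / (|A| + |B|); conventionally
    defined as 1.0 when both masks are empty.
    """
    assert len(pred) == len(target)
    intersection = sum(p & t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0
    return 2.0 * intersection / total

# Toy 1-D masks standing in for flattened segmentation maps.
pred   = [1, 1, 0, 0, 1, 0, 1, 1]
target = [1, 0, 0, 0, 1, 1, 1, 1]
print(dice_coefficient(pred, target))  # 0.8
```

A score of 1.0 indicates perfect overlap and 0.0 indicates disjoint masks, which is why the reported values near 0.93 correspond to very close agreement with the ground truth.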

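Efficient Attention, which the abstract says is added to reduce computational complexity, is commonly formulated as ρ_q(Q) · (ρ_k(K)ᵀ V): queries are normalized row-wise over features and keys column-wise over positions, so the O(n²) position-by-position attention map is replaced by an O(n·d²) product through a small d_k×d_v context matrix. A hedged pure-Python sketch of this standard linear-attention formulation (shapes, helper names, and toy inputs are illustrative; the paper's exact variant may differ):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def matmul(a, b):
    # a: p×q, b: q×r nested lists -> p×r
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(col) for col in zip(*a)]

def efficient_attention(Q, K, V):
    """Linear-complexity attention: rho_q(Q) @ (rho_k(K).T @ V).

    Q: n×d_k, K: n×d_k, V: n×d_v. rho_q softmax-normalizes each query
    row over its d_k features; rho_k softmax-normalizes each key
    feature over the n positions (column-wise).
    """
    rq = [softmax(row) for row in Q]                  # n×d_k
    rk_cols = [softmax(col) for col in transpose(K)]  # d_k×n
    ctx = matmul(rk_cols, V)                          # d_k×d_v global context
    return matmul(rq, ctx)                            # n×d_v

Q = [[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]]
K = [[0.3, 0.7], [0.6, 0.4], [0.2, 0.8]]
V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = efficient_attention(Q, K, V)
print(len(out), len(out[0]))  # 3 2
```

Because both normalizations produce convex weights, each output row is a convex combination of the value rows, mirroring what standard softmax attention computes while never materializing the n×n attention map.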

Similar Articles

1
DGEAHorNet: high-order spatial interaction network with dual cross global efficient attention for medical image segmentation.
Phys Eng Sci Med. 2025 Jul 24. doi: 10.1007/s13246-025-01583-5.
2
Structural semantic-guided MR synthesis from PET images via a dual cross-attention mechanism.
Med Phys. 2025 Jul;52(7):e17957. doi: 10.1002/mp.17957.
3
MACCoM: A multiple attention and convolutional cross-mixer framework for detailed 2D biomedical image segmentation.
Comput Biol Med. 2024 Sep;179:108847. doi: 10.1016/j.compbiomed.2024.108847. Epub 2024 Jul 15.
4
Global-Local Feature Fusion Network Based on Nonlinear Spiking Neural Convolutional Model for MRI Brain Tumor Segmentation.
Int J Neural Syst. 2025 Apr 28:2550036. doi: 10.1142/S0129065725500364.
5
VMDU-net: a dual encoder multi-scale fusion network for polyp segmentation with Vision Mamba and Cross-Shape Transformer integration.
Front Artif Intell. 2025 Jun 18;8:1557508. doi: 10.3389/frai.2025.1557508. eCollection 2025.
6
DGCFNet: Dual Global Context Fusion Network for remote sensing image semantic segmentation.
PeerJ Comput Sci. 2025 Mar 27;11:e2786. doi: 10.7717/peerj-cs.2786. eCollection 2025.
7
HMA-Net: a hybrid mixer framework with multihead attention for breast ultrasound image segmentation.
Front Artif Intell. 2025 Jun 18;8:1572433. doi: 10.3389/frai.2025.1572433. eCollection 2025.
8
DASNet: a dual-branch multi-level attention sheep counting network.
Sci Rep. 2025 Jul 2;15(1):23228. doi: 10.1038/s41598-025-97929-w.
9
Lesion boundary detection for skin lesion segmentation based on boundary sensing and CNN-transformer fusion networks.
Artif Intell Med. 2025 Sep;167:103190. doi: 10.1016/j.artmed.2025.103190. Epub 2025 Jun 4.
10
TLTNet: A novel transscale cascade layered transformer network for enhanced retinal blood vessel segmentation.
Comput Biol Med. 2024 Aug;178:108773. doi: 10.1016/j.compbiomed.2024.108773. Epub 2024 Jun 25.
