

DCE-UNet: A Transformer-Based Fully Automated Segmentation Network for Multiple Adolescent Spinal Disorders in X-ray Images.

Authors

Xue Zhilong, Deng Shuangcheng, Yue Yiqun, Chen Chenping, Li Zhiwu, Yang Yang, Sun Shilong, Liu Yubang

Affiliation

Beijing Institute of Petrochemical Technology, No. 19 Qingyuan North Road, Daxing District, Beijing 102617, China.

Publication

Biomed Phys Eng Express. 2025 Aug 21. doi: 10.1088/2057-1976/adfde9.

DOI: 10.1088/2057-1976/adfde9
PMID: 40840472
Abstract

In recent years, spinal X-ray image segmentation has played a vital role in the computer-aided diagnosis of various adolescent spinal disorders. However, due to the complex morphology of lesions and the fact that most existing methods are tailored to single-disease scenarios, current segmentation networks struggle to balance local detail preservation and global structural understanding across different disease types. As a result, they often suffer from limited accuracy, insufficient robustness, and poor adaptability. To address these challenges, we propose a novel fully automated spinal segmentation network, DCE-UNet, which integrates the local modeling strength of convolutional neural networks (CNNs) with the global contextual awareness of Transformers. The network introduces several architectural and feature fusion innovations. Specifically, a lightweight Transformer module is incorporated in the encoder to model high-level semantic features and enhance global contextual understanding. In the decoder, a Rec-Block module combining residual convolution and channel attention is designed to improve feature reconstruction and multi-scale fusion during the upsampling process. Additionally, the downsampling feature extraction path integrates a novel DC-Block that fuses channel and spatial attention mechanisms, enhancing the network's ability to represent complex lesion structures. Experiments conducted on a self-constructed large-scale multi-disease adolescent spinal X-ray dataset demonstrate that DCE-UNet achieves a Dice score of 91.3%, a mean Intersection over Union (mIoU) of 84.1, and a Hausdorff Distance (HD) of 4.007, outperforming several state-of-the-art comparison networks. Validation on real segmentation tasks further confirms that DCE-UNet delivers consistently superior performance across various lesion regions, highlighting its strong adaptability to multiple pathologies and promising potential for clinical application.
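The abstract describes attention-based modules: a Rec-Block combining residual convolution with channel attention, and a DC-Block fusing channel and spatial attention. The paper's exact designs are not reproduced here; the following is a minimal NumPy sketch of generic channel and spatial attention gating of the kind these modules build on (the function names, the squeeze-and-excitation-style channel weighting, and the avg/max pooling choices are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention (sketch).

    feat: (C, H, W) feature map; w1: (C//r, C), w2: (C, C//r) learned weights.
    """
    squeeze = feat.mean(axis=(1, 2))                       # global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # bottleneck MLP + gate -> (C,)
    return feat * excite[:, None, None]                    # rescale each channel

def spatial_attention(feat):
    """Spatial attention gate from pooled channel statistics (sketch)."""
    avg = feat.mean(axis=0)          # (H, W) average over channels
    mx = feat.max(axis=0)            # (H, W) max over channels
    gate = sigmoid(avg + mx)         # simplified: real modules apply a conv here
    return feat * gate[None, :, :]   # rescale each spatial position
```

In a real network both gates would be learned end-to-end and typically applied inside a residual branch, so the module can fall back to identity when attention is unhelpful.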

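DCE-UNet is evaluated with Dice, mean Intersection over Union (mIoU), and Hausdorff Distance (HD). These are standard segmentation metrics; the sketch below gives their textbook definitions for binary masks (this is not the paper's evaluation code, and a brute-force pairwise Hausdorff like this is only practical for small point sets):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice = 2|P∩G| / (|P|+|G|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou(pred, gt):
    """Intersection over Union = |P∩G| / |P∪G| for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def hausdorff(a_pts, b_pts):
    """Symmetric Hausdorff distance between two point sets of shape (N, 2)."""
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Dice and mIoU measure region overlap, while HD measures the worst-case boundary error, which is why papers usually report them together: a segmentation can have high overlap yet a large boundary outlier.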

Similar Articles

1. DCE-UNet: A Transformer-Based Fully Automated Segmentation Network for Multiple Adolescent Spinal Disorders in X-ray Images.
   Biomed Phys Eng Express. 2025 Aug 21. doi: 10.1088/2057-1976/adfde9.
2. A novel recursive transformer-based U-Net architecture for enhanced multi-scale medical image segmentation.
   Comput Biol Med. 2025 Sep;196(Pt A):110658. doi: 10.1016/j.compbiomed.2025.110658. Epub 2025 Jul 6.
3. MSCT-UNET: multi-scale contrastive transformer within U-shaped network for medical image segmentation.
   Phys Med Biol. 2023 Dec 28;69(1). doi: 10.1088/1361-6560/ad135d.
4. A novel image segmentation network with multi-scale and flow-guided attention for early screening of vaginal intraepithelial neoplasia (VAIN).
   Med Phys. 2025 Aug;52(8):e18041. doi: 10.1002/mp.18041.
5. TLTNet: A novel transscale cascade layered transformer network for enhanced retinal blood vessel segmentation.
   Comput Biol Med. 2024 Aug;178:108773. doi: 10.1016/j.compbiomed.2024.108773. Epub 2024 Jun 25.
6. VMKLA-UNet: vision Mamba with KAN linear attention U-Net.
   Sci Rep. 2025 Apr 17;15(1):13258. doi: 10.1038/s41598-025-97397-2.
7. ThreeF-Net: Fine-grained feature fusion network for breast ultrasound image segmentation.
   Comput Biol Med. 2025 Aug;194:110527. doi: 10.1016/j.compbiomed.2025.110527. Epub 2025 Jun 14.
8. MCSLF-Net: multi-level channel-spatial attention and light-weight scale-fusion transformer for 3D brain tumor segmentation.
   Quant Imaging Med Surg. 2025 Jul 1;15(7):6301-6325. doi: 10.21037/qims-2025-354. Epub 2025 Jun 30.
9. CXR-MultiTaskNet: a unified deep learning framework for joint disease localization and classification in chest radiographs.
   Sci Rep. 2025 Aug 31;15(1):32022. doi: 10.1038/s41598-025-16669-z.
10. LFE-UNet: A Lightweight Full-Encoder U-shaped Network for Efficient Semantic Segmentation in Medical Imaging.
    Curr Med Imaging. 2025;21:e15734056370555. doi: 10.2174/0115734056370555250426140155.