

TransConver: transformer and convolution parallel network for developing automatic brain tumor segmentation in MRI images.

Author information

Liang Junjie, Yang Cihui, Zeng Mengjie, Wang Xixi

Affiliations

School of Information Engineering, Nanchang Hangkong University, Nanchang, China.

Publication information

Quant Imaging Med Surg. 2022 Apr;12(4):2397-2415. doi: 10.21037/qims-21-919.

DOI:10.21037/qims-21-919
PMID:35371952
Full text link: https://pmc.ncbi.nlm.nih.gov/articles/PMC8923874/
Abstract

BACKGROUND

Medical image segmentation plays a vital role in computer-aided diagnosis (CAD) systems. Both convolutional neural networks (CNNs), with strong local information extraction capabilities, and transformers, with excellent global representation capabilities, have achieved remarkable performance in medical image segmentation. However, because of the semantic differences between local and global features, combining convolution and transformers effectively remains an important challenge in medical image segmentation.

METHODS

In this paper, we proposed TransConver, a U-shaped segmentation network based on convolution and transformers for automatic and accurate brain tumor segmentation in MRI images. Unlike recently proposed transformer- and convolution-based models, we proposed a parallel module named transformer-convolution inception (TC-inception), which extracts local and global information via convolution blocks and transformer blocks, respectively, and integrates them through a cross-attention fusion with global and local feature (CAFGL) mechanism. Meanwhile, an improved skip-connection structure, named the skip connection with cross-attention fusion (SCCAF) mechanism, can alleviate the semantic differences between encoder features and decoder features for better feature fusion. In addition, we designed 2D-TransConver and 3D-TransConver for 2D and 3D brain tumor segmentation tasks, respectively, and verified the performance and advantages of our model on brain tumor datasets.
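The parallel fusion idea described above can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the authors' implementation: `cross_attention_fuse` is a hypothetical stand-in that loosely mirrors the CAFGL idea (queries from one branch attend over features from the other, with a residual combination), and the two random arrays stand in for the outputs of the convolution and transformer branches.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(local_feat, global_feat):
    """Fuse local (convolution-branch) and global (transformer-branch)
    features: global features act as queries, local features as keys/values,
    loosely mirroring the cross-attention fusion idea (hypothetical sketch)."""
    # local_feat, global_feat: (tokens, channels)
    scale = np.sqrt(local_feat.shape[-1])
    attn = softmax(global_feat @ local_feat.T / scale)  # (tokens, tokens)
    return global_feat + attn @ local_feat              # residual fusion

# Toy feature maps: 16 spatial tokens, 32 channels per branch.
rng = np.random.default_rng(0)
local_feat = rng.standard_normal((16, 32))   # stand-in for conv-branch output
global_feat = rng.standard_normal((16, 32))  # stand-in for transformer-branch output
fused = cross_attention_fuse(local_feat, global_feat)
print(fused.shape)  # (16, 32)
```

The fused output keeps the token/channel shape of its inputs, so a block like this can slot into a U-shaped encoder in place of a plain convolution stage.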

RESULTS

We trained our model on 335 cases from the MICCAI BraTS2019 training dataset and evaluated its performance on 66 cases from MICCAI BraTS2018 and 125 cases from MICCAI BraTS2019. Our TransConver achieved the best average Dice scores of 83.72% and 86.32% on BraTS2019 and BraTS2018, respectively.
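The Dice score reported above measures the overlap between a predicted segmentation mask and the ground truth, 2|A∩B| / (|A| + |B|). A minimal illustration on toy binary masks (not tied to the paper's evaluation code):

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

# Toy 2x3 masks: 2 overlapping voxels, 3 predicted, 3 ground-truth.
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_score(pred, target), 4))  # 0.6667
```

In BraTS evaluations the score is typically computed per tumor sub-region and averaged across cases, which is how the percentages above should be read.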

CONCLUSIONS

We proposed a transformer and convolution parallel network named TransConver for brain tumor segmentation. The TC-Inception module effectively extracts global information while retaining local details. The experimental results demonstrated that good segmentation requires the model to extract local fine-grained details and global semantic information simultaneously, and our TransConver effectively improves the accuracy of brain tumor segmentation.


Similar articles

1
TransConver: transformer and convolution parallel network for developing automatic brain tumor segmentation in MRI images.
Quant Imaging Med Surg. 2022 Apr;12(4):2397-2415. doi: 10.21037/qims-21-919.
2
Dual encoder network with transformer-CNN for multi-organ segmentation.
Med Biol Eng Comput. 2023 Mar;61(3):661-671. doi: 10.1007/s11517-022-02723-9. Epub 2022 Dec 29.
3
[Fully Automatic Glioma Segmentation Algorithm of Magnetic Resonance Imaging Based on 3D-UNet With More Global Contextual Feature Extraction: An Improvement on Insufficient Extraction of Global Features].
Sichuan Da Xue Xue Bao Yi Xue Ban. 2024 Mar 20;55(2):447-454. doi: 10.12182/20240360208.
4
VSmTrans: A hybrid paradigm integrating self-attention and convolution for 3D medical image segmentation.
Med Image Anal. 2024 Dec;98:103295. doi: 10.1016/j.media.2024.103295. Epub 2024 Aug 24.
5
SwinBTS: A Method for 3D Multimodal Brain Tumor Segmentation Using Swin Transformer.
Brain Sci. 2022 Jun 17;12(6):797. doi: 10.3390/brainsci12060797.
6
Swin Unet3D: a three-dimensional medical image segmentation network combining vision transformer and convolution.
BMC Med Inform Decis Mak. 2023 Feb 14;23(1):33. doi: 10.1186/s12911-023-02129-z.
7
DECTNet: Dual Encoder Network combined convolution and Transformer architecture for medical image segmentation.
PLoS One. 2024 Apr 4;19(4):e0301019. doi: 10.1371/journal.pone.0301019. eCollection 2024.
8
A new architecture combining convolutional and transformer-based networks for automatic 3D multi-organ segmentation on CT images.
Med Phys. 2023 Nov;50(11):6990-7002. doi: 10.1002/mp.16750. Epub 2023 Sep 22.
9
Transformer guided self-adaptive network for multi-scale skin lesion image segmentation.
Comput Biol Med. 2024 Feb;169:107846. doi: 10.1016/j.compbiomed.2023.107846. Epub 2023 Dec 23.
10
Efficient brain tumor segmentation using Swin transformer and enhanced local self-attention.
Int J Comput Assist Radiol Surg. 2024 Feb;19(2):273-281. doi: 10.1007/s11548-023-03024-8. Epub 2023 Oct 5.

Cited by

1
A Review on Deep Learning Methods for Glioma Segmentation, Limitations, and Future Perspectives.
J Imaging. 2025 Aug 11;11(8):269. doi: 10.3390/jimaging11080269.
2
LCFC-Laptop: A Benchmark Dataset for Detecting Surface Defects in Consumer Electronics.
Sensors (Basel). 2025 Jul 22;25(15):4535. doi: 10.3390/s25154535.
3
Habitat radiomics and transformer fusion model to evaluate treatment effectiveness of cavitary MDR-TB patients.
iScience. 2025 May 23;28(6):112743. doi: 10.1016/j.isci.2025.112743. eCollection 2025 Jun 20.
4
Transformers for Neuroimage Segmentation: Scoping Review.
J Med Internet Res. 2025 Jan 29;27:e57723. doi: 10.2196/57723.
5
TAC-UNet: transformer-assisted convolutional neural network for medical image segmentation.
Quant Imaging Med Surg. 2024 Dec 5;14(12):8824-8839. doi: 10.21037/qims-24-1229. Epub 2024 Nov 5.
6
Dilated multi-scale residual attention (DMRA) U-Net: three-dimensional (3D) dilated multi-scale residual attention U-Net for brain tumor segmentation.
Quant Imaging Med Surg. 2024 Oct 1;14(10):7249-7264. doi: 10.21037/qims-24-779. Epub 2024 Sep 19.
7
Multimodal data integration for oncology in the era of deep neural networks: a review.
Front Artif Intell. 2024 Jul 25;7:1408843. doi: 10.3389/frai.2024.1408843. eCollection 2024.
8
Recent deep learning-based brain tumor segmentation models using multi-modality magnetic resonance imaging: a prospective survey.
Front Bioeng Biotechnol. 2024 Jul 22;12:1392807. doi: 10.3389/fbioe.2024.1392807. eCollection 2024.
9
Next-Gen Medical Imaging: U-Net Evolution and the Rise of Transformers.
Sensors (Basel). 2024 Jul 18;24(14):4668. doi: 10.3390/s24144668.
10
Enhancing brain tumor segmentation in MRI images using the IC-net algorithm framework.
Sci Rep. 2024 Jul 8;14(1):15660. doi: 10.1038/s41598-024-66314-4.

References

1
Beyond Self-Attention: External Attention Using Two Linear Layers for Visual Tasks.
IEEE Trans Pattern Anal Mach Intell. 2023 May;45(5):5436-5447. doi: 10.1109/TPAMI.2022.3211006. Epub 2023 Apr 3.
2
Automatic segmentation of the left ventricle in echocardiographic images using convolutional neural networks.
Quant Imaging Med Surg. 2021 May;11(5):1763-1781. doi: 10.21037/qims-20-745.
3
UNet++: A Nested U-Net Architecture for Medical Image Segmentation.
Deep Learn Med Image Anal Multimodal Learn Clin Decis Support (2018). 2018 Sep;11045:3-11. doi: 10.1007/978-3-030-00889-5_1. Epub 2018 Sep 20.
4
Dense-UNet: a novel multiphoton cellular image segmentation model based on a convolutional neural network.
Quant Imaging Med Surg. 2020 Jun;10(6):1275-1285. doi: 10.21037/qims-19-1090.
5
Cross-Modal Attention With Semantic Consistence for Image-Text Matching.
IEEE Trans Neural Netw Learn Syst. 2020 Dec;31(12):5412-5425. doi: 10.1109/TNNLS.2020.2967597. Epub 2020 Nov 30.
6
H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation From CT Volumes.
IEEE Trans Med Imaging. 2018 Dec;37(12):2663-2674. doi: 10.1109/TMI.2018.2845918. Epub 2018 Jun 11.
7
Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features.
Sci Data. 2017 Sep 5;4:170117. doi: 10.1038/sdata.2017.117.
8
DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.
IEEE Trans Pattern Anal Mach Intell. 2018 Apr;40(4):834-848. doi: 10.1109/TPAMI.2017.2699184. Epub 2017 Apr 27.
9
Fully Convolutional Networks for Semantic Segmentation.
IEEE Trans Pattern Anal Mach Intell. 2017 Apr;39(4):640-651. doi: 10.1109/TPAMI.2016.2572683. Epub 2016 May 24.
10
The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).
IEEE Trans Med Imaging. 2015 Oct;34(10):1993-2024. doi: 10.1109/TMI.2014.2377694. Epub 2014 Dec 4.