

Dual vision Transformer-DSUNET with feature fusion for brain tumor segmentation.

Authors

Zakariah Mohammed, Al-Razgan Muna, Alfakih Taha

Affiliations

Department of Computer Science and Engineering, College of Applied Studies and Community Service, King Saud University, P.O. Box 22459, Riyadh, 11495, Saudi Arabia.

Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11345, Saudi Arabia.

Publication

Heliyon. 2024 Sep 14;10(18):e37804. doi: 10.1016/j.heliyon.2024.e37804. eCollection 2024 Sep 30.

DOI: 10.1016/j.heliyon.2024.e37804
PMID: 39323802
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11422567/
Abstract

Brain tumors are one of the leading causes of cancer death; early screening is the best strategy for diagnosing and treating them. Magnetic Resonance Imaging (MRI) is extensively utilized for brain tumor diagnosis; nevertheless, achieving improved accuracy and performance remains a critical challenge for most previously reported automated medical diagnostics. The study introduces the Dual Vision Transformer-DSUNET model, which incorporates feature fusion techniques to provide precise and efficient differentiation between brain tumors and other brain regions by leveraging multi-modal MRI data. The impetus for this study arises from the necessity of automating brain tumor segmentation in medical imaging, a critical component of diagnosis and therapy planning. The BRATS 2020 dataset, an extensively used benchmark for brain tumor segmentation, is employed to tackle this issue. This dataset encompasses multi-modal MRI images, including T1-weighted, T2-weighted, T1Gd (contrast-enhanced), and FLAIR modalities. The proposed model incorporates the dual vision idea to comprehensively capture the heterogeneous properties of brain tumors across the imaging modalities. Moreover, feature fusion techniques are implemented to improve the amalgamation of data from multiple modalities, enhancing the accuracy and dependability of tumor segmentation. The model's performance is evaluated using the Dice Coefficient, a prevalent metric for quantifying segmentation accuracy. The experimental results exhibit remarkable performance, with Dice Coefficient values of 91.47% for enhancing tumor, 92.38% for tumor core, and 90.88% for edema; the cumulative Dice score across all classes is 91.29%. In addition, the model achieves an accuracy of roughly 99.93%, which underscores its robustness and efficacy in segmenting brain tumors. Experimental findings demonstrate the integrity of the suggested architecture, which improves detection accuracy across many brain diseases.
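The evaluation above is stated in terms of the Dice Coefficient per tumor sub-region. As a reference, here is a minimal NumPy sketch of that metric; the function name, the toy masks, and the epsilon smoothing term are illustrative, not taken from the paper.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for a pair of binary masks.

    eps avoids division by zero when both masks are empty.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# BraTS-style evaluation computes one score per sub-region
# (enhancing tumor, tumor core, edema), each as a binary mask.
pred = np.array([[1, 1, 0],
                 [0, 1, 0],
                 [0, 0, 0]])
target = np.array([[1, 0, 0],
                   [0, 1, 1],
                   [0, 0, 0]])
print(dice_coefficient(pred, target))  # 2*2 / (3+3) ≈ 0.667
```

A cumulative Dice score such as the 91.29% reported here is then an average of the per-class scores.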


[Figures gr1–gr23 are available in the PMC full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11422567/]

Similar Articles

1. Dual vision Transformer-DSUNET with feature fusion for brain tumor segmentation.
   Heliyon. 2024 Sep 14;10(18):e37804. doi: 10.1016/j.heliyon.2024.e37804. eCollection 2024 Sep 30.
2. [Fully Automatic Glioma Segmentation Algorithm of Magnetic Resonance Imaging Based on 3D-UNet With More Global Contextual Feature Extraction: An Improvement on Insufficient Extraction of Global Features].
   Sichuan Da Xue Xue Bao Yi Xue Ban. 2024 Mar 20;55(2):447-454. doi: 10.12182/20240360208.
3. Scalable Swin Transformer network for brain tumor segmentation from incomplete MRI modalities.
   Artif Intell Med. 2024 Mar;149:102788. doi: 10.1016/j.artmed.2024.102788. Epub 2024 Feb 2.
4. SwinCross: Cross-modal Swin transformer for head-and-neck tumor segmentation in PET/CT images.
   Med Phys. 2024 Mar;51(3):2096-2107. doi: 10.1002/mp.16703. Epub 2023 Sep 30.
5. MSFR-Net: Multi-modality and single-modality feature recalibration network for brain tumor segmentation.
   Med Phys. 2023 Apr;50(4):2249-2262. doi: 10.1002/mp.15933. Epub 2022 Aug 23.
6. A 3D hierarchical cross-modality interaction network using transformers and convolutions for brain glioma segmentation in MR images.
   Med Phys. 2024 Nov;51(11):8371-8389. doi: 10.1002/mp.17354. Epub 2024 Aug 13.
7. Joint learning-based feature reconstruction and enhanced network for incomplete multi-modal brain tumor segmentation.
   Comput Biol Med. 2023 Sep;163:107234. doi: 10.1016/j.compbiomed.2023.107234. Epub 2023 Jul 4.
8. Znet: Deep Learning Approach for 2D MRI Brain Tumor Segmentation.
   IEEE J Transl Eng Health Med. 2022 May 23;10:1800508. doi: 10.1109/JTEHM.2022.3176737. eCollection 2022.
9. STC-UNet: renal tumor segmentation based on enhanced feature extraction at different network levels.
   BMC Med Imaging. 2024 Jul 19;24(1):179. doi: 10.1186/s12880-024-01359-5.
10. RFS+: A Clinically Adaptable and Computationally Efficient Strategy for Enhanced Brain Tumor Segmentation.
   Cancers (Basel). 2023 Nov 28;15(23):5620. doi: 10.3390/cancers15235620.

Cited By

1. ResSAXU-Net for multimodal brain tumor segmentation from brain MRI.
   Sci Rep. 2025 Jul 7;15(1):24179. doi: 10.1038/s41598-025-09539-1.
2. Dual-Stream Contrastive Latent Learning Generative Adversarial Network for Brain Image Synthesis and Tumor Classification.
   J Imaging. 2025 Mar 28;11(4):101. doi: 10.3390/jimaging11040101.

References

1. Sparse Dynamic Volume TransUNet with multi-level edge fusion for brain tumor segmentation.
   Comput Biol Med. 2024 Apr;172:108284. doi: 10.1016/j.compbiomed.2024.108284. Epub 2024 Mar 15.
2. Adaptive Cross-Feature Fusion Network With Inconsistency Guidance for Multi-Modal Brain Tumor Segmentation.
   IEEE J Biomed Health Inform. 2025 May;29(5):3148-3158. doi: 10.1109/JBHI.2023.3347556. Epub 2025 May 6.
3. Brain tumor segmentation based on optimized convolutional neural network and improved chimp optimization algorithm.
   Comput Biol Med. 2024 Jan;168:107723. doi: 10.1016/j.compbiomed.2023.107723. Epub 2023 Nov 19.
4. Advancing Brain Tumor Classification through Fine-Tuned Vision Transformers: A Comparative Study of Pre-Trained Models.
   Sensors (Basel). 2023 Sep 15;23(18):7913. doi: 10.3390/s23187913.
5. Dual Deep CNN for Tumor Brain Classification.
   Diagnostics (Basel). 2023 Jun 13;13(12):2050. doi: 10.3390/diagnostics13122050.
6. Focal cross transformer: multi-view brain tumor segmentation model based on cross window and focal self-attention.
   Front Neurosci. 2023 May 12;17:1192867. doi: 10.3389/fnins.2023.1192867. eCollection 2023.
7. Multimodal Stereotactic Brain Tumor Segmentation Using 3D-Znet.
   Bioengineering (Basel). 2023 May 11;10(5):581. doi: 10.3390/bioengineering10050581.
8. CKD-TransBTS: Clinical Knowledge-Driven Hybrid Transformer With Modality-Correlated Cross-Attention for Brain Tumor Segmentation.
   IEEE Trans Med Imaging. 2023 Aug;42(8):2451-2461. doi: 10.1109/TMI.2023.3250474. Epub 2023 Aug 1.
9. Selective Deeply Supervised Multi-Scale Attention Network for Brain Tumor Segmentation.
   Sensors (Basel). 2023 Feb 20;23(4):2346. doi: 10.3390/s23042346.
10. PatchResNet: Multiple Patch Division-Based Deep Feature Fusion Framework for Brain Tumor Classification Using MRI Images.
   J Digit Imaging. 2023 Jun;36(3):973-987. doi: 10.1007/s10278-023-00789-x. Epub 2023 Feb 16.