Zakariah Mohammed, Al-Razgan Muna, Alfakih Taha
Department of Computer Science and Engineering, College of Applied Studies and Community Service, King Saud University, P.O. Box 22459, Riyadh, 11495, Saudi Arabia.
Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11345, Saudi Arabia.
Heliyon. 2024 Sep 14;10(18):e37804. doi: 10.1016/j.heliyon.2024.e37804. eCollection 2024 Sep 30.
Brain tumors are among the leading causes of cancer death, and early screening is the best strategy for diagnosing and treating them. Magnetic Resonance Imaging (MRI) is extensively used for brain tumor diagnosis; nevertheless, achieving high accuracy and performance remains a critical challenge for most previously reported automated medical diagnostic systems. This study introduces the Dual Vision Transformer-DSUNET model, which incorporates feature fusion techniques to differentiate brain tumors from other brain regions precisely and efficiently by leveraging multi-modal MRI data. The impetus for this study is the need to automate brain tumor segmentation in medical imaging, a critical step in diagnosis and treatment planning. The BRATS 2020 dataset, widely used for brain tumor segmentation, is employed to address this problem. It comprises multi-modal MRI scans, including T1-weighted, T2-weighted, T1Gd (contrast-enhanced), and FLAIR modalities. The proposed model adopts a dual-vision design to comprehensively capture the heterogeneous properties of brain tumors across imaging modalities. Moreover, feature fusion techniques are applied to integrate information from the different modalities, improving the accuracy and reliability of tumor segmentation. The model's performance is evaluated with the Dice coefficient, a widely used metric for quantifying segmentation accuracy. The experimental results are strong, with Dice coefficient values of 91.47 % for enhancing tumor, 92.38 % for tumor core, and 90.88 % for edema, and a cumulative Dice score of 91.29 % across all classes. In addition, the model achieves an accuracy of roughly 99.93 %, underscoring its robustness and efficacy in segmenting brain tumors. These experimental findings demonstrate the soundness of the proposed architecture and its potential to improve detection accuracy for a range of brain diseases.
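Since the abstract reports per-class Dice coefficients, the following minimal Python sketch illustrates how such scores can be computed from predicted and ground-truth segmentation masks. The NumPy-based implementation, the function names, and the integer label encoding for the three tumor sub-regions are illustrative assumptions, not the authors' code.

import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    # Dice = 2*|A ∩ B| / (|A| + |B|), computed on binary masks.
    pred_mask = pred_mask.astype(bool)
    true_mask = true_mask.astype(bool)
    intersection = np.logical_and(pred_mask, true_mask).sum()
    return (2.0 * intersection + eps) / (pred_mask.sum() + true_mask.sum() + eps)

def per_class_dice(pred_labels, true_labels, class_ids=(1, 2, 3)):
    # Assumed label encoding: 1 = enhancing tumor, 2 = tumor core, 3 = edema.
    return {c: dice_coefficient(pred_labels == c, true_labels == c) for c in class_ids}

# Example usage with toy 3-D label volumes:
# pred = np.random.randint(0, 4, size=(128, 128, 128))
# true = np.random.randint(0, 4, size=(128, 128, 128))
# scores = per_class_dice(pred, true)
# mean_dice = float(np.mean(list(scores.values())))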