Zhu Hongjun, Huang Jiaohang, Chen Kuo, Ying Xuehui, Qian Ying
School of Software Engineering, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China; Chongqing Engineering Research Center of Software Quality Assurance, Testing and Assessment, Chongqing, 400065, China; Key Laboratory of Big Data Intelligent Computing, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China.
School of Software Engineering, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China.
Comput Biol Med. 2025 Jun;191:110148. doi: 10.1016/j.compbiomed.2025.110148. Epub 2025 Apr 10.
Brain Tumor Segmentation (BraTS) plays a critical role in clinical diagnosis, treatment planning, and monitoring the progression of brain tumors. However, due to the variability in tumor appearance, size, and intensity across different MRI modalities, automated segmentation remains a challenging task. In this study, we propose a novel Transformer-based framework, multiPI-TransBTS, which integrates multi-physical information to enhance segmentation accuracy. The model leverages spatial information, semantic information, and multi-modal imaging data, addressing the inherent heterogeneity in brain tumor characteristics. The multiPI-TransBTS framework consists of an encoder, an Adaptive Feature Fusion (AFF) module, and a multi-source, multi-scale feature decoder. The encoder incorporates a multi-branch architecture to separately extract modality-specific features from different MRI sequences. The AFF module fuses information from multiple sources using channel-wise and element-wise attention, ensuring effective feature recalibration. The decoder combines both common and task-specific features through a Task-Specific Feature Introduction (TSFI) strategy, producing accurate segmentation outputs for Whole Tumor (WT), Tumor Core (TC), and Enhancing Tumor (ET) regions. Comprehensive evaluations on the BraTS2019 and BraTS2020 datasets demonstrate the superiority of multiPI-TransBTS over state-of-the-art methods. The model consistently achieves higher Dice coefficients and Sensitivity scores and lower Hausdorff distances, highlighting its effectiveness in addressing the BraTS challenges. Our results also indicate the need for further exploration of the balance between precision and recall in the ET segmentation task. The proposed framework represents a significant advancement in BraTS, with potential implications for improving clinical outcomes for brain tumor patients.
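To make the abstract's description of the AFF module more concrete, the following is a minimal PyTorch sketch of a fusion block that applies channel-wise attention followed by element-wise attention to features from two encoder branches. The class name `AdaptiveFeatureFusion`, the 3D tensor shapes, the reduction ratio, and the specific layer choices are illustrative assumptions; the paper's actual AFF design may differ.

```python
# Hypothetical sketch of an AFF-style fusion block: channel-wise attention
# (squeeze-and-excitation style gate) followed by element-wise (per-voxel)
# attention on the fused features. Shapes, layers, and the reduction ratio
# are assumptions for illustration, not the paper's implementation.
import torch
import torch.nn as nn


class AdaptiveFeatureFusion(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel-wise attention: global pooling -> bottleneck MLP -> sigmoid gate
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(2 * channels, 2 * channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(2 * channels // reduction, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Element-wise attention: 1x1x1 convolution producing a per-voxel gate
        self.element_gate = nn.Sequential(
            nn.Conv3d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Projection back to the original channel width
        self.project = nn.Conv3d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a, feat_b: (B, C, D, H, W) features from two sources,
        # e.g. two modality-specific encoder branches.
        x = torch.cat([feat_a, feat_b], dim=1)   # (B, 2C, D, H, W)
        x = x * self.channel_gate(x)             # recalibrate channels
        fused = self.project(x)                  # (B, C, D, H, W)
        return fused * self.element_gate(x)      # recalibrate each voxel


if __name__ == "__main__":
    aff = AdaptiveFeatureFusion(channels=32)
    a = torch.randn(1, 32, 16, 16, 16)
    b = torch.randn(1, 32, 16, 16, 16)
    print(aff(a, b).shape)  # torch.Size([1, 32, 16, 16, 16])
```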