Guo Shunchao, Chen Qijian, Wang Li, Wang Lihui, Zhu Yuemin
Engineering Research Center of Text Computing & Cognitive Intelligence, Ministry of Education, Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, People's Republic of China.
Key Laboratory of Complex Systems and Intelligent Optimization of Guizhou Province, Institute of Big Data Application and Artificial Intelligence, School of Computer and Information, Qiannan Normal University for Nationalities, Duyun, People's Republic of China.
Phys Med Biol. 2023 Dec 11;68(24). doi: 10.1088/1361-6560/ad0c8d.
Objective. Both local and global context information provide crucial semantic features for brain tumor segmentation, yet almost all CNN-based methods cannot learn global spatial dependencies well because of the limitations of convolution operations. The purpose of this paper is to build a new framework that makes full use of local and global features from multimodal MR images to improve the performance of brain tumor segmentation. Approach. A new automated segmentation method named nnUnetFormer was proposed based on nnUnet and transformer. It fuses transformer modules into the deeper layers of the nnUnet framework to efficiently obtain both local and global features of lesion regions from multimodal MR images. Main results. We evaluated our method on the BraTS 2021 dataset by 5-fold cross-validation and achieved excellent performance, with Dice similarity coefficient (DSC) 0.936, 0.921, and 0.872, and 95th percentile of Hausdorff distance (HD95) 3.96, 4.57, and 10.45 for the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) regions, respectively, outperforming recent state-of-the-art methods in terms of both average DSC and average HD95. Besides, ablation experiments showed that fusing the transformer into our modified nnUnet framework improves the performance of brain tumor segmentation, especially for the TC region. Moreover, to validate the generalization capacity of our method, we further conducted experiments on the FeTS 2021 dataset and achieved satisfactory segmentation performance on 11 unseen institutions, with DSC 0.912, 0.872, and 0.759, and HD95 6.16, 8.81, and 38.50 for the WT, TC, and ET regions, respectively. Significance. Extensive qualitative and quantitative experimental results demonstrated that the proposed method has competitive performance against state-of-the-art methods, indicating its interest for clinical applications.
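To make the architectural idea concrete, the following is a minimal sketch of how transformer modules can be fused into the deep stage of a 3D U-Net-style encoder, in the spirit of nnUnetFormer. It is not the authors' released implementation: the module name, layer counts, channel sizes, and the omission of positional encodings and of the surrounding nnUnet encoder/decoder are all illustrative assumptions.

```python
# Sketch (PyTorch): self-attention over the flattened voxels of a deep CNN
# feature map, so global spatial dependencies complement local convolutional
# features. Names and hyperparameters are assumptions, not the paper's code.
import torch
import torch.nn as nn


class TransformerBottleneck(nn.Module):
    """Hypothetical transformer block applied to deep (small-grid) features."""

    def __init__(self, channels: int, num_heads: int = 8, num_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads,
            dim_feedforward=4 * channels, batch_first=True,
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, D, H, W) deep features from the CNN encoder
        b, c, d, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, D*H*W, C) voxel tokens
        tokens = self.encoder(tokens)           # global self-attention
        return tokens.transpose(1, 2).reshape(b, c, d, h, w)


if __name__ == "__main__":
    # Deep-stage features of a multimodal MR volume after several poolings.
    feats = torch.randn(1, 320, 4, 4, 4)
    fused = TransformerBottleneck(channels=320)(feats)
    print(fused.shape)  # torch.Size([1, 320, 4, 4, 4])
```

Attention is applied only at the deepest stages, where the spatial grid is small, which keeps the quadratic cost of self-attention manageable for 3D volumes; positional information and skip connections back to the decoder would be handled by the full network, which is omitted here.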