

[Fully Automatic Glioma Segmentation Algorithm of Magnetic Resonance Imaging Based on 3D-UNet With More Global Contextual Feature Extraction: An Improvement on Insufficient Extraction of Global Features].

Author Information

Tian Hengyi, Wang Yu, Ji Yarong, Rahman Md Mostafizur

Affiliation Information

School of Artificial Intelligence, Beijing Technology and Business University, Beijing 100048, China.

Publication Information

Sichuan Da Xue Xue Bao Yi Xue Ban. 2024 Mar 20;55(2):447-454. doi: 10.12182/20240360208.

Abstract

OBJECTIVE

The fully automatic segmentation of gliomas and their subregions is fundamental to computer-aided clinical diagnosis of tumors. When segmenting brain magnetic resonance imaging (MRI), convolutional neural networks built from small convolutional kernels have a limited receptive field: they capture only local features and integrate global features poorly, which leads to insufficient segmentation accuracy. This study aims to use dilated convolution to address the inadequate global feature extraction of 3D-UNet.
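Dilated convolution, mentioned above, enlarges a network's receptive field without increasing the kernel size or the per-layer parameter count. The following PyTorch sketch illustrates the idea only; the channel count and the dilation rates (1, 2, 4), which together cover a 15×15×15 voxel neighbourhood per output voxel, are illustrative assumptions rather than the paper's settings.

```python
# A minimal sketch (not from the paper) of how dilated 3D convolutions
# enlarge the receptive field while each layer keeps a 3x3x3 kernel.
# Channel count and dilation rates are illustrative assumptions.
import torch
import torch.nn as nn

class DilatedContextBlock(nn.Module):
    """Stack of 3x3x3 convolutions with growing dilation rates (1, 2, 4)."""
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        layers = []
        for d in dilations:
            layers += [
                # padding = dilation keeps the spatial size unchanged
                nn.Conv3d(channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm3d(channels),
                nn.ReLU(inplace=True),
            ]
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

if __name__ == "__main__":
    x = torch.randn(1, 32, 32, 32, 32)   # (N, C, D, H, W)
    y = DilatedContextBlock(32)(x)
    print(y.shape)                        # torch.Size([1, 32, 32, 32, 32])
```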

METHODS

1) Algorithm construction: This paper proposed 3DGE-UNet, a 3D-UNet model equipped with three pathways for more global contextual feature extraction. Using the publicly available dataset of the 2019 Brain Tumor Segmentation Challenge (BraTS 2019, 335 patient cases), a global contextual feature extraction (GE) module was designed and integrated at the first, second, and third skip connections of the 3D-UNet network. The module fully extracts global features at different scales from the images; the extracted global features are then overlaid on the upsampled feature maps to enlarge the model's receptive field and achieve deep fusion of features across scales, enabling end-to-end automatic segmentation of brain tumors. 2) Algorithm validation: The image data came from the BraTS 2019 dataset, which includes preoperative MRI of 335 patients in four modalities (T1, T1ce, T2, and FLAIR) together with tumor annotations made by physicians. The dataset was divided into training, validation, and test sets at an 8:1:1 ratio, with the physician-labelled tumor images serving as the gold standard. The algorithm's segmentation performance on the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) was then evaluated on the test set using the Dice coefficient (overall effectiveness), sensitivity (detection rate of lesion areas), and the 95% Hausdorff distance (accuracy of tumor boundary segmentation). Both the 3D-UNet model without the GE module and the 3DGE-UNet model with the GE module were tested to internally validate the effectiveness of the GE module. In addition, the same indicators were evaluated for the 3DGE-UNet model, ResUNet, UNet++, nnUNet, and UNETR, and the convergence of these five models was compared to externally validate the effectiveness of the 3DGE-UNet model.
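The abstract does not specify the internal design of the GE module, so the PyTorch sketch below is only an illustration of how such a module could be attached to a 3D-UNet skip connection. It assumes parallel dilated convolutions for multi-scale context and an additive "overlay" onto the upsampled decoder features; the class names, channel counts, and fusion choice are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch of a global contextual feature extraction (GE)
# module on a 3D-UNet skip connection. The dilated-convolution branches
# and additive fusion are assumptions for illustration only.
import torch
import torch.nn as nn

class GEModule(nn.Module):
    """Parallel dilated 3x3x3 convolutions aggregate multi-scale context
    from an encoder skip feature; a 1x1x1 convolution fuses the branches."""
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv3d(channels, channels, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm3d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.fuse = nn.Conv3d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, skip: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(skip) for b in self.branches], dim=1))

class DecoderStage(nn.Module):
    """One decoder stage: upsample, overlay (add) the GE-processed skip
    feature, then refine with a plain convolution block."""
    def __init__(self, in_ch: int, skip_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose3d(in_ch, skip_ch, kernel_size=2, stride=2)
        self.ge = GEModule(skip_ch)
        self.conv = nn.Sequential(
            nn.Conv3d(skip_ch, skip_ch, 3, padding=1, bias=False),
            nn.BatchNorm3d(skip_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        up = self.up(x)
        return self.conv(up + self.ge(skip))  # element-wise overlay of global context

if __name__ == "__main__":
    deep = torch.randn(1, 128, 8, 8, 8)    # feature map from the deeper stage
    skip = torch.randn(1, 64, 16, 16, 16)  # encoder skip feature
    out = DecoderStage(128, 64)(deep, skip)
    print(out.shape)                        # torch.Size([1, 64, 16, 16, 16])
```

Following the description above, one such stage would sit at each of the first three skip connections of the network.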

RESULTS

1) In the internal validation, the 3DGE-UNet model achieved mean Dice values of 91.47%, 87.14%, and 83.35% for segmenting the WT, TC, and ET regions in the test set, respectively, the best results in the comprehensive evaluation. These scores were superior to the corresponding scores of the traditional 3D-UNet model (89.79%, 85.13%, and 80.90%), indicating a significant improvement in segmentation accuracy across all three regions (P<0.05). Compared with the 3D-UNet model, the 3DGE-UNet model also showed higher sensitivity for ET (86.46% vs. 80.77%, P<0.05), indicating better detection of lesion areas: it tended to identify and capture the positive regions more comprehensively, thereby reducing the likelihood of missed diagnoses. The 3DGE-UNet model likewise performed well in segmenting the edges of WT, with a mean 95% Hausdorff distance better than that of the 3D-UNet model (8.17 mm vs. 13.61 mm, P<0.05), whereas its boundary performance for TC (8.73 mm vs. 7.47 mm) and ET (6.21 mm vs. 5.45 mm) was similar to that of the 3D-UNet model. 2) In the external validation, the other four algorithms outperformed the 3DGE-UNet model only in the mean Dice for TC (87.25%), the mean sensitivity for WT (94.59%), the mean sensitivity for TC (86.98%), and the mean 95% Hausdorff distance for ET (5.37 mm), and these differences were not statistically significant (P>0.05). The 3DGE-UNet model converged rapidly during training, faster than the other four models.
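For reference, the three metrics reported above (Dice, sensitivity, and the 95% Hausdorff distance) can be computed from binary segmentation masks as in the minimal NumPy/SciPy sketch below; isotropic 1 mm voxel spacing and simple voxel-boundary surface extraction are assumed, so this is not the authors' evaluation code.

```python
# A minimal sketch of the Dice coefficient, sensitivity, and 95%
# Hausdorff distance for binary masks; assumes isotropic 1 mm voxels.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def sensitivity(pred: np.ndarray, gt: np.ndarray) -> float:
    tp = np.logical_and(pred, gt).sum()
    return tp / gt.sum()  # fraction of the lesion that was detected

def hd95(pred: np.ndarray, gt: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between mask surfaces."""
    surf = lambda m: np.argwhere(m & ~binary_erosion(m)).astype(float)  # boundary voxels
    p, g = surf(pred), surf(gt)
    d = cdist(p, g)  # pairwise Euclidean distances between surface voxels
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))

if __name__ == "__main__":
    gt = np.zeros((32, 32, 32), dtype=bool); gt[8:20, 8:20, 8:20] = True
    pred = np.zeros_like(gt);                pred[10:22, 8:20, 8:20] = True
    print(dice(pred, gt), sensitivity(pred, gt), hd95(pred, gt))
```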

CONCLUSION

The 3DGE-UNet model can effectively extract and fuse feature information at different scales, improving the accuracy of brain tumor segmentation.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b340/11026905/4870dd5dce59/scdxxbyxb-55-2-447-1.jpg
