Ottom Mohammad Ashraf, Abdul Rahman Hanif, Alazzam Iyad M, Dinov Ivo D
Statistics Online Computational Resource, University of Michigan, Ann Arbor, MI 48104, USA.
Department of Information Systems, Yarmouk University, Irbid 21163, Jordan.
Bioengineering (Basel). 2023 May 11;10(5):581. doi: 10.3390/bioengineering10050581.
Stereotactic brain tumor segmentation based on 3D neuroimaging data is a challenging task due to the complexity of the brain architecture, the extreme heterogeneity of tumor malformations, and the wide variability of intensity signal and noise distributions. Early tumor diagnosis can help medical professionals select optimal treatment plans that can potentially save lives. Artificial intelligence (AI) has previously been used for automated tumor diagnostic and segmentation models. However, the model development, validation, and reproducibility processes are challenging. Often, cumulative efforts are required to produce a fully automated and reliable computer-aided diagnostic system for tumor segmentation. This study proposes an enhanced deep neural network approach, the 3D-Znet model, based on the variational autoencoder-autodecoder Znet method, for segmenting 3D magnetic resonance (MR) volumes. The 3D-Znet artificial neural network architecture relies on fully dense connections to enable the reuse of features on multiple levels and thereby improve model performance. It consists of four encoders and four decoders, along with the initial input and the final output blocks. Each encoder-decoder block in the network includes double 3D convolutional layers, 3D batch normalization, and an activation function. These are followed by size normalization between inputs and outputs and network concatenation across the encoding and decoding branches. The proposed deep convolutional neural network model was trained and validated using a multimodal stereotactic neuroimaging dataset (BraTS2020) that includes multimodal tumor masks. Evaluation of the pretrained model yielded the following Dice coefficient scores: Whole Tumor (WT) = 0.91, Tumor Core (TC) = 0.85, and Enhanced Tumor (ET) = 0.86. The performance of the proposed 3D-Znet method is comparable to that of other state-of-the-art methods.
Our protocol demonstrates the importance of data augmentation to avoid overfitting and enhance model performance.
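The building blocks described above (double 3D convolutions with batch normalization and an activation function, downsampling encoders, upsampling decoders, and concatenation across the encoding and decoding branches) can be sketched in PyTorch. This is a minimal illustrative sketch, not the authors' 3D-Znet implementation: the class names (`DoubleConv3D`, `MiniZnet3D`), the use of two levels instead of four, the ReLU activation, and all layer widths are assumptions chosen for brevity. The `dice` helper shows how the reported Dice coefficient is typically computed on binary masks.

```python
# Hedged sketch of an encoder-decoder segmentation network in the style the
# abstract describes. All names and hyperparameters below are illustrative
# assumptions, not the published 3D-Znet configuration.
import torch
import torch.nn as nn


class DoubleConv3D(nn.Module):
    """Double 3D convolution, each followed by 3D batch norm and activation."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class MiniZnet3D(nn.Module):
    """Two-level toy encoder-decoder (the paper uses four of each level).

    Skip features from the encoding branch are concatenated with the
    upsampled decoder features, mirroring the cross-branch concatenation
    described in the abstract.
    """
    def __init__(self, in_ch=4, out_ch=3, base=8):
        super().__init__()
        self.enc1 = DoubleConv3D(in_ch, base)
        self.pool = nn.MaxPool3d(2)
        self.enc2 = DoubleConv3D(base, base * 2)
        # Transposed convolution restores the spatial size of the skip features.
        self.up = nn.ConvTranspose3d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = DoubleConv3D(base * 2, base)  # concat doubles channels
        self.head = nn.Conv3d(base, out_ch, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.up(e2)
        d1 = torch.cat([d1, e1], dim=1)  # concatenation across branches
        return self.head(self.dec1(d1))


def dice(pred, target, eps=1e-6):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

With a 4-modality input volume of shape `(1, 4, 16, 16, 16)` (batch, channels, depth, height, width), the sketch produces a 3-channel segmentation map of the same spatial size, one channel per tumor subregion (WT, TC, ET).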