Department of Computer Science, Vanderbilt University, Nashville TN, 37212, USA.
Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, 37212, USA.
Med Image Anal. 2023 Dec;90:102939. doi: 10.1016/j.media.2023.102939. Epub 2023 Aug 25.
Transformer-based models, which learn global dependencies more effectively, have recently demonstrated exceptional representation learning capabilities in computer vision and medical image analysis. A transformer reformats the image into separate patches and realizes global communication via the self-attention mechanism. However, positional information between patches is hard to preserve in such 1D sequences, and its loss can lead to sub-optimal performance when dealing with large amounts of heterogeneous tissues of various sizes in 3D medical image segmentation. Additionally, current methods are neither robust nor efficient for heavy-duty medical segmentation tasks such as predicting a large number of tissue classes or modeling globally inter-connected tissue structures. To address these challenges, and inspired by the nested hierarchical structures of vision transformers, we propose UNesT, a novel 3D medical image segmentation method employing a simplified and faster-converging transformer encoder design that achieves local communication among spatially adjacent patch sequences by aggregating them hierarchically. We extensively validate our method on multiple challenging datasets spanning multiple modalities, anatomies, and a wide range of tissue classes, including 133 structures in the brain, 14 organs in the abdomen, 4 hierarchical components in the kidneys, and inter-connected kidney tumors and brain tumors. We show that UNesT consistently achieves state-of-the-art performance and evaluate its generalizability and data efficiency. In particular, the model completes the whole-brain segmentation task, covering all 133 tissue classes, with a single network, outperforming the prior state-of-the-art method SLANT27, an ensemble of 27 networks. Our model increases the mean DSC score on the publicly available Colin and CANDI datasets from 0.7264 to 0.7444 and from 0.6968 to 0.7025, respectively.
Code, pre-trained models, and use case pipeline are available at: https://github.com/MASILab/UNesT.
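To make the core idea concrete, the hierarchical aggregation of spatially adjacent patch sequences described in the abstract can be sketched as follows. This is a hedged toy illustration in 2D with NumPy, not the authors' implementation: the block layout, sizes, and the helper `block_aggregate` are assumptions for illustration, and the self-attention applied within each block between aggregation levels in UNesT is omitted here.

```python
import numpy as np

def block_aggregate(blocks, grid):
    """One level of hierarchical aggregation (toy sketch, not the UNesT code):
    merge each 2x2 group of spatially adjacent blocks into a single longer
    token sequence, so tokens that were in neighboring blocks can later
    communicate locally within the merged block."""
    h, w = grid
    merged = []
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            # gather the 4 spatially adjacent blocks of this 2x2 group
            group = [blocks[r * w + c] for r in (i, i + 1) for c in (j, j + 1)]
            merged.append(np.concatenate(group, axis=0))
    return merged, (h // 2, w // 2)

# toy example: a 4x4 grid of blocks, each holding 16 patch tokens of dim 8
grid = (4, 4)
blocks = [np.random.randn(16, 8) for _ in range(grid[0] * grid[1])]

level1, grid1 = block_aggregate(blocks, grid)   # 4 blocks of 64 tokens each
level2, grid2 = block_aggregate(level1, grid1)  # 1 block of 256 tokens
```

Each aggregation level quarters the number of blocks while lengthening the sequence inside each block, so attention computed per block progressively widens its spatial reach from local neighborhoods toward the whole image.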