IEEE Trans Med Imaging. 2018 Jul;37(7):1562-1573. doi: 10.1109/TMI.2018.2791721.
Convolutional neural networks (CNNs) have achieved state-of-the-art performance for automatic medical image segmentation. However, they have not demonstrated sufficiently accurate and robust results for clinical use. In addition, they are limited by the lack of image-specific adaptation and the lack of generalizability to previously unseen object classes (a.k.a. zero-shot learning). To address these problems, we propose a novel deep learning-based interactive segmentation framework by incorporating CNNs into a bounding box and scribble-based segmentation pipeline. We propose image-specific fine tuning to make a CNN model adaptive to a specific test image, which can be either unsupervised (without additional user interactions) or supervised (with additional scribbles). We also propose a weighted loss function considering network and interaction-based uncertainty for the fine tuning. We applied this framework to two applications: 2-D segmentation of multiple organs from fetal magnetic resonance (MR) slices, where only two types of these organs were annotated for training; and 3-D segmentation of brain tumor core (excluding edema) and whole brain tumor (including edema) from different MR sequences, where only the tumor core in one MR sequence was annotated for training. Experimental results show that: 1) our model is more robust to segment previously unseen objects than state-of-the-art CNNs; 2) image-specific fine tuning with the proposed weighted loss function significantly improves segmentation accuracy; and 3) our method leads to accurate results with fewer user interactions and less user time than traditional interactive segmentation methods.
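The weighted loss described above can be illustrated with a minimal numpy sketch. This is not the paper's implementation; the function names (`interaction_weights`, `weighted_bce`) and the specific weighting scheme (a fixed high weight for user-scribbled pixels, and a network-confidence weight |2p − 1| for unscribbled pixels, so ambiguous predictions near p = 0.5 count less) are illustrative assumptions consistent with the abstract's idea of combining interaction-based and network-based uncertainty:

```python
import numpy as np

def interaction_weights(probs, scribble_mask, w_scribble=5.0):
    """Per-pixel weights (illustrative scheme, not the paper's exact one):
    scribbled pixels get a fixed high weight, since the user labeled them;
    other pixels are weighted by network confidence |2p - 1|, so
    uncertain predictions (p near 0.5) contribute less to fine tuning."""
    confidence = np.abs(2.0 * probs - 1.0)
    return np.where(scribble_mask, w_scribble, confidence)

def weighted_bce(probs, labels, weights):
    """Pixel-wise weighted binary cross-entropy, normalized by total weight."""
    eps = 1e-7
    probs = np.clip(probs, eps, 1.0 - eps)
    ce = -(labels * np.log(probs) + (1.0 - labels) * np.log(1.0 - probs))
    return float(np.sum(weights * ce) / np.sum(weights))

# Toy 1-D "image": predicted foreground probabilities, current labels,
# and a mask marking one user-scribbled pixel.
probs = np.array([0.9, 0.5, 0.1])
labels = np.array([1.0, 1.0, 0.0])
scribbles = np.array([True, False, False])

w = interaction_weights(probs, scribbles)
loss = weighted_bce(probs, labels, w)
```

During image-specific fine tuning, a loss of this form would be minimized on the test image itself, so the network adapts most strongly to pixels the user has explicitly annotated and to pixels it already predicts confidently.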