Chen Junyu, Li Ye, Luna Licia P, Chung Hyun W, Rowe Steven P, Du Yong, Solnes Lilja B, Frey Eric C
Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA.
Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutes, Baltimore, MD, USA.
Med Phys. 2021 Jul;48(7):3860-3877. doi: 10.1002/mp.14903. Epub 2021 May 28.
Quantitative bone single-photon emission computed tomography (QBSPECT) has the potential to provide a better quantitative assessment of bone metastasis than planar bone scintigraphy due to its ability to better quantify activity in overlapping structures. An important element of assessing the response of bone metastasis is accurate image segmentation. However, limited by the properties of QBSPECT images, the segmentation of anatomical regions of interest (ROIs) still relies heavily on manual delineation by experts. This work proposes a fast and robust automated segmentation method for partitioning a QBSPECT image into lesion, bone, and background.
We present a new unsupervised segmentation loss function and its semi-supervised and fully supervised variants for training a convolutional neural network (ConvNet). The loss functions were developed based on the objective function of the classical Fuzzy C-means (FCM) algorithm. The first proposed loss function can be computed from the input image itself, without any ground-truth labels, and is thus unsupervised; the proposed supervised loss function follows the traditional paradigm of deep learning-based segmentation methods and leverages ground-truth labels during training. The last loss function is a combination of the first two and includes a weighting parameter, which enables semi-supervised segmentation with a deep neural network.
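Purely for illustration, the following is a minimal PyTorch-style sketch of how a family of FCM-derived losses of this kind might be implemented; the function names, the fuzziness exponent m, the weighting parameter nu, and the centroid update are assumptions for exposition, not the paper's exact formulation.

```python
import torch

def fcm_unsupervised_loss(probs, image, m=2.0, eps=1e-8):
    """Unsupervised FCM-style loss (sketch). `probs` are ConvNet softmax outputs
    of shape (N, K, H, W), interpreted as fuzzy memberships; `image` has shape
    (N, 1, H, W). The loss follows the classical FCM objective
        J = sum_k sum_j u_jk^m * (I_j - v_k)^2,
    with centroids v_k recomputed from the image and the memberships."""
    w = probs.pow(m)  # fuzzified memberships u^m
    # Per-image, per-class centroids: v_k = sum_j(u_jk^m * I_j) / sum_j(u_jk^m)
    centroids = (w * image).sum(dim=(2, 3), keepdim=True) / (
        w.sum(dim=(2, 3), keepdim=True) + eps)
    return (w * (image - centroids).pow(2)).sum(dim=1).mean()

def fcm_supervised_loss(probs, onehot_labels, m=2.0):
    """Supervised counterpart (illustrative): penalize (1 - u)^m at the
    ground-truth class, mirroring the FCM membership term."""
    return (onehot_labels * (1.0 - probs).pow(m)).sum(dim=1).mean()

def fcm_semi_supervised_loss(probs, image, onehot_labels, nu=0.5):
    """Weighted combination of the two terms; `nu` plays the role of the
    weighting parameter mentioned in the abstract (value assumed here)."""
    return fcm_supervised_loss(probs, onehot_labels) + nu * fcm_unsupervised_loss(probs, image)
```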
We conducted a comprehensive study to compare our proposed methods with ConvNets trained using supervised cross-entropy and Dice loss functions, and with conventional clustering methods. The Dice similarity coefficient (DSC) and several other metrics were used as figures of merit for the task of delineating lesion and bone in both simulated and clinical SPECT/CT images. We experimentally demonstrated that the proposed methods yielded good segmentation results on a clinical dataset even though the training was done using realistic simulated images. On simulated SPECT/CT, the proposed unsupervised model's accuracy was greater than that of the conventional clustering methods while reducing computation time by 200-fold. For the clinical QBSPECT/CT, the proposed semi-supervised ConvNet model, trained using simulated images, produced DSCs of and for lesion and bone segmentation in SPECT, and a DSC of for bone segmentation of CT images. These DSCs were larger than those for the standard segmentation loss functions by for SPECT segmentation and by for CT segmentation, with P-values from a paired t-test.
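For reference, the Dice similarity coefficient used as the figure of merit compares a predicted mask A with a reference mask B as DSC = 2|A ∩ B| / (|A| + |B|); a minimal sketch for binary masks is given below (the function name is hypothetical, not from the paper).

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """DSC = 2 * |pred ∩ target| / (|pred| + |target|) for two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```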
A ConvNet-based image segmentation method that uses novel loss functions was developed and evaluated. The method can operate in unsupervised, semi-supervised, or fully-supervised modes depending on the availability of annotated training data. The results demonstrated that the proposed method provides fast and robust lesion and bone segmentation for QBSPECT/CT. The method can potentially be applied to other medical image segmentation applications.