Learning fuzzy clustering for SPECT/CT segmentation via convolutional neural networks.

Author Information

Chen Junyu, Li Ye, Luna Licia P, Chung Hyun W, Rowe Steven P, Du Yong, Solnes Lilja B, Frey Eric C

Affiliations

Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA.

Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutes, Baltimore, MD, USA.

Publication Information

Med Phys. 2021 Jul;48(7):3860-3877. doi: 10.1002/mp.14903. Epub 2021 May 28.

Abstract

PURPOSE

Quantitative bone single-photon emission computed tomography (QBSPECT) has the potential to provide a better quantitative assessment of bone metastasis than planar bone scintigraphy due to its ability to better quantify activity in overlapping structures. An important element of assessing the response of bone metastasis is accurate image segmentation. However, limited by the properties of QBSPECT images, the segmentation of anatomical regions of interest (ROIs) still relies heavily on manual delineation by experts. This work proposes a fast and robust automated segmentation method for partitioning a QBSPECT image into lesion, bone, and background.

METHODS

We present a new unsupervised segmentation loss function and its semi-supervised and supervised variants for training a convolutional neural network (ConvNet). The loss functions were developed based on the objective function of the classical Fuzzy C-means (FCM) algorithm. The first proposed loss function can be computed from the input image itself without any ground truth labels, and is thus unsupervised; the proposed supervised loss function follows the traditional paradigm of deep learning-based segmentation methods and leverages ground truth labels during training. The last loss function combines the first two through a weighting parameter, which enables semi-supervised segmentation with a deep neural network.
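For intuition, the following is a minimal sketch (in PyTorch-style Python) of how FCM-inspired losses of this kind could be attached to a ConvNet that outputs softmax membership maps. The unsupervised term follows the classical FCM objective J = sum_i sum_k u_ik^m ||x_i - v_k||^2, with centroids v_k recomputed from the image and the predicted memberships; the exact form of the supervised term, the function names, and the weighting parameter alpha are illustrative assumptions rather than the paper's actual implementation.

import torch

def unsupervised_fcm_loss(img, u, m=2.0, eps=1e-8):
    # img: (B, 1, H, W) intensity image; u: (B, K, H, W) softmax membership maps from the ConvNet.
    um = u.pow(m)                                                  # fuzzified memberships u_ik^m
    # Class centroids v_k = sum_i u_ik^m * x_i / sum_i u_ik^m, per image and per class.
    v = (um * img).sum(dim=(2, 3)) / (um.sum(dim=(2, 3)) + eps)    # (B, K)
    v = v.view(v.shape[0], v.shape[1], 1, 1)
    # Classical FCM objective: sum_i sum_k u_ik^m * (x_i - v_k)^2; needs no labels.
    return (um * (img - v).pow(2)).mean()

def supervised_fcm_loss(u, target_onehot, m=2.0):
    # target_onehot: (B, K, H, W). Penalize fuzzified membership mass assigned to wrong classes.
    return (u.pow(m) * (1.0 - target_onehot)).mean()

def semi_supervised_loss(img, u, target_onehot, alpha=0.5, m=2.0):
    # Weighted combination; alpha = 1 recovers the supervised case, alpha = 0 the unsupervised one.
    return alpha * supervised_fcm_loss(u, target_onehot, m) + (1.0 - alpha) * unsupervised_fcm_loss(img, u, m)

Because the unsupervised term depends only on the input image and the network's own membership predictions, the same ConvNet can in principle be trained on abundant unlabeled images, with only a labeled subset contributing to the supervised term.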

EXPERIMENTS AND RESULTS

We conducted a comprehensive study to compare our proposed methods with ConvNets trained using supervised cross-entropy and Dice loss functions, and with conventional clustering methods. The Dice similarity coefficient (DSC) and several other metrics were used as figures of merit for the task of delineating lesion and bone in both simulated and clinical SPECT/CT images. We experimentally demonstrated that the proposed methods yielded good segmentation results on a clinical dataset even though training was performed using realistic simulated images. On simulated SPECT/CT, the proposed unsupervised model's accuracy was greater than that of the conventional clustering methods while reducing computation time by 200-fold. For the clinical QBSPECT/CT, the proposed semi-supervised ConvNet model, trained using simulated images, produced DSCs of and for lesion and bone segmentation in SPECT, and a DSC of for bone segmentation of CT images. These DSCs were larger than those for standard segmentation loss functions by for SPECT segmentation and for CT segmentation, with P-values from a paired t-test.

CONCLUSIONS

A ConvNet-based image segmentation method that uses novel loss functions was developed and evaluated. The method can operate in unsupervised, semi-supervised, or fully supervised modes depending on the availability of annotated training data. The results demonstrated that the proposed method provides fast and robust lesion and bone segmentation for QBSPECT/CT. The method can potentially be applied to other medical image segmentation tasks.


Similar Articles

1
Learning fuzzy clustering for SPECT/CT segmentation via convolutional neural networks.
Med Phys. 2021 Jul;48(7):3860-3877. doi: 10.1002/mp.14903. Epub 2021 May 28.
3
Lung tumor segmentation in 4D CT images using motion convolutional neural networks.
Med Phys. 2021 Nov;48(11):7141-7153. doi: 10.1002/mp.15204. Epub 2021 Sep 13.
4
Semi-supervised learning towards automated segmentation of PET images with limited annotations: application to lymphoma patients.
Phys Eng Sci Med. 2024 Sep;47(3):833-849. doi: 10.1007/s13246-024-01408-x. Epub 2024 Mar 21.
5
Learning low-dose CT degradation from unpaired data with flow-based model.
Med Phys. 2022 Dec;49(12):7516-7530. doi: 10.1002/mp.15886. Epub 2022 Aug 8.
7
Automatic segmentation of prostate cancer metastases in PSMA PET/CT images using deep neural networks with weighted batch-wise dice loss.
Comput Biol Med. 2023 May;158:106882. doi: 10.1016/j.compbiomed.2023.106882. Epub 2023 Apr 4.
9
Semi-supervised abdominal multi-organ segmentation by object-redrawing.
Med Phys. 2024 Nov;51(11):8334-8347. doi: 10.1002/mp.17364. Epub 2024 Aug 21.

Cited By

1
Partial volume correction for Lu-177-PSMA SPECT.
EJNMMI Phys. 2024 Nov 12;11(1):93. doi: 10.1186/s40658-024-00697-1.
3
Semi-supervised learning towards automated segmentation of PET images with limited annotations: application to lymphoma patients.
Phys Eng Sci Med. 2024 Sep;47(3):833-849. doi: 10.1007/s13246-024-01408-x. Epub 2024 Mar 21.
5
SwinCross: Cross-modal Swin transformer for head-and-neck tumor segmentation in PET/CT images.
Med Phys. 2024 Mar;51(3):2096-2107. doi: 10.1002/mp.16703. Epub 2023 Sep 30.
6
Systematic Review of Tumor Segmentation Strategies for Bone Metastases.
Cancers (Basel). 2023 Mar 14;15(6):1750. doi: 10.3390/cancers15061750.
7
Pix2Pix generative adversarial network for low dose myocardial perfusion SPECT denoising.
Quant Imaging Med Surg. 2022 Jul;12(7):3539-3555. doi: 10.21037/qims-21-1042.

References

1
Deep Learning-based Image Segmentation on Multimodal Medical Imaging.
IEEE Trans Radiat Plasma Med Sci. 2019 Mar;3(2):162-169. doi: 10.1109/trpms.2018.2890359. Epub 2019 Jan 1.
3
Deep Learning for Variational Multimodality Tumor Segmentation in PET/CT.
Neurocomputing (Amst). 2020 Jun 7;392:277-295. doi: 10.1016/j.neucom.2018.10.099. Epub 2019 Apr 24.
4
Generalizing Deep Learning for Medical Image Segmentation to Unseen Domains via Deep Stacked Transformation.
IEEE Trans Med Imaging. 2020 Jul;39(7):2531-2540. doi: 10.1109/TMI.2020.2973595. Epub 2020 Feb 12.
5
Cancer statistics, 2020.
CA Cancer J Clin. 2020 Jan;70(1):7-30. doi: 10.3322/caac.21590. Epub 2020 Jan 8.
6
Mumford-Shah Loss Functional for Image Segmentation with Deep Learning.
IEEE Trans Image Process. 2019 Sep 27. doi: 10.1109/TIP.2019.2941265.
7
CE-Net: Context Encoder Network for 2D Medical Image Segmentation.
IEEE Trans Med Imaging. 2019 Oct;38(10):2281-2292. doi: 10.1109/TMI.2019.2903562. Epub 2019 Mar 7.
9
Discriminative Localization in CNNs for Weakly-Supervised Segmentation of Pulmonary Nodules.
Med Image Comput Comput Assist Interv. 2017 Sep;10435:568-576. doi: 10.1007/978-3-319-66179-7_65. Epub 2017 Sep 4.
10
DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.
IEEE Trans Pattern Anal Mach Intell. 2018 Apr;40(4):834-848. doi: 10.1109/TPAMI.2017.2699184. Epub 2017 Apr 27.
