Automatic segmentation of prostate cancer metastases in PSMA PET/CT images using deep neural networks with weighted batch-wise dice loss.

Author information

Xu Yixi, Klyuzhin Ivan, Harsini Sara, Ortiz Anthony, Zhang Shun, Bénard François, Dodhia Rahul, Uribe Carlos F, Rahmim Arman, Lavista Ferres Juan

Affiliations

Microsoft, Redmond, WA, USA.

Microsoft, Redmond, WA, USA; Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada; Department of Radiology, University of British Columbia, Vancouver, BC, Canada.

Publication information

Comput Biol Med. 2023 May;158:106882. doi: 10.1016/j.compbiomed.2023.106882. Epub 2023 Apr 4.

Abstract

PURPOSE

Automatic and accurate segmentation of lesions in images of metastatic castration-resistant prostate cancer has the potential to enable personalized radiopharmaceutical therapy and advanced treatment response monitoring. The aim of this study is to develop a convolutional neural network-based framework for fully automated detection and segmentation of metastatic prostate cancer lesions in whole-body PET/CT images.

METHODS

525 whole-body PET/CT images of patients with metastatic prostate cancer were available for the study, acquired with the [18F]DCFPyL radiotracer that targets prostate-specific membrane antigen (PSMA). U-Net (1)-based convolutional neural networks (CNNs) were trained to identify lesions on paired axial PET/CT slices. Models were trained using the standard batch-wise dice loss (baseline) as well as the proposed weighted batch-wise dice loss (wDice), and lesion detection performance was quantified, with particular emphasis on lesion size, intensity, and location. We used 418 images for model training, 30 for model validation, and 77 for model testing. In addition, we allowed our model to take n = 0, 2, …, 12 neighboring axial slices as input to examine how incorporating greater amounts of 3D context influences model performance. We selected the optimal number of neighboring axial slices that maximized the detection rate on the 30 validation images, and trained five neural networks with different architectures.
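
The exact form of the weighted batch-wise dice loss is not spelled out in the abstract, so the snippet below is only a minimal PyTorch sketch of the general idea: the Dice overlap is computed over all voxels pooled across the batch rather than per slice, with an optional per-voxel weight map (purely illustrative here) that could be used to emphasize small or faint lesions. The function name and weighting scheme are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a batch-wise Dice loss with optional per-voxel weights.
# The weighting scheme used in the paper is not given in the abstract; the
# weights here are placeholders (e.g., they could up-weight small lesions).
import torch

def weighted_batchwise_dice_loss(pred, target, weight=None, eps=1e-6):
    """pred, target, weight: tensors of shape (B, H, W); pred holds probabilities in [0, 1]."""
    if weight is None:
        weight = torch.ones_like(target)
    # "Batch-wise": pool every voxel in the batch before forming the Dice ratio,
    # so slices without any lesion still contribute to the denominator.
    intersection = (weight * pred * target).sum()
    denominator = (weight * pred).sum() + (weight * target).sum()
    dice = (2.0 * intersection + eps) / (denominator + eps)
    return 1.0 - dice

# Toy usage: random probabilities against sparse binary lesion masks.
pred = torch.rand(4, 128, 128)
target = (torch.rand(4, 128, 128) > 0.98).float()
loss = weighted_batchwise_dice_loss(pred, target)
print(loss.item())
```

Pooling the batch before taking the ratio avoids the undefined per-slice Dice values that arise on slices with no lesion, which is the usual motivation for a batch-wise formulation.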

RESULTS

Model performance was evaluated using the detection rate, Dice similarity coefficient (DSC) and sensitivity. We found that the proposed wDice loss significantly improved the lesion detection rate, lesion-wise DSC and lesion-wise sensitivity compared to the baseline, with corresponding average increases of 0.07 (p-value = 0.01), 0.03 (p-value = 0.01) and 0.04 (p-value = 0.01), respectively. Including the first two neighboring axial slices in the input likewise increased the detection rate by 0.17, lesion-wise DSC by 0.05, and lesion-wise mean sensitivity by 0.16, whereas including more distant neighboring slices had minimal effect. We therefore used two neighboring slices and the wDice loss function to train our final model. To evaluate the model's performance, we trained three models using identical hyperparameters on three different data splits. On average, the model detected 80% of all testing lesions, with a detection rate of 93% for lesions with maximum standardized uptake values (SUVmax) greater than 5.0. In addition, on the testing set the average median lesion-wise DSC was 0.51 for all lesions and 0.60 for lesions with SUVmax > 5.0. Four additional neural networks with different architectures were trained, and all of them segmented lesions with SUVmax > 5.0 more accurately than the remaining lesions.
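
For context on the reported metrics, the sketch below shows one common way to compute a lesion-wise DSC and a detection rate by labeling connected components of the ground-truth mask with SciPy. The paper's exact lesion-matching and detection criteria are not stated in the abstract, so the simple overlap-based rule used here is an assumption.

```python
# Illustrative lesion-wise evaluation: per-lesion Dice and a detection rate.
# Assumed rule: a ground-truth lesion counts as detected if any predicted
# component overlaps it; per-lesion Dice is computed against those components.
import numpy as np
from scipy import ndimage

def lesion_wise_metrics(pred_mask, gt_mask):
    """pred_mask, gt_mask: binary numpy arrays of identical shape."""
    labeled_gt, n_lesions = ndimage.label(gt_mask)   # split ground truth into lesions
    labeled_pred, _ = ndimage.label(pred_mask)       # split prediction into components
    dices, detected = [], 0
    for lesion_id in range(1, n_lesions + 1):
        lesion = labeled_gt == lesion_id
        hit_ids = np.unique(labeled_pred[lesion])    # predicted components touching this lesion
        hit_ids = hit_ids[hit_ids > 0]
        pred_for_lesion = np.isin(labeled_pred, hit_ids)
        overlap = np.logical_and(pred_for_lesion, lesion).sum()
        if overlap > 0:
            detected += 1
        dices.append(2.0 * overlap / (pred_for_lesion.sum() + lesion.sum()))
    detection_rate = detected / max(n_lesions, 1)
    return dices, detection_rate

# Toy usage with random sparse masks.
gt = np.random.rand(8, 64, 64) > 0.995
pred = np.random.rand(8, 64, 64) > 0.995
dices, rate = lesion_wise_metrics(pred, gt)
print(rate, np.median(dices) if dices else None)
```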

CONCLUSION

Our results demonstrate that prostate cancer metastases in PSMA PET/CT images can be detected and segmented using CNNs. The segmentation performance strongly depends on the intensity, size, and location of lesions, and can be improved by using specialized loss functions. Specifically, the models performed best at detecting lesions with SUVmax > 5.0. Another challenge was accurately segmenting lesions close to the bladder. Future work will focus on improving the detection of lesions with lower SUV values by designing custom loss functions that take lesion intensity into account, using additional data augmentation techniques, and reducing the number of falsely detected lesions by developing methods to better separate signal from noise.
