
Realistic CT data augmentation for accurate deep-learning based segmentation of head and neck tumors in kV images acquired during radiation therapy.

Affiliations

ACRF Image X Institute, The University of Sydney, Eveleigh, New South Wales, Australia.

South Western Sydney Clinical School, University of New South Wales, Liverpool, New South Wales, Australia.

Publication Information

Med Phys. 2023 Jul;50(7):4206-4219. doi: 10.1002/mp.16388. Epub 2023 Apr 17.

Abstract

BACKGROUND

Using radiation therapy (RT) to treat head and neck (H&N) cancers requires precise targeting of the tumor to avoid damaging the surrounding healthy organs. Immobilisation masks and planning target volume margins are used to attempt to mitigate patient motion during treatment; however, patient motion can still occur. Patient motion during RT can lead to decreased treatment effectiveness and a higher chance of treatment-related side effects. Tracking tumor motion would enable motion compensation during RT, leading to more accurate dose delivery.

PURPOSE

The purpose of this paper is to develop a method to detect and segment the tumor in kV images acquired during RT. Unlike previous tumor segmentation methods for kV images, in this paper, a process for generating realistic and synthetic CT deformations was developed to augment the training data and make the segmentation method robust to patient motion. Detecting the tumor in 2D kV images is a necessary step toward 3D tracking of the tumor position during treatment.

METHOD

In this paper, a conditional generative adversarial network (cGAN) is presented that can detect and segment the gross tumor volume (GTV) in kV images acquired during H&N RT. Retrospective data from 15 H&N cancer patients obtained from the Cancer Imaging Archive were used to train and test patient-specific cGANs. The training data consisted of digitally reconstructed radiographs (DRRs) generated from each patient's planning CT and contoured GTV. Training data were augmented by using synthetically deformed CTs to generate additional DRRs (in total 39 600 DRRs per patient, or 25 200 DRRs for nasopharyngeal patients) containing realistic patient motion. The CTs were deformed with a novel deformation method based on simulating head rotation and internal tumor motion. The testing dataset consisted of 1080 DRRs for each patient, obtained by deforming the planning CT and GTV at magnitudes different from those used for the training data. The accuracy of the generated segmentations was evaluated by measuring the segmentation centroid error, Dice similarity coefficient (DSC) and mean surface distance (MSD). This paper evaluated the hypothesis that, when patient motion occurs, using a cGAN to segment the GTV would create more accurate segmentations than the no-tracking segmentations derived from the original contoured GTV, the current standard of care. This hypothesis was tested using the one-tailed Mann-Whitney U-test.
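The three evaluation metrics named above have simple definitions on binary masks. The sketch below is illustrative only: the toy pixel-set masks and the one-pixel shift between them are assumptions, not the paper's data, and the paper's masks would be at kV image resolution with physical pixel spacing.

```python
import math

# Toy binary masks as sets of (row, col) pixels: a 4x4 "predicted" square
# and a ground-truth square shifted by one pixel in each direction. These
# stand in for the cGAN output and the projected GTV contour.
pred  = {(r, c) for r in range(2, 6) for c in range(2, 6)}
truth = {(r, c) for r in range(3, 7) for c in range(3, 7)}

def dice(a, b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

def centroid_error(a, b):
    """Euclidean distance between the two mask centroids (in pixels)."""
    ca = (sum(r for r, _ in a) / len(a), sum(c for _, c in a) / len(a))
    cb = (sum(r for r, _ in b) / len(b), sum(c for _, c in b) / len(b))
    return math.hypot(ca[0] - cb[0], ca[1] - cb[1])

def boundary(mask):
    """Pixels with at least one 4-connected neighbour outside the mask."""
    return {p for p in mask
            if any((p[0] + dr, p[1] + dc) not in mask
                   for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))}

def mean_surface_distance(a, b):
    """Symmetric mean of nearest-neighbour distances between boundaries."""
    ba, bb = boundary(a), boundary(b)
    near = lambda p, pts: min(math.hypot(p[0] - q[0], p[1] - q[1]) for q in pts)
    return (sum(near(p, bb) for p in ba) / len(ba)
            + sum(near(q, ba) for q in bb) / len(bb)) / 2

print(f"DSC:            {dice(pred, truth):.4f}")            # 0.5625
print(f"centroid error: {centroid_error(pred, truth):.3f}")  # 1.414
```

A perfect segmentation gives DSC = 1 and centroid error = MSD = 0; the one-pixel shift in this toy example already drops the DSC to 0.5625.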

RESULTS

The magnitude of our cGAN segmentation centroid error was (mean ± standard deviation) 1.1 ± 0.8 mm and the DSC and MSD values were 0.90 ± 0.03 and 1.6 ± 0.5 mm, respectively. Our cGAN segmentation method reduced the segmentation centroid error (p < 0.001), and MSD (p = 0.031) when compared to the no-tracking segmentation, but did not significantly increase the DSC (p = 0.294).
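The p-values above come from the one-tailed Mann-Whitney U-test named in the Method. As a sketch of how that comparison works, here is a pure-Python version using the large-sample normal approximation (no tie correction); the error samples are hypothetical, not the paper's data:

```python
import math

def mann_whitney_p_less(x, y):
    """One-tailed Mann-Whitney U-test p-value for H1: samples in x tend
    to be smaller than samples in y. Uses the normal approximation, so
    it is only a sketch for small or heavily tied samples."""
    n1, n2 = len(x), len(y)
    # U counts pairs where an x-sample lies below a y-sample (ties = 0.5).
    u = sum((xi < yj) + 0.5 * (xi == yj) for xi in x for yj in y)
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    # A large U means x is systematically smaller; p = P(Z >= z).
    return 0.5 * math.erfc(z / math.sqrt(2))

# Hypothetical centroid errors (mm): cGAN tracking vs. no tracking.
cgan_err     = [1.0, 1.2, 0.8, 1.5, 1.1, 0.9]
no_track_err = [2.5, 3.1, 2.0, 4.2, 2.8, 3.5]
print(f"p = {mann_whitney_p_less(cgan_err, no_track_err):.4g}")
```

Because the test is one-tailed, it asks only whether the tracking errors are systematically smaller, matching the hypothesis stated in the Method.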

CONCLUSIONS

The accuracy of our cGAN segmentation method demonstrates the feasibility of this method for H&N cancer patients during RT. Accurate tumor segmentation of H&N tumors would allow for intrafraction monitoring methods to compensate for tumor motion during treatment, ensuring more accurate dose delivery and enabling better H&N cancer patient outcomes.

