

Realistic CT data augmentation for accurate deep-learning based segmentation of head and neck tumors in kV images acquired during radiation therapy.

Affiliations

ACRF Image X Institute, The University of Sydney, Eveleigh, New South Wales, Australia.

South Western Sydney Clinical School, University of New South Wales, Liverpool, New South Wales, Australia.

Publication Information

Med Phys. 2023 Jul;50(7):4206-4219. doi: 10.1002/mp.16388. Epub 2023 Apr 17.

DOI: 10.1002/mp.16388
PMID: 37029643
Abstract

BACKGROUND

Using radiation therapy (RT) to treat head and neck (H&N) cancers requires precise targeting of the tumor to avoid damaging the surrounding healthy organs. Immobilisation masks and planning target volume margins are used to mitigate patient motion during treatment; however, patient motion can still occur. Patient motion during RT can lead to decreased treatment effectiveness and a higher chance of treatment-related side effects. Tracking tumor motion would enable motion compensation during RT, leading to more accurate dose delivery.

PURPOSE

The purpose of this paper is to develop a method to detect and segment the tumor in kV images acquired during RT. Unlike previous tumor segmentation methods for kV images, in this paper, a process for generating realistic and synthetic CT deformations was developed to augment the training data and make the segmentation method robust to patient motion. Detecting the tumor in 2D kV images is a necessary step toward 3D tracking of the tumor position during treatment.

METHOD

In this paper, a conditional generative adversarial network (cGAN) is presented that can detect and segment the gross tumor volume (GTV) in kV images acquired during H&N RT. Retrospective data from 15 H&N cancer patients obtained from the Cancer Imaging Archive were used to train and test patient-specific cGANs. The training data consisted of digitally reconstructed radiographs (DRRs) generated from each patient's planning CT and contoured GTV. Training data was augmented by using synthetically deformed CTs to generate additional DRRs (in total 39 600 DRRs per patient or 25 200 DRRs for nasopharyngeal patients) containing realistic patient motion. The method for deforming the CTs was a novel deformation method based on simulating head rotation and internal tumor motion. The testing dataset consisted of 1080 DRRs for each patient, obtained by deforming the planning CT and GTV at different magnitudes to the training data. The accuracy of the generated segmentations was evaluated by measuring the segmentation centroid error, Dice similarity coefficient (DSC) and mean surface distance (MSD). This paper evaluated the hypothesis that when patient motion occurs, using a cGAN to segment the GTV would create a more accurate segmentation than no-tracking segmentations from the original contoured GTV, the current standard-of-care. This hypothesis was tested using the 1-tailed Mann-Whitney U-test.
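The three evaluation metrics named above (segmentation centroid error, Dice similarity coefficient, and mean surface distance) are standard segmentation measures. A minimal sketch for binary masks, assuming NumPy/SciPy and an isotropic pixel spacing in mm; this is illustrative code, not the authors' implementation:

```python
import numpy as np
from scipy import ndimage

def centroid_error(pred, gt, spacing=1.0):
    """Euclidean distance (mm) between the centroids of two binary masks."""
    c_pred = np.array(np.nonzero(pred)).mean(axis=1)
    c_gt = np.array(np.nonzero(gt)).mean(axis=1)
    return float(np.linalg.norm((c_pred - c_gt) * spacing))

def dice(pred, gt):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return float(2.0 * inter / (pred.sum() + gt.sum()))

def mean_surface_distance(pred, gt, spacing=1.0):
    """Symmetric mean distance (mm) between the surfaces of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    sp = pred & ~ndimage.binary_erosion(pred)  # boundary pixels of prediction
    sg = gt & ~ndimage.binary_erosion(gt)      # boundary pixels of ground truth
    # Distance from every pixel to the nearest boundary pixel of the other mask.
    d_to_g = ndimage.distance_transform_edt(~sg) * spacing
    d_to_p = ndimage.distance_transform_edt(~sp) * spacing
    return float((d_to_g[sp].sum() + d_to_p[sg].sum()) / (sp.sum() + sg.sum()))
```

For a perfect segmentation all three functions return their ideal values (0 mm, 1.0, 0 mm); shifting one mask by a pixel shows how the metrics degrade independently.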

RESULTS

The magnitude of our cGAN segmentation centroid error was (mean ± standard deviation) 1.1 ± 0.8 mm and the DSC and MSD values were 0.90 ± 0.03 and 1.6 ± 0.5 mm, respectively. Our cGAN segmentation method reduced the segmentation centroid error (p < 0.001), and MSD (p = 0.031) when compared to the no-tracking segmentation, but did not significantly increase the DSC (p = 0.294).
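The hypothesis test behind these p-values, the one-tailed Mann-Whitney U-test, is available in SciPy. A sketch with synthetic error samples standing in for the per-image centroid errors (the data below are hypothetical, NOT the study's measurements):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Hypothetical per-image centroid errors in mm (synthetic stand-ins):
cgan_err = rng.normal(1.1, 0.8, 200).clip(min=0)      # tracking-based segmentation
no_track_err = rng.normal(2.5, 1.2, 200).clip(min=0)  # no-tracking baseline

# One-tailed test: are the cGAN errors stochastically smaller than the baseline's?
stat, p = mannwhitneyu(cgan_err, no_track_err, alternative="less")
print(f"U = {stat:.0f}, p = {p:.3g}")
```

`alternative="less"` makes the test one-tailed in the direction of the paper's hypothesis (tracking errors smaller than no-tracking errors), matching the reported p < 0.001 style of comparison.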

CONCLUSIONS

The accuracy of our cGAN segmentation method demonstrates the feasibility of this method for H&N cancer patients during RT. Accurate tumor segmentation of H&N tumors would allow for intrafraction monitoring methods to compensate for tumor motion during treatment, ensuring more accurate dose delivery and enabling better H&N cancer patient outcomes.


Similar Articles

1. Realistic CT data augmentation for accurate deep-learning based segmentation of head and neck tumors in kV images acquired during radiation therapy.
   Med Phys. 2023 Jul;50(7):4206-4219. doi: 10.1002/mp.16388. Epub 2023 Apr 17.
2. Multi-modal segmentation with missing image data for automatic delineation of gross tumor volumes in head and neck cancers.
   Med Phys. 2024 Oct;51(10):7295-7307. doi: 10.1002/mp.17260. Epub 2024 Jun 19.
3. Gross tumor volume segmentation for head and neck cancer radiotherapy using deep dense multi-modality network.
   Phys Med Biol. 2019 Oct 16;64(20):205015. doi: 10.1088/1361-6560/ab440d.
4. Development of a deep learning-based patient-specific target contour prediction model for markerless tumor positioning.
   Med Phys. 2022 Mar;49(3):1382-1390. doi: 10.1002/mp.15456. Epub 2022 Jan 27.
5. AnatomyNet: Deep learning for fast and fully automated whole-volume segmentation of head and neck anatomy.
   Med Phys. 2019 Feb;46(2):576-589. doi: 10.1002/mp.13300. Epub 2018 Dec 17.
6. A modality conversion approach to MV-DRs and KV-DRRs registration using information bottlenecked conditional generative adversarial network.
   Med Phys. 2019 Oct;46(10):4575-4587. doi: 10.1002/mp.13770. Epub 2019 Sep 6.
7. Deep learning-based target decomposition for markerless lung tumor tracking in radiotherapy.
   Med Phys. 2024 Jun;51(6):4271-4282. doi: 10.1002/mp.17039. Epub 2024 Mar 20.
8. Simultaneous object detection and segmentation for patient-specific markerless lung tumor tracking in simulated radiographs with deep learning.
   Med Phys. 2024 Mar;51(3):1957-1973. doi: 10.1002/mp.16705. Epub 2023 Sep 8.
9. Fully automatic multi-organ segmentation for head and neck cancer radiotherapy using shape representation model constrained fully convolutional neural networks.
   Med Phys. 2018 Oct;45(10):4558-4567. doi: 10.1002/mp.13147. Epub 2018 Sep 19.
10. Cascaded deep learning-based auto-segmentation for head and neck cancer patients: Organs at risk on T2-weighted magnetic resonance imaging.
    Med Phys. 2021 Dec;48(12):7757-7772. doi: 10.1002/mp.15290. Epub 2021 Nov 1.

Cited By

1. Patient-specific deep learning tracking for real-time 2D pancreas localisation in kV-guided radiotherapy.
   Phys Imaging Radiat Oncol. 2025 Jun 6;35:100794. doi: 10.1016/j.phro.2025.100794. eCollection 2025 Jul.
2. Patient-specific prostate segmentation in kilovoltage images for radiation therapy intrafraction monitoring via deep learning.
   Commun Med (Lond). 2025 Jun 3;5(1):212. doi: 10.1038/s43856-025-00935-2.
3. Comparison of Deep Learning-Based Auto-Segmentation Results on Daily Kilovoltage, Megavoltage, and Cone Beam CT Images in Image-Guided Radiotherapy.
   Technol Cancer Res Treat. 2025 Jan-Dec;24:15330338251344198. doi: 10.1177/15330338251344198. Epub 2025 May 21.
4. Artificial intelligence research in radiation oncology: a practical guide for the clinician on concepts and methods.
   BJR Open. 2024 Nov 13;6(1):tzae039. doi: 10.1093/bjro/tzae039. eCollection 2024 Jan.
5. Dosimetric impact of variable air cavity within PTV for rectum cancer.
   J Appl Clin Med Phys. 2025 Jan;26(1):e14539. doi: 10.1002/acm2.14539. Epub 2024 Oct 3.
6. Image detection of aortic dissection complications based on multi-scale feature fusion.
   Heliyon. 2024 Mar 15;10(6):e27678. doi: 10.1016/j.heliyon.2024.e27678. eCollection 2024 Mar 30.