Smart contours: deep learning-driven internal gross tumor volume delineation in non-small cell lung cancer using 4D CT maximum and average intensity projections.

Author Information

Huang Yuling, Luo Mingming, Luo Zan, Liu Mingzhi, Li Junyu, Jian Junming, Zhang Yun

Affiliations

Department of Radiation Oncology, Jiangxi Cancer Hospital & Institute (The Second Affiliated Hospital of Nanchang Medical College), Nanchang, 330029, Jiangxi, PR China.

Jiangxi Key Laboratory of Oncology (2024SSY06041), Nanchang, 330029, Jiangxi, PR China.

Publication Information

Radiat Oncol. 2025 Apr 18;20(1):59. doi: 10.1186/s13014-025-02642-7.

Abstract

BACKGROUND

Delineating the internal gross tumor volume (IGTV) is crucial for the treatment of non-small cell lung cancer (NSCLC). Deep learning (DL) enables the automation of this process; however, current studies focus mainly on multiple phases of four-dimensional (4D) computed tomography (CT), which leads to indirect results. This study proposed a DL-based method for automatic IGTV delineation using maximum and average intensity projections (MIP and AIP, respectively) from 4D CT.
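For readers unfamiliar with the two projections, the snippet below is a minimal sketch of how MIP and AIP volumes can be derived from a 4D CT respiratory-phase stack. The array layout, variable names, and loading step are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: deriving MIP and AIP images from a 4D CT phase stack.
# Assumes the 4D CT has been loaded as a NumPy array `phases` of shape
# (n_phases, z, y, x), e.g. 10 respiratory phases in Hounsfield units.
import numpy as np

def project_4dct(phases: np.ndarray):
    """Collapse the respiratory-phase axis into MIP and AIP volumes."""
    mip = phases.max(axis=0)   # maximum intensity projection: voxel-wise max over phases
    aip = phases.mean(axis=0)  # average intensity projection: voxel-wise mean over phases
    return mip, aip
```

The MIP tends to capture the full envelope swept by a moving tumor, while the AIP reflects its time-averaged density, which is why both are natural single-image inputs for IGTV segmentation.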

METHODS

We retrospectively enrolled 124 patients with NSCLC and divided them into training (70%, n = 87) and validation (30%, n = 37) cohorts. Four-dimensional CT images were acquired, and the corresponding MIP and AIP images were generated. The IGTVs were contoured on 4D CT and used as the ground truth (GT). The MIP or AIP images, along with the corresponding IGTVs (IGTV_MIP and IGTV_AIP, respectively), were fed into the DL models for training and validation. We assessed the performance of three segmentation models (U-net, attention U-net, and V-net) using the Dice similarity coefficient (DSC) and the 95th percentile of the Hausdorff distance (HD95) as the primary metrics.
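The two reported metrics can be computed from binary masks as sketched below. This is an illustrative NumPy/SciPy implementation, not the authors' evaluation code; the HD95 shown is the common 95th-percentile symmetric surface-distance variant, and the voxel spacing argument is a placeholder.

```python
# Illustrative implementations of the two metrics named in the abstract.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric surface distance (HD95), in mm."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    surf_p = pred & ~binary_erosion(pred)   # boundary voxels of the prediction
    surf_g = gt & ~binary_erosion(gt)       # boundary voxels of the ground truth
    d_to_g = distance_transform_edt(~surf_g, sampling=spacing)  # mm to nearest GT surface voxel
    d_to_p = distance_transform_edt(~surf_p, sampling=spacing)  # mm to nearest predicted surface voxel
    dists = np.concatenate([d_to_g[surf_p], d_to_p[surf_g]])
    return float(np.percentile(dists, 95))
```

Taking the 95th percentile rather than the maximum makes the boundary metric robust to a few outlier surface voxels, which is why HD95 is usually preferred over the plain Hausdorff distance in segmentation studies.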

RESULTS

The attention U-net model trained on AIP images presented a mean DSC of 0.871 ± 0.048 and mean HD95 of 2.958 ± 2.266 mm, whereas the model trained on MIP images achieved a mean DSC of 0.852 ± 0.053 and mean HD95 of 3.209 ± 2.136 mm. Among the models, attention U-net and U-net achieved similar results, considerably surpassing V-net.
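The abstract does not describe the network internals, so as a point of reference for what distinguishes attention U-net from a plain U-net, the sketch below shows a generic additive attention gate applied to a skip connection, in the spirit of Oktay et al. (2018). It is an illustrative PyTorch module (2D for brevity); all names and channel parameters are placeholders, not the authors' implementation.

```python
# Generic additive attention gate for a U-net skip connection (illustrative only).
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, x_channels: int, g_channels: int, inter_channels: int):
        super().__init__()
        self.theta_x = nn.Conv2d(x_channels, inter_channels, kernel_size=1)
        self.phi_g = nn.Conv2d(g_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # x: skip-connection features; g: gating signal from the coarser decoder level
        # (assumes x and g have already been resampled to the same spatial size).
        att = self.relu(self.theta_x(x) + self.phi_g(g))
        att = self.sigmoid(self.psi(att))  # per-pixel attention coefficients in [0, 1]
        return x * att                     # suppress irrelevant skip-connection features
```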

CONCLUSIONS

DL models can automate IGTV delineation using MIP and AIP images, streamline contouring, and enhance the accuracy and consistency of lung cancer radiotherapy planning to improve patient outcomes.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cc0d/12008886/df8ce26c7582/13014_2025_2642_Fig1_HTML.jpg
