Department of Oncology, National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China.
School of Automation, Central South University, Changsha, China.
J Appl Clin Med Phys. 2024 Jan;25(1):e14211. doi: 10.1002/acm2.14211. Epub 2023 Nov 22.
The location and morphology of the liver are significantly affected by respiratory motion. Delineating the gross target volume (GTV) on 4D medical images is therefore more accurate than on conventional contrast-enhanced 3D-CT, but the 4D approach is also more time-consuming and labor-intensive. This study proposes a deep learning (DL) framework based on 4D-CT that achieves automatic delineation of the internal GTV (IGTV).
The proposed network consists of two encoding paths: one extracts features from adjacent slices within a single 3D-CT sequence (spatial slices), and the other extracts features from slices at the same location across three adjacent-phase 3D-CT sequences (temporal slices). A feature fusion module based on an attention mechanism fuses the temporal and spatial features. 4D-CT scans from twenty-six patients, each consisting of 10 respiratory phases, were used as the dataset. The 95th-percentile Hausdorff distance (HD95), Dice similarity coefficient (DSC), and volume difference (VD) between the manual and predicted tumor contours were computed to evaluate the model's segmentation accuracy.
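The three reported metrics have standard definitions on binary masks. The sketch below (not the paper's code) computes them on masks represented as sets of voxel coordinates; note that HD95 here pools the directed surface distances from both masks before taking the 95th percentile, and exact percentile conventions vary between evaluation tools.

```python
# Illustrative sketch of DSC, VD, and HD95 on binary masks stored as
# sets of voxel coordinates. Conventions (esp. for HD95) vary between
# tools; this is one common formulation, not the paper's implementation.
import math

def dsc(a, b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def volume_difference(ref, pred):
    """Relative volume difference of the prediction vs. the reference."""
    return (len(pred) - len(ref)) / len(ref)

def _nearest(p, pts):
    return min(math.dist(p, q) for q in pts)

def hd95(a, b):
    """95th-percentile Hausdorff distance over pooled directed distances."""
    d = sorted([_nearest(p, b) for p in a] + [_nearest(q, a) for q in b])
    return d[min(len(d) - 1, math.ceil(0.95 * len(d)) - 1)]

gt   = {(x, y) for x in range(4) for y in range(4)}  # toy 4x4 "tumor" mask
pred = {(x, y) for x in range(4) for y in range(3)}  # misses one row
print(round(dsc(gt, pred), 3))  # 0.857
```

In practice these metrics are evaluated on 3D voxel grids with physical spacing folded into the distances; the set-based version above keeps the arithmetic visible.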
The predicted GTVs and IGTVs were compared quantitatively and visually with the ground truth. On the test dataset, the proposed method achieved a mean DSC of 0.869 ± 0.089 and an HD95 of 5.14 ± 3.34 mm over all GTVs. GTVs under-segmented on some CT slices were compensated by GTVs on other slices, yielding better agreement between the predicted IGTVs and the ground truth: a mean DSC of 0.882 ± 0.085 and an HD95 of 4.88 ± 2.84 mm. The best GTV results were generally observed at the end-inspiration phase.
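The compensation effect described above follows from how the IGTV is commonly constructed: as the union of the per-phase GTV masks, so a voxel missed in one phase's contour is still covered if any other phase captures it. A minimal sketch, with toy 2-D voxel sets standing in for the per-phase masks:

```python
# Hedged sketch: IGTV as the union of per-phase GTV masks.
# A voxel under-segmented in one phase is recovered if any other
# phase's contour includes it. Masks here are toy sets of voxels.
phase_gtvs = [
    {(1, 1), (1, 2)},          # phase where one voxel is missed
    {(1, 1), (1, 2), (2, 1)},  # another phase recovers (2, 1)
    {(1, 2), (2, 1), (2, 2)},
]
igtv = set().union(*phase_gtvs)
print(sorted(igtv))  # [(1, 1), (1, 2), (2, 1), (2, 2)]
```

This is why the IGTV metrics (DSC 0.882, HD95 4.88 mm) can exceed the per-phase GTV metrics even when individual phases are under-segmented.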
Our proposed DL framework for tumor segmentation on 4D-CT datasets shows promise for fully automated delineation in the future. These results provide impetus for integrating the framework into the 4D-CT treatment planning workflow to improve hepatocellular carcinoma radiotherapy.