Chen Chuanyu, Luo Yi, Hou Qiuyang, Qiu Jun, Yuan Shuya, Deng Kexue
Department of Radiology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China.
Med Phys. 2025 Jan;52(1):375-387. doi: 10.1002/mp.17414. Epub 2024 Sep 28.
Lymph node metastasis (LNM) plays a crucial role in the management of lung cancer; however, the ability of chest computed tomography (CT) imaging to detect LNM status is limited.
This study aimed to develop and validate a vision transformer-based deep transfer learning nomogram for predicting LNM in lung adenocarcinoma patients using preoperative unenhanced chest CT imaging.
This study included 528 patients with lung adenocarcinoma who were randomly divided into training and validation cohorts at a 7:3 ratio. A pretrained vision transformer (ViT) was used to extract deep transfer learning (DTL) features, and logistic regression was employed to construct a ViT-based DTL model. The model was then compared with six classical convolutional neural network (CNN) models. Finally, the ViT-based DTL signature was combined with independent clinical predictors to construct a ViT-based deep transfer learning nomogram (DTLN).
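The abstract does not give implementation details, but the feature-extraction step described above can be sketched as follows: an ImageNet-pretrained ViT is used as a frozen backbone to turn tumor ROI crops into DTL feature vectors, which are then passed to a logistic regression classifier. The libraries (torchvision, scikit-learn), ROI preprocessing, and helper names below are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (assumed workflow): frozen pretrained ViT as a DTL feature
# extractor, followed by logistic regression on the extracted features.
import numpy as np
import torch
import torchvision

# ImageNet-pretrained ViT-B/16 used purely as a frozen feature extractor.
weights = torchvision.models.ViT_B_16_Weights.IMAGENET1K_V1
vit = torchvision.models.vit_b_16(weights=weights)
vit.heads = torch.nn.Identity()    # drop the classification head -> 768-d features
vit.eval()
preprocess = weights.transforms()  # resize to 224x224 + ImageNet normalization

@torch.no_grad()
def extract_dtl_features(roi_batch: torch.Tensor) -> np.ndarray:
    """roi_batch: (N, 3, H, W) uint8 CT ROI crops replicated to 3 channels (assumption)."""
    return vit(preprocess(roi_batch)).cpu().numpy()

# Hypothetical downstream step with scikit-learn (names are placeholders):
# from sklearn.linear_model import LogisticRegression
# X_train = extract_dtl_features(rois_train)          # rois_train: ROI tensor
# dtl_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # y_train: LNM labels
```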
The ViT-based DTL model showed good performance, with an area under the curve (AUC) of 0.821 (95% CI, 0.775-0.867) in the training cohort and 0.825 (95% CI, 0.758-0.891) in the validation cohort, comparable to the classical CNN models in predicting LNM. The ViT-based DTL signature was then combined with independent clinical predictors (tumor maximum diameter, location, and density) to construct the ViT-based DTLN. The DTLN achieved the best predictive performance, with AUCs of 0.865 (95% CI, 0.827-0.903) and 0.894 (95% CI, 0.845-0.942) in the training and validation cohorts, respectively, surpassing both the clinical factor model and the ViT-based DTL model (p < 0.001).
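A minimal sketch of how the DTLN combination step might look: the DTL signature is entered into a multivariable logistic regression together with the independent clinical predictors, and the fitted coefficients underpin the nomogram. Variable names, clinical-factor encodings, and the statsmodels-based workflow are assumptions for illustration only.

```python
# Illustrative sketch (not the authors' code): multivariable logistic regression
# combining the ViT-based DTL signature with clinical predictors (the DTLN basis).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical training data, one row per patient; real predictors come from the cohort.
df = pd.DataFrame({
    "dtl_signature": rng.random(370),                    # ViT-based DTL model output
    "max_diameter_mm": rng.uniform(5, 60, 370),          # tumor maximum diameter
    "central_location": rng.integers(0, 2, 370),         # 1 = central, 0 = peripheral (assumed coding)
    "solid_density": rng.integers(0, 2, 370),            # 1 = solid, 0 = subsolid (assumed coding)
    "lnm": rng.integers(0, 2, 370),                      # lymph node metastasis label
})

X = sm.add_constant(df[["dtl_signature", "max_diameter_mm",
                        "central_location", "solid_density"]])
dtln = sm.Logit(df["lnm"], X).fit(disp=0)
print(dtln.summary())   # coefficients translate into the nomogram's point scales

df["dtln_prob"] = dtln.predict(X)
print("Training AUC:", roc_auc_score(df["lnm"], df["dtln_prob"]))
```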
This study developed a new DTL model based on ViT to predict LNM status in lung adenocarcinoma patients and showed that its performance was comparable to that of classical CNN models, confirming that ViT is viable for deep learning tasks involving medical images. The ViT-based DTLN performed exceptionally well and can assist clinicians and radiologists in making accurate judgments and formulating appropriate treatment plans.