Huang Xiaoyu, Li Weihang, Wang Yaru, Wu Qibing, Li Ping, Xu Kai, Huang Yong
Department of Oncology, The Second Affiliated Hospital of Anhui Medical University, Hefei, China.
Department of Chinese Integrative Medicine Oncology, The First Affiliated Hospital of Anhui Medical University, Hefei, China.
Radiol Med. 2025 Sep 2. doi: 10.1007/s11547-025-02083-y.
This study aimed to develop a deep learning (DL) framework using registration-guided generative adversarial networks (RegGAN) to synthesize contrast-enhanced CT (Syn-CECT) from non-contrast CT (NCCT), enabling iodine-free esophageal cancer (EC) T-staging.
A retrospective multicenter analysis included 1,092 EC patients (2013-2024), divided into a training cohort (N = 313), an internal test cohort (N = 117), and two external test cohorts (N = 116 and N = 546). RegGAN synthesized Syn-CECT by combining registration and adversarial training to compensate for NCCT-CECT misalignment. Tumor segmentation used CSSNet with hierarchical feature fusion, while T-staging employed a dual-path DL model that fuses radiomic features (from NCCT/Syn-CECT) with Vision Transformer-derived deep features. Performance was validated with quantitative image-quality metrics (NMAE, PSNR, SSIM), Dice scores, AUC, and reader studies in which six clinicians staged cases with and without model assistance.
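The image-quality and segmentation metrics named above have standard definitions; a minimal NumPy sketch follows, assuming NMAE is normalized by the reference image's intensity range (the study's exact normalization convention is not stated in the abstract and may differ):

```python
import numpy as np

def nmae(ref, syn):
    """Mean absolute error normalized by the reference intensity range
    (normalization choice is an assumption, not confirmed by the abstract)."""
    return np.abs(ref - syn).mean() / (ref.max() - ref.min())

def psnr(ref, syn, data_range=None):
    """Peak signal-to-noise ratio in dB."""
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - syn) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2 * inter / (mask_a.sum() + mask_b.sum())

# Toy 2x2 "images" and masks for illustration only
ref = np.array([[0.0, 100.0], [200.0, 300.0]])
syn = np.array([[10.0, 110.0], [190.0, 310.0]])
print(round(nmae(ref, syn), 4))   # 0.0333

a = np.array([[1, 1], [0, 0]], dtype=bool)
b = np.array([[1, 0], [0, 0]], dtype=bool)
print(round(dice(a, b), 3))       # 0.667
```

SSIM and the 95th-percentile Hausdorff distance are usually taken from library implementations (e.g., scikit-image and medical-imaging toolkits) rather than hand-rolled, since they involve windowed statistics and surface-distance computations.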
RegGAN achieved Syn-CECT quality comparable to real CECT (NMAE = 0.1903, SSIM = 0.7723; visual scores: p ≥ 0.12). CSSNet produced accurate tumor segmentation (Dice = 0.89, 95% HD = 2.27 in external tests). The DL staging model outperformed conventional machine learning baselines (AUC = 0.7893-0.8360 vs. ≤ 0.8323), surpassed early-career clinicians (AUC = 0.641-0.757), and matched experts (AUC = 0.840). With Syn-CECT assistance, clinicians' diagnostic accuracy improved (AUC increase: ~ 0.1, p < 0.01), and decision curve analysis confirmed clinical utility at risk thresholds above 35%.
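The decision curve analysis cited above compares strategies by net benefit at each risk threshold; a short sketch of the standard net-benefit formula, with made-up counts for illustration (none of these numbers come from the study):

```python
def net_benefit(tp, fp, n, pt):
    """Decision-curve net benefit at risk threshold pt:
    the true-positive rate minus the false-positive rate
    weighted by the odds of the threshold, pt / (1 - pt)."""
    return tp / n - (fp / n) * (pt / (1 - pt))

# Hypothetical cohort of 100 patients: the model flags 50 as
# advanced stage, 40 of them correctly (illustrative counts only)
print(round(net_benefit(tp=40, fp=10, n=100, pt=0.35), 4))  # 0.3462
```

A strategy is clinically useful at a given threshold when its net benefit exceeds both "treat all" and "treat none"; the abstract reports this held for the model above a 35% risk threshold.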
The RegGAN-based framework eliminates the need for iodinated contrast agents while maintaining diagnostic accuracy for EC segmentation (Dice > 0.88) and T-staging (AUC > 0.78). It offers a safe, cost-effective alternative for patients with iodine allergies or renal impairment and improves diagnostic consistency across clinician experience levels. By addressing the limitations of invasive staging and repeated contrast exposure, the approach shows particular promise for resource-limited settings.