Osuala Richard, Joshi Smriti, Tsirikoglou Apostolia, Garrucho Lidia, Pinaya Walter H L, Lang Daniel M, Schnabel Julia A, Diaz Oliver, Lekadir Karim
Universitat de Barcelona, Departament de Matemàtiques i Informàtica, Barcelona Artificial Intelligence in Medicine Lab (BCN-AIM), Barcelona, Spain.
Helmholtz Munich, Institute of Machine Learning in Biomedical Imaging, Munich, Germany.
J Med Imaging (Bellingham). 2025 Nov;12(Suppl 2):S22014. doi: 10.1117/1.JMI.12.S2.S22014. Epub 2025 Jun 28.
Deep generative models and synthetic data generation have become essential for advancing computer-assisted diagnosis and treatment. We explore one emerging and particularly promising application of deep generative models: the generation of virtual contrast enhancement. This enables the prediction and simulation of contrast enhancement in breast magnetic resonance imaging (MRI) without physical contrast agent injection, thereby unlocking lesion localization and categorization even in patient populations for whom the lengthy, costly, and invasive process of physical contrast agent injection is contraindicated.
We define a framework for desirable properties of synthetic data, which leads us to propose the scaled aggregate measure (SAMe), a balanced set of scaled complementary metrics for generative model training and convergence evaluation. We further adopt a conditional generative adversarial network to translate non-contrast-enhanced T1-weighted fat-saturated breast MRI slices into their dynamic contrast-enhanced (DCE) counterparts, thus learning to detect, localize, and adequately highlight breast cancer lesions. Next, we extend our approach to jointly generate multiple DCE-MRI time points, enabling the simulation of contrast enhancement across temporal DCE-MRI acquisitions. In addition, three-dimensional U-Net tumor segmentation models are implemented and trained on combinations of synthetic and real DCE-MRI data to investigate the effect of data augmentation with synthetic DCE-MRI volumes.
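The idea of aggregating scaled complementary metrics into a single convergence score can be sketched as follows. This is a minimal illustration, not the paper's implementation: the actual metric set, scaling bounds, and weighting used by SAMe are not specified here, so the metric names, bounds, and equal-weight averaging below are assumptions.

```python
# Hypothetical sketch of a SAMe-style aggregate: each raw metric is
# min-max scaled to [0, 1], oriented so that higher is better, and the
# scaled values are averaged into one score for tracking convergence.

def scale(value, lo, hi, higher_is_better=True):
    """Min-max scale a raw metric value to [0, 1]."""
    if hi == lo:
        return 0.0
    s = (value - lo) / (hi - lo)
    return s if higher_is_better else 1.0 - s

def same_score(metrics):
    """Average a set of scaled metrics into one aggregate score.

    metrics: {name: (value, lo, hi, higher_is_better)}
    """
    scaled = [scale(v, lo, hi, hib) for v, lo, hi, hib in metrics.values()]
    return sum(scaled) / len(scaled)

# Assumed example metrics: a distance (lower is better) and a
# similarity (higher is better), with illustrative bounds.
epoch_metrics = {
    "fid":  (40.0, 0.0, 200.0, False),
    "ssim": (0.75, 0.0, 1.0, True),
}
print(round(same_score(epoch_metrics), 3))  # -> 0.775
```

A single scalar like this can be monitored across epochs to select a checkpoint, which is the role the abstract attributes to SAMe during training and convergence evaluation.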
We conducted four main sets of experiments: (i) the variation across single metrics demonstrated the value of SAMe; (ii) the quality and potential of virtual contrast injection for tumor detection and localization were shown; (iii) segmentation models augmented with synthetic DCE-MRI data were more robust in the presence of domain shifts between the pre-contrast and DCE-MRI domains; and (iv) the joint synthesis approach for multi-sequence DCE-MRI produced temporally coherent synthetic DCE-MRI sequences, indicating the generative model's capability to learn complex contrast enhancement patterns.
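The augmentation setup behind result (iii), training segmentation models on combinations of real and synthetic DCE-MRI data, can be sketched as a simple dataset-mixing step. The function name, the `synthetic_fraction` knob, and the fixed-seed sampling below are illustrative assumptions, not the paper's protocol.

```python
import random

def build_training_set(real_volumes, synthetic_volumes,
                       synthetic_fraction=0.5, seed=0):
    """Combine real and synthetic volumes into one shuffled training set.

    synthetic_fraction (assumed knob): number of synthetic volumes added,
    expressed as a fraction of the number of real volumes.
    """
    rng = random.Random(seed)
    n_synth = min(int(len(real_volumes) * synthetic_fraction),
                  len(synthetic_volumes))
    combined = list(real_volumes) + rng.sample(synthetic_volumes, n_synth)
    rng.shuffle(combined)
    return combined

real = [f"real_{i}" for i in range(100)]
synth = [f"synth_{i}" for i in range(100)]
train = build_training_set(real, synth, synthetic_fraction=0.5)
print(len(train))  # -> 150
```

Varying the fraction of synthetic volumes is the natural way to probe how augmentation affects robustness to the pre-contrast versus DCE-MRI domain shift reported above.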
Virtual contrast injection can produce accurate synthetic DCE-MRI images, potentially enhancing breast cancer diagnosis and treatment protocols. We demonstrate that detecting, localizing, and segmenting tumors using synthetic DCE-MRI is feasible and promising, particularly for patients in whom contrast agent injection is risky or contraindicated. Jointly generating multiple subsequent DCE-MRI sequences can increase image quality and unlock clinical applications assessing tumor characteristics related to their response to contrast media injection, a pillar of personalized treatment planning.