Rajaraman Sivaramakrishnan, Liang Zhaohui, Xue Zhiyun, Antani Sameer
Division of Intramural Research, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA.
Proceedings (IEEE Int Conf Bioinformatics Biomed). 2024 Dec;2024:5059-5066. doi: 10.1109/bibm62325.2024.10822172.
Deep learning (DL) has transformed medical image classification; however, its efficacy is often limited by significant data imbalance, as disease-positive cases (the minority class) are far fewer than controls (the majority class). Synthetic image augmentation techniques have been shown to simulate clinical variability and enhance model performance. We hypothesize that they can also mitigate data imbalance, thereby reducing overfitting to the majority class and improving generalization. Recently, latent diffusion models (LDMs) have shown promise in synthesizing high-quality medical images. This study evaluates the effectiveness of a text-guided image-to-image LDM in synthesizing disease-positive chest X-rays (CXRs) and augmenting a pediatric CXR dataset to improve classification performance. We first establish baseline performance by fine-tuning an ImageNet-pretrained Inception-V3 model on class-imbalanced data for two tasks: normal vs. pneumonia and normal vs. bronchopneumonia. Next, we fine-tune individual text-guided image-to-image LDMs to generate CXRs showing signs of pneumonia and bronchopneumonia. The Inception-V3 model is then retrained on an updated dataset that includes these synthesized images in the augmented training and validation sets. Classification performance is compared against the baseline using balanced accuracy, sensitivity, specificity, F-score, Matthews correlation coefficient (MCC), Kappa, and Youden's index. Results show that the augmentation significantly improves Youden's index (p < 0.05) and markedly enhances the other metrics, indicating that data augmentation using LDM-synthesized images is an effective strategy for addressing class imbalance in medical image classification.
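As a rough illustration of the text-guided image-to-image synthesis step, the sketch below conditions a latent diffusion model on a normal CXR and a disease-describing prompt using the Hugging Face diffusers library. The checkpoint name, prompt wording, and strength/guidance values are illustrative assumptions only, not the LDM configuration fine-tuned in this study.

```python
# Minimal sketch of text-guided image-to-image synthesis with a latent diffusion model.
# The base checkpoint, prompt, and sampling parameters are assumptions for illustration.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed base checkpoint; the study fine-tunes its own LDMs
    torch_dtype=torch.float16,
).to("cuda")

# A normal pediatric CXR serves as the conditioning image.
source_cxr = Image.open("normal_cxr.png").convert("RGB").resize((512, 512))

# The text prompt steers synthesis toward the minority (disease-positive) class.
synthetic_cxr = pipe(
    prompt="chest X-ray with radiographic signs of pneumonia",
    image=source_cxr,
    strength=0.6,         # how far the output may deviate from the source image
    guidance_scale=7.5,   # weight given to the text prompt
    num_inference_steps=50,
).images[0]

synthetic_cxr.save("synthetic_pneumonia_cxr.png")
```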
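The evaluation metrics named in the abstract can be computed from binary predictions with scikit-learn; the snippet below is a minimal sketch with hypothetical label arrays, where Youden's index is taken as sensitivity + specificity - 1.

```python
# Minimal sketch of the reported evaluation metrics; y_true / y_pred are hypothetical.
import numpy as np
from sklearn.metrics import (balanced_accuracy_score, f1_score, matthews_corrcoef,
                             cohen_kappa_score, confusion_matrix)

y_true = np.array([0, 0, 0, 1, 1, 0, 1, 0])  # 1 = disease-positive (minority class)
y_pred = np.array([0, 0, 1, 1, 1, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # recall on the disease-positive class
specificity = tn / (tn + fp)

metrics = {
    "balanced_accuracy": balanced_accuracy_score(y_true, y_pred),
    "sensitivity": sensitivity,
    "specificity": specificity,
    "f_score": f1_score(y_true, y_pred),
    "mcc": matthews_corrcoef(y_true, y_pred),
    "kappa": cohen_kappa_score(y_true, y_pred),
    "youdens_index": sensitivity + specificity - 1.0,
}
print(metrics)
```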