Han Xu, Fan Fangfang, Rong Jingzhao, Li Zhen, El Fakhri Georges, Chen Qingyu, Liu Xiaofeng
Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06519, USA.
Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA 02140, USA.
Proc SPIE Int Soc Opt Eng. 2025 Feb;13411. doi: 10.1117/12.3046450. Epub 2025 Apr 10.
The Text-to-Medical-Image (T2MedI) approach, built on latent diffusion models, holds significant promise for addressing the scarcity of medical imaging data and for elucidating the appearance distribution of lesions corresponding to specific patient-status descriptions. As with natural image synthesis models, however, our investigation reveals that a T2MedI model can exhibit bias toward certain subgroups, potentially neglecting minority groups present in the training dataset. In this study, we first developed a T2MedI model adapted from the pre-trained Imagen framework. The model employs a fixed Contrastive Language-Image Pre-training (CLIP) text encoder, and its decoder is fine-tuned on medical images from the Radiology Objects in COntext (ROCO) dataset. We conducted both qualitative and quantitative analyses to examine its gender bias. To address this issue, we propose a subgroup-distribution alignment method applied during fine-tuning on a target application dataset. Specifically, an alignment loss, guided by an off-the-shelf sensitivity-subgroup classifier, encourages the classification probabilities of the generated images to match the subgroup distribution expected in the target dataset. In addition, we preserve image quality with a CLIP-consistency regularization term based on a knowledge distillation framework. For evaluation, we designated the BraTS18 dataset as the target and developed a gender classifier on brain magnetic resonance (MR) imaging slices derived from it. Our method significantly mitigates gender-representation inconsistencies in the generated MR images, aligning them more closely with the gender distribution of the BraTS18 dataset.
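The abstract describes three ingredients that combine during fine-tuning: the standard diffusion denoising objective, a subgroup-distribution alignment loss driven by a frozen sensitivity-subgroup classifier, and a CLIP-consistency distillation term against the frozen pre-trained model. The PyTorch sketch below shows one plausible way to wire these together; it is not the authors' implementation. The sampling and loss APIs (`student.sample`, `denoising_loss`), the KL-divergence form of the alignment loss, the target distribution values, and the loss weights are all illustrative assumptions, and differentiable sampling through the diffusion model is assumed.

```python
import torch
import torch.nn.functional as F

# Hypothetical target subgroup distribution, e.g. the female/male ratio of the
# BraTS18 slices; in practice this is estimated from the target dataset.
TARGET_DIST = torch.tensor([0.5, 0.5])

def subgroup_alignment_loss(gen_images, subgroup_classifier):
    """Align the batch-averaged subgroup probabilities of generated images
    with the target dataset's subgroup distribution. KL divergence is one
    plausible choice; the paper's exact formulation may differ."""
    logits = subgroup_classifier(gen_images)        # frozen, off-the-shelf classifier
    batch_dist = F.softmax(logits, dim=-1).mean(0)  # expected subgroup distribution
    target = TARGET_DIST.to(batch_dist.device)
    # KL(target || batch_dist); gradients flow back into the generator only.
    return F.kl_div(batch_dist.log(), target, reduction="sum")

def clip_consistency_loss(clip_image_encoder, student_images, teacher_images):
    """Knowledge-distillation regularizer: keep the fine-tuned model's samples
    close to the frozen pre-trained model's samples in CLIP image-embedding
    space, preserving image quality while the subgroup distribution shifts."""
    with torch.no_grad():
        t = F.normalize(clip_image_encoder(teacher_images), dim=-1)
    s = F.normalize(clip_image_encoder(student_images), dim=-1)
    return (1.0 - (s * t).sum(-1)).mean()           # mean cosine distance

def finetune_step(student, teacher, clip_image_encoder, subgroup_classifier,
                  prompts, lambda_align=1.0, lambda_clip=0.1):
    """One fine-tuning step combining the two auxiliary terms with the usual
    denoising objective; the weights here are illustrative, and `sample` is
    assumed to return images with gradients retained (e.g. via a short,
    differentiable denoising chain)."""
    gen = student.sample(prompts)                   # hypothetical API
    with torch.no_grad():
        ref = teacher.sample(prompts)               # frozen pre-trained reference
    return (student.denoising_loss(prompts)
            + lambda_align * subgroup_alignment_loss(gen, subgroup_classifier)
            + lambda_clip * clip_consistency_loss(clip_image_encoder, gen, ref))
```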