He Chunming, Li Kai, Xu Guoxia, Yan Jiangpeng, Tang Longxiang, Zhang Yulun, Wang Yaowei, Li Xiu
IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):18404-18418. doi: 10.1109/TNNLS.2023.3315307. Epub 2024 Dec 2.
Unpaired medical image enhancement (UMIE) aims to transform a low-quality (LQ) medical image into a high-quality (HQ) one without relying on paired images for training. While most existing approaches are based on Pix2Pix/CycleGAN and are effective to some extent, they fail to explicitly use HQ information to guide the enhancement process, which can lead to undesired artifacts and structural distortions. In this article, we propose a novel UMIE approach that avoids this limitation by directly encoding HQ cues into the LQ enhancement process in a variational fashion, thereby modeling the UMIE task under the joint distribution of the LQ and HQ domains. Specifically, we extract features from an HQ image and explicitly insert these features, which are expected to encode HQ cues, into the enhancement network to guide the LQ enhancement via a variational normalization module. We train the enhancement network adversarially with a discriminator to ensure that the generated HQ image falls within the HQ domain. We further propose a content-aware loss that guides the enhancement process with wavelet-based pixel-level and multiencoder-based feature-level constraints. Additionally, since a key motivation for image enhancement is to make the enhanced images better serve downstream tasks, we propose a bi-level learning scheme that optimizes the UMIE task and downstream tasks cooperatively, helping generate HQ images that are both visually appealing and favorable for downstream tasks. Experiments on three medical datasets verify that our method outperforms existing techniques in terms of both enhancement quality and downstream task performance. The code and the newly collected datasets are publicly available at https://github.com/ChunmingHe/HQG-Net.
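To make the HQ-guidance idea concrete, below is a minimal numpy sketch of how statistics extracted from an HQ feature map could modulate a normalized LQ feature map, in the spirit of the variational normalization module described above. This is an illustrative AdaIN-style stand-in under assumed tensor layouts (channels-first), not the paper's actual implementation; the function name `hq_guided_norm` and the small sampling noise are hypothetical.

```python
import numpy as np

def hq_guided_norm(lq_feat, hq_feat, eps=1e-5, noise_scale=0.01, seed=0):
    """Illustrative HQ-guided normalization: whiten the LQ feature map
    per channel, then re-modulate it with statistics drawn from the HQ
    feature map. Inputs are (C, H, W) arrays."""
    # Per-channel statistics over the spatial dimensions.
    lq_mu = lq_feat.mean(axis=(1, 2), keepdims=True)
    lq_sd = lq_feat.std(axis=(1, 2), keepdims=True) + eps
    hq_mu = hq_feat.mean(axis=(1, 2), keepdims=True)
    hq_sd = hq_feat.std(axis=(1, 2), keepdims=True) + eps
    # "Variational" flavour: sample the shift around the HQ mean instead
    # of copying it deterministically (a crude reparameterization stand-in).
    rng = np.random.default_rng(seed)
    mu = hq_mu + noise_scale * rng.standard_normal(hq_mu.shape)
    # Whiten LQ features, then re-scale/shift with HQ-derived statistics.
    return (lq_feat - lq_mu) / lq_sd * hq_sd + mu

lq = np.random.default_rng(1).standard_normal((8, 16, 16))
hq = 2.0 + 0.5 * np.random.default_rng(2).standard_normal((8, 16, 16))
out = hq_guided_norm(lq, hq)
print(out.shape)  # (8, 16, 16)
```

After modulation, the output inherits the HQ map's per-channel mean and spread while keeping the LQ map's spatial structure, which is the basic mechanism by which HQ cues can steer the enhancement.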
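The pixel-level part of the content-aware loss can likewise be sketched: a one-level Haar decomposition separates a low-frequency band (mostly content/structure) from high-frequency detail, and a constraint on the low-frequency bands keeps anatomy stable while appearance changes. The snippet below is a simplified illustration under assumed even image sizes; the function names and the choice of L1 distance are placeholders, not the paper's exact loss.

```python
import numpy as np

def haar_ll(img):
    """One-level 2-D Haar transform, LL band only: 2x2 block sums scaled
    by 1/2. The LL band carries the low-frequency content of the image."""
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 2.0

def wavelet_content_loss(enhanced, lq):
    """Illustrative pixel-level content constraint: L1 distance between
    the LL bands of the enhanced output and the original LQ input, so
    that enhancement may alter appearance but preserves coarse structure."""
    return float(np.abs(haar_ll(enhanced) - haar_ll(lq)).mean())

img = np.random.default_rng(0).random((64, 64))
print(wavelet_content_loss(img, img))  # identical inputs -> 0.0
```

Restricting the penalty to the low-frequency band is what makes the constraint content-aware: high-frequency changes (noise removal, sharpening) are left unpenalized, while large structural deviations are discouraged.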