Kageyama Hajime, Yoshida Nobukiyo, Kondo Keisuke, Akai Hiroyuki
Department of Radiology, Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-Ku, Tokyo, 108-8639, Japan.
Graduate Division of Health Sciences, Komazawa University, 1-23-1 Komazawa, Setagaya-Ku, Tokyo, 154-8525, Japan.
Radiol Phys Technol. 2025 Mar;18(1):172-185. doi: 10.1007/s12194-024-00871-1. Epub 2024 Dec 16.
This study investigated the effectiveness of augmenting datasets for deep-learning-based super-resolution of brain magnetic resonance imaging (MRI) T1-weighted images (T1WIs). By incorporating images with different contrasts from the same subject, the study sought to improve network performance and to assess the impact on image quality metrics, namely peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). This retrospective study included 240 patients who underwent brain MRI. Two types of datasets were created: a Pure-Dataset group comprising only T1WIs and a Mixed-Dataset group comprising T1WIs, T2-weighted images, and fluid-attenuated inversion recovery images. A U-Net-based network and an Enhanced Deep Super-Resolution network (EDSR) were trained on these datasets. Objective image quality was analyzed using PSNR and SSIM, and statistical analyses, including paired t-tests and Pearson's correlation coefficient, were conducted to evaluate the results. Augmenting the datasets with images of different contrasts significantly improved training accuracy as the dataset size increased. For the U-Net trained on mixed datasets, PSNR values ranged from 29.84 to 30.26 dB and SSIM values from 0.9858 to 0.9868; for the EDSR trained on mixed datasets, PSNR values ranged from 32.34 to 32.64 dB and SSIM values from 0.9941 to 0.9945. Significant differences in PSNR and SSIM were observed between models trained on pure and mixed datasets, and Pearson's correlation coefficient indicated a strong positive correlation between dataset size and the image quality metrics. Using diverse image data obtained from the same subject can therefore improve the performance of deep-learning models in medical image super-resolution tasks.
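For readers unfamiliar with the objective metrics reported above, the following is a minimal sketch of how PSNR and SSIM are typically computed between a super-resolved slice and its full-resolution reference, using scikit-image. The function name and the synthetic input arrays are illustrative assumptions; the paper's actual evaluation pipeline is not described here.

```python
# Sketch of PSNR/SSIM evaluation for one image pair (illustrative only;
# not the authors' code). Inputs are assumed to be 2-D float arrays.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_slice(reference: np.ndarray, super_resolved: np.ndarray):
    """Return (PSNR in dB, SSIM) for a reference/super-resolved pair."""
    data_range = float(reference.max() - reference.min())
    psnr = peak_signal_noise_ratio(reference, super_resolved, data_range=data_range)
    ssim = structural_similarity(reference, super_resolved, data_range=data_range)
    return psnr, ssim


# Example usage with synthetic data standing in for MRI slices.
rng = np.random.default_rng(0)
ref = rng.random((256, 256)).astype(np.float32)
sr = (ref + 0.01 * rng.standard_normal((256, 256))).astype(np.float32)
psnr, ssim = evaluate_slice(ref, sr)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```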