Tang Chenwei, Eisenmenger Laura B, Rivera-Rivera Leonardo, Huo Eugene, Junn Jacqueline C, Kuner Anthony D, Oechtering Thekla H, Peret Anthony, Starekova Jitka, Johnson Kevin M
Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA.
Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA.
J Magn Reson Imaging. 2025 Jun;61(6):2572-2584. doi: 10.1002/jmri.29672. Epub 2024 Dec 17.
BACKGROUND: Deep learning (DL) often requires an image quality metric; however, widely used metrics are not designed for medical images.
PURPOSE: To develop an image quality metric specific to MRI using radiologists' image rankings and DL models.
STUDY TYPE: Retrospective.
POPULATION: A total of 19,344 rankings on 2916 unique image pairs from the NYU fastMRI Initiative neuro database were used to train the neural network-based image quality metrics, with an 80%/20% training/validation split and fivefold cross-validation.
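The abstract does not describe how the train/validation split and the fivefold cross-validation were combined; purely as an illustration, the Python sketch below (scikit-learn, with made-up pair labels) shows one way to keep all rankings of a given image pair in the same fold while applying an 80%/20% split within each fold.

```python
# Illustrative sketch (not the authors' code): group-aware cross-validation so
# that all rankings of one image pair stay in a single fold, with an 80%/20%
# train/validation split inside each of five folds.
import numpy as np
from sklearn.model_selection import GroupKFold, train_test_split

rng = np.random.default_rng(0)
n_rankings, n_pairs = 19_344, 2_916                   # counts from the abstract
pair_id = rng.integers(0, n_pairs, size=n_rankings)   # hypothetical pair labels

for fold, (dev_idx, test_idx) in enumerate(
        GroupKFold(n_splits=5).split(np.arange(n_rankings), groups=pair_id)):
    # 80%/20% train/validation split within the development portion of the fold
    train_idx, val_idx = train_test_split(dev_idx, test_size=0.2, random_state=fold)
    print(f"fold {fold}: train={len(train_idx)} val={len(val_idx)} held-out={len(test_idx)}")
```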
FIELD STRENGTH/SEQUENCE: 1.5 T and 3 T; T1, T1 postcontrast, T2, and fluid-attenuated inversion recovery (FLAIR).
ASSESSMENT: Synthetically corrupted image pairs were ranked by radiologists (N = 7), with a subset also scoring images on a Likert scale (N = 2). DL models were trained to match the rankings using two architectures (EfficientNet and IQ-Net), with and without reference image subtraction, and were compared to rankings based on mean squared error (MSE) and structural similarity (SSIM). The image quality-assessing DL models were then evaluated as alternatives to MSE and SSIM as optimization targets for DL denoising and reconstruction.
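The abstract does not state the loss used to fit the radiologist rankings; a common choice for pairwise preference data is a Bradley-Terry style loss on the difference of scalar quality scores. The PyTorch sketch below is a hypothetical minimal example (TinyIQNet is a placeholder, not the published IQ-Net or EfficientNet) that also illustrates the with/without reference-subtraction variants.

```python
# Minimal sketch (PyTorch assumed; the pairwise loss is an assumption, not
# stated in the abstract): training a scalar quality network to reproduce
# radiologist rankings of corrupted image pairs.
import torch
import torch.nn as nn

class TinyIQNet(nn.Module):                      # stand-in for IQ-Net/EfficientNet
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))
    def forward(self, img, reference=None):
        # reference-subtraction variant: score the difference image instead
        x = img - reference if reference is not None else img
        return self.features(x).squeeze(-1)      # scalar quality score per image

model = TinyIQNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# toy batch: two corrupted versions of a slice plus the clean reference, and a
# label of 1 where radiologists preferred image A over image B
img_a, img_b, ref = (torch.randn(4, 1, 64, 64) for _ in range(3))
prefer_a = torch.ones(4)

score_a, score_b = model(img_a, ref), model(img_b, ref)
# Bradley-Terry: P(A preferred) = sigmoid(score_a - score_b)
loss = nn.functional.binary_cross_entropy_with_logits(score_a - score_b, prefer_a)
opt.zero_grad(); loss.backward(); opt.step()
```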
STATISTICAL TESTS: Radiologists' agreement was assessed with a percentage agreement metric and quadratic weighted Cohen's kappa. Ranking accuracies were compared using repeated measures analysis of variance. Reconstruction models trained with the IQ-Net score, MSE, and SSIM were compared by paired t-test. P < 0.05 was considered significant.
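For orientation only, the reported statistics map onto standard library routines; the snippet below uses scikit-learn's quadratic-weighted Cohen's kappa and SciPy's paired t-test on entirely made-up ratings and per-case scores.

```python
# Hedged illustration of the reported statistics (all values below are made up):
# quadratic-weighted Cohen's kappa for inter-rater agreement and a paired
# t-test comparing reconstruction models.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import ttest_rel

reader1 = [1, 2, 2, 3, 4, 4, 5, 1]              # hypothetical Likert-style labels
reader2 = [1, 2, 3, 3, 4, 5, 5, 2]
kappa = cohen_kappa_score(reader1, reader2, weights="quadratic")

scores_iqnet = [0.91, 0.88, 0.93, 0.90, 0.92]   # hypothetical per-case scores
scores_ssim  = [0.89, 0.87, 0.90, 0.90, 0.91]
t_stat, p_value = ttest_rel(scores_iqnet, scores_ssim)
print(f"quadratic weighted kappa = {kappa:.2f}, paired t-test p = {p_value:.3f}")
```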
RESULTS: Compared to direct Likert scoring, ranking produced a higher level of agreement between radiologists (70.4% vs. 25%). Image ranking was subjective, with high intraobserver agreement and lower interobserver agreement. IQ-Net and EfficientNet accurately predicted rankings when given a reference image. However, EfficientNet produced images with artifacts and high MSE when used in denoising tasks, whereas IQ-Net-optimized networks performed well for both denoising and reconstruction tasks.
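The abstract reports that networks optimized with the learned IQ-Net score avoided the artifacts seen with the EfficientNet-based loss; under the same assumptions as the earlier TinyIQNet sketch (and reusing that placeholder class), the snippet below shows how a frozen learned quality score could serve as the optimization target for a denoiser. The sign convention (higher score = better quality) is an assumption.

```python
# Sketch (assumptions: PyTorch; TinyIQNet from the previous sketch stands in for
# the trained IQ-Net, and the denoiser is a toy CNN): using a frozen, learned
# quality score as the optimization target for a denoising network.
import torch
import torch.nn as nn

denoiser = nn.Sequential(                      # toy stand-in for a denoising CNN
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1))
iq_net = TinyIQNet()                           # placeholder for the pretrained ranking model
for p in iq_net.parameters():                  # freeze the quality network
    p.requires_grad_(False)

opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
noisy, clean = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)

denoised = denoiser(noisy)
# maximize the predicted quality of the denoised image (higher = better is an
# assumption); the reference-subtraction variant also sees the clean image
loss = -iq_net(denoised, clean).mean()
opt.zero_grad(); loss.backward(); opt.step()
```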
DATA CONCLUSION: Image quality networks can be trained from image rankings and used to optimize DL tasks.
EVIDENCE LEVEL: 3. TECHNICAL EFFICACY: Stage 1.