Nam Chang-Mo, Lee Kyong Joon, Ko Yousun, Kim Kil Joong, Kim Bohyoung, Lee Kyoung Ho
Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82 Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyeonggi-do, 13620, Korea.
Division of Biomedical Engineering, Hankuk University of Foreign Studies, Oedae-ro 81, Mohyeon-myeon, Cheoin-gu, Yongin-si, Gyeonggi-do, 17035, Korea.
BMC Med Imaging. 2018 Dec 17;18(1):53. doi: 10.1186/s12880-017-0244-2.
To develop an algorithm that predicts the visually lossless thresholds (VLTs) of CT images for JPEG2000 compression solely from the original images, by exploiting image features and DICOM header information, and to evaluate the algorithm against pre-existing image fidelity metrics.
Five radiologists independently determined the VLT of 206 body CT images for JPEG2000 compression using the QUEST procedure. The images were divided into training (n = 103) and testing (n = 103) sets. Using the training set, a multiple linear regression (MLR) model was constructed with the image features and DICOM header information as independent variables and the reference VLT, defined as the median of the radiologists' responses, as the dependent variable, after an optimal subset of independent variables had been determined by backward stepwise selection in a cross-validation scheme. Performance was evaluated on the testing set by measuring the absolute differences and the intra-class correlation (ICC) coefficient between the reference VLTs and the VLTs predicted by the model. The performance of the model was also compared with that of two image fidelity metrics, peak signal-to-noise ratio (PSNR) and the high-dynamic-range visual difference predictor (HDRVDP). The times for computing the VLTs with the MLR model, PSNR, and HDRVDP were compared using repeated-measures ANOVA with a post-hoc analysis. P < 0.05 was considered to indicate a statistically significant difference.
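The backward stepwise selection inside a cross-validation scheme described above can be sketched as follows. This is a minimal illustration, not the authors' exact protocol: the fold count, the drop tolerance, and the toy feature matrix are all assumptions.

```python
import numpy as np

def kfold_cv_mae(X, y, cols, k=5):
    """Mean absolute error of ordinary least squares over k CV folds,
    using only the feature columns listed in `cols`."""
    idx = np.arange(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        # Fit OLS with an intercept on the training folds.
        A = np.column_stack([np.ones(len(train)), X[np.ix_(train, cols)]])
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        # Evaluate on the held-out fold.
        B = np.column_stack([np.ones(len(fold)), X[np.ix_(fold, cols)]])
        errs.append(np.mean(np.abs(B @ coef - y[fold])))
    return float(np.mean(errs))

def backward_stepwise(X, y, k=5, tol=1e-6):
    """Repeatedly drop a feature whose removal does not worsen
    the cross-validated error (within `tol`)."""
    cols = list(range(X.shape[1]))
    best = kfold_cv_mae(X, y, cols, k)
    improved = True
    while improved and len(cols) > 1:
        improved = False
        for c in list(cols):
            trial = [d for d in cols if d != c]
            err = kfold_cv_mae(X, y, trial, k)
            if err <= best + tol:
                cols, best, improved = trial, err, True
                break
    return cols, best

# Toy demonstration (hypothetical data): the target depends on
# only two of five candidate features, so the other three get dropped.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5
selected, cv_err = backward_stepwise(X, y)
print(selected, cv_err)
```

In the study itself the candidate independent variables would be the computed image features and DICOM header fields, and the target the reference VLT; here both are stand-ins.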
The mean absolute differences from the reference VLT were 0.58 (95% CI, 0.48, 0.67), 0.73 (0.61, 0.85), and 0.68 (0.58, 0.79) for the MLR model, PSNR, and HDRVDP, respectively, a statistically significant difference (p < 0.01). The ICC coefficients of the MLR model, PSNR, and HDRVDP were 0.88 (95% CI, 0.81, 0.95), 0.85 (0.79, 0.91), and 0.84 (0.77, 0.91). The computing times per image were 1.5 ± 0.1 s, 3.9 ± 0.3 s, and 68.2 ± 1.4 s for the MLR model, PSNR, and HDRVDP, respectively.
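The two agreement statistics reported above can be reproduced with a short numpy sketch. The ICC variant shown (two-way, single-measure consistency, often written ICC(3,1)) and the toy data are assumptions, since the abstract does not state which ICC form was used.

```python
import numpy as np

def mean_abs_diff(pred, ref):
    """Mean absolute difference between predicted and reference VLTs."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(ref))))

def icc_consistency(pred, ref):
    """Two-way mixed, single-measure consistency ICC (ICC(3,1)),
    treating the model and the reference as two 'raters'."""
    Y = np.column_stack([pred, ref])          # subjects x raters
    n, k = Y.shape
    grand = Y.mean()
    # Between-subject mean square.
    ms_rows = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)
    # Residual mean square of the two-way layout.
    resid = (Y - Y.mean(axis=1, keepdims=True)
               - Y.mean(axis=0, keepdims=True) + grand)
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return float((ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err))

# Toy check (hypothetical VLTs): a constant offset yields a mean
# absolute difference of 0.5 but leaves the consistency ICC at 1.0.
ref = np.array([6.0, 8.0, 10.0, 12.0, 9.0, 7.0])
print(mean_abs_diff(ref + 0.5, ref))    # 0.5
print(icc_consistency(ref + 0.5, ref))  # 1.0
```

The consistency form is insensitive to a constant bias between predicted and reference VLTs, which is why the abstract reports the absolute differences alongside the ICC.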
The proposed MLR model, which directly predicts the VLT of a given CT image, showed performance competitive with that of the pre-existing image fidelity metrics at a lower computational cost. The model is therefore promising for adaptive compression of CT images.