Wang Ke, Tamir Jonathan I, De Goyeneche Alfredo, Wollner Uri, Brada Rafi, Yu Stella X, Lustig Michael
Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA.
International Computer Science Institute, University of California at Berkeley, Berkeley, California, USA.
Magn Reson Med. 2022 Jul;88(1):476-491. doi: 10.1002/mrm.29227. Epub 2022 Apr 3.
To improve reconstruction fidelity of fine structures and textures in deep learning (DL)-based reconstructions.
A novel patch-based Unsupervised Feature Loss (UFLoss) is proposed and incorporated into the training of DL-based reconstruction frameworks to preserve perceptual similarity and high-order statistics. The UFLoss provides instance-level discrimination by mapping similar instances to similar low-dimensional feature vectors and is trained without any human annotation. By adding this additional loss function on the low-dimensional feature space during training, reconstruction frameworks operating on under-sampled or corrupted data can reproduce more realistic images that are closer to the originals, with finer textures, sharper edges, and improved overall image quality. The performance of the proposed UFLoss is demonstrated on unrolled networks for accelerated two-dimensional (2D) and three-dimensional (3D) knee MRI reconstruction with retrospective under-sampling. Quantitative metrics, including normalized root mean squared error (NRMSE), structural similarity index (SSIM), and our proposed UFLoss, were used to evaluate the proposed method and compare it with other approaches.
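As a rough illustration of how such a patch-based feature loss might enter training, the sketch below combines a standard pixel-wise loss with an L2 penalty between low-dimensional features of matching patches from the reconstruction and the fully sampled reference. The `feature_net`, patch size, stride, and weighting `lam` are placeholder assumptions, not values from the paper; the authors' pretrained UFLoss network (see their repository) plays the role of `feature_net`.

```python
# Minimal sketch (not the authors' exact implementation) of adding a
# patch-based feature-space penalty to a pixel-wise reconstruction loss.
import torch
import torch.nn.functional as F


def extract_patches(img, patch_size=40, stride=20):
    """Slide a window over the image and return a batch of patches."""
    # img: (B, C, H, W) -> (B * num_patches, C, patch_size, patch_size)
    patches = F.unfold(img, kernel_size=patch_size, stride=stride)
    b, _, n = patches.shape
    return patches.transpose(1, 2).reshape(b * n, img.shape[1], patch_size, patch_size)


def ufloss_term(recon, target, feature_net, patch_size=40, stride=20):
    """L2 distance between low-dimensional features of matching patches."""
    f_recon = feature_net(extract_patches(recon, patch_size, stride))
    f_target = feature_net(extract_patches(target, patch_size, stride))
    return F.mse_loss(f_recon, f_target)


def training_loss(recon, target, feature_net, lam=1.0):
    """Pixel-wise loss plus the feature-space (UFLoss-style) penalty."""
    pixel_loss = F.l1_loss(recon, target)  # standard per-pixel term
    return pixel_loss + lam * ufloss_term(recon, target, feature_net)
```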
In vivo experiments indicate that adding the UFLoss encourages sharper edges and more faithful contrasts compared to conventional and learning-based methods with pure ℓ2 loss. More detailed textures can be seen in both 2D and 3D knee MR images. Quantitative results indicate that reconstruction with UFLoss provides comparable NRMSE and higher SSIM while achieving a much lower UFLoss value.
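For reference, the reported metrics can be computed roughly as below on magnitude images; the normalization and data-range choices are assumptions for illustration and may differ from the paper's evaluation.

```python
# Illustrative computation of the reported quantitative metrics.
import numpy as np
from skimage.metrics import structural_similarity


def nrmse(recon, ref):
    """Normalized root mean squared error: ||recon - ref||_2 / ||ref||_2."""
    return np.linalg.norm(recon - ref) / np.linalg.norm(ref)


def ssim(recon, ref):
    """Structural similarity index on magnitude images."""
    return structural_similarity(ref, recon, data_range=ref.max() - ref.min())
```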
We present UFLoss, a patch-based unsupervised learned feature loss, which allows DL-based reconstruction frameworks to be trained to obtain more detailed textures, finer features, and sharper edges with higher overall image quality. (Code available at: https://github.com/mikgroup/UFLoss).