Deep feature loss to denoise OCT images using deep neural networks.

Affiliations

Australian e-Health Research Centre, Australia.

The University of Western Australia, Australia.

Publication Information

J Biomed Opt. 2021 Apr;26(4). doi: 10.1117/1.JBO.26.4.046003.

Abstract

SIGNIFICANCE

Speckle noise is an inherent limitation of optical coherence tomography (OCT) images that makes clinical interpretation challenging. The recent emergence of deep learning could offer a reliable method to reduce noise in OCT images.

AIM

We sought to investigate the use of deep features (VGG) to limit the effect of blurriness and increase perceptual sharpness and to evaluate its impact on the performance of OCT image denoising (DnCNN).

APPROACH

Fifty-one macula-centered OCT pairs were used to train the network. Another set of 20 OCT pairs was used for testing. The DnCNN model was cascaded with a VGG network that acted as a perceptual loss function in place of the traditional L1 and L2 losses. The VGG network remained fixed during the training process. We focused on the individual layers of the VGG-16 network to decipher the contribution of each distinctive layer as a loss function to produce denoised OCT images that were perceptually sharp and that preserved the faint features (retinal layer boundaries) essential for interpretation. The peak signal-to-noise ratio (PSNR), edge-preserving index, and no-reference image sharpness/blurriness metrics [perceptual sharpness index (PSI), just noticeable blur (JNB), and spectral and spatial sharpness measure (S3)] were used to compare deep feature losses with the traditional losses.

RESULTS

The deep feature loss produced images with high perceptual sharpness measures at the cost of a lower PSNR (less smoothness) in OCT images. The deep feature loss outperformed the traditional losses (L1 and L2) on all of the evaluation metrics except PSNR. The PSI, S3, and JNB estimates for the deep feature loss were 0.31, 0.30, and 16.53, respectively. For the L1 and L2 losses, the PSI, S3, and JNB were 0.21 and 0.21, 0.17 and 0.16, and 14.46 and 14.34, respectively.
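For reference, PSNR (the one metric on which the traditional losses retained an advantage) is a pixel-level fidelity measure. A minimal self-contained implementation, assuming 8-bit intensity images:

```python
import numpy as np


def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)


clean = np.full((4, 4), 100.0)
noisy = clean + 5.0  # uniform error of 5 grey levels -> MSE = 25
print(round(psnr(clean, noisy), 2))  # 10 * log10(255**2 / 25) ≈ 34.15
```

Because PSNR rewards low mean squared error, it favors smooth, averaged outputs; this is why a perceptually sharper result can score lower on PSNR while winning on the sharpness measures.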

CONCLUSIONS

We demonstrate the potential of deep feature loss for denoising OCT images. Our preliminary findings suggest directions for further investigation.

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/eb9b/8062795/c647ff4dafb5/JBO-026-046003-g001.jpg
