School of Electronical and Information Engineering, Tianjin University, Tianjin 300072, China.
Med Image Anal. 2020 Aug;64:101727. doi: 10.1016/j.media.2020.101727. Epub 2020 May 23.
For a poor-quality optical coherence tomography (OCT) image, quality enhancement is limited by speckle residue, edge blur and texture loss, especially in the background region near edges. To solve this problem, we propose a despeckling method based on a convolutional neural network (CNN). The proposed method uses a deep nonlinear CNN mapping model with a serial architecture, here named OCTNet, which fully exploits the deep information on speckles, edges and fine textures of an original OCT image. We also construct a pertinent training dataset by combining three existing methods. With the proposed method, the speckle noise can be accurately estimated from an original OCT image. We test our method on four experimental human retinal OCT images and compare it with three state-of-the-art methods: the adaptive complex diffusion (ACD) method, the curvelet shrinkage (Curvelet) method and the shearlet-based total variation (STV) method. The performance of these methods is evaluated quantitatively in terms of image distinguishability, contrast, smoothness and edge sharpness, and analyzed qualitatively with respect to speckle reduction, texture protection and edge preservation. The experimental results show that OCTNet simultaneously reduces speckle noise, protects structural information and preserves edge features effectively, even in the background region near edges. OCTNet also offers excellent generalization, adaptability, robustness and batch performance, making the method suitable for rapidly processing large numbers of different images, without any parameter fine-tuning, in time-constrained real-time situations.
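The abstract describes OCTNet as a serial CNN that estimates the speckle noise from an OCT image, after which the noise can be removed from the input. Below is a minimal, hedged sketch of that idea in PyTorch; the depth (10 layers), channel width (64), 3x3 kernels and batch normalization are illustrative assumptions, not the paper's actual OCTNet configuration.

```python
# Minimal sketch of a serial noise-predicting CNN in the spirit of OCTNet.
# The architecture details below (depth, width, kernel size, batch norm) are
# assumptions for illustration; the abstract does not specify them.
import torch
import torch.nn as nn


class SerialDespeckleCNN(nn.Module):
    """Serial CNN that maps a noisy OCT B-scan to its speckle-noise map."""

    def __init__(self, depth: int = 10, width: int = 64):
        super().__init__()
        layers = [nn.Conv2d(1, width, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [
                nn.Conv2d(width, width, kernel_size=3, padding=1),
                nn.BatchNorm2d(width),
                nn.ReLU(inplace=True),
            ]
        layers += [nn.Conv2d(width, 1, kernel_size=3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy: torch.Tensor) -> torch.Tensor:
        # The network estimates the speckle noise; subtracting it from the
        # input yields the despeckled image (residual-style formulation).
        noise = self.body(noisy)
        return noisy - noise


if __name__ == "__main__":
    model = SerialDespeckleCNN()
    b_scan = torch.rand(1, 1, 256, 256)  # dummy single-channel OCT B-scan
    despeckled = model(b_scan)
    print(despeckled.shape)              # torch.Size([1, 1, 256, 256])
```

Predicting the noise map rather than the clean image directly is a common residual-learning choice for denoising CNNs; whether OCTNet uses exactly this formulation is inferred from the abstract's statement that the method "accurately gets the speckle noise from an original OCT image."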