Zhou Yi, Peng Tao, Ahmed Thiara Sana, Shi Fei, Zhu Weifang, Xiang Dehui, Schmetterer Leopold, Jiang Jianxin, Tan Bingyao, Chen Xinjian
School of Electronics and Information Engineering, Soochow University, 1 Shizi Street, Suzhou, 215006, Jiangsu, China.
School of Future Science and Engineering, Soochow University, 1 Shizi Street, Suzhou, 215006, Jiangsu, China.
Comput Med Imaging Graph. 2025 Sep;124:102597. doi: 10.1016/j.compmedimag.2025.102597. Epub 2025 Jul 4.
Speckle noise in optical coherence tomography (OCT) images compromises the performance of image analysis tasks such as retinal layer boundary detection. Deep learning algorithms have demonstrated the advantage of being more cost-effective and robust than hardware solutions and conventional image processing algorithms. However, these methods usually require large training datasets, which are time-consuming to acquire. This paper proposes a novel method, Adversarial Meta-learning for Few-shot raw retinal OCT image Despeckling (AMeta-FD), to reduce speckle noise in OCT images. Our method involves two training phases: (1) adversarial meta-training on synthetic noisy OCT image pairs, and (2) fine-tuning with a small set of raw-clean image pairs containing speckle noise. Additionally, we introduce a new suppression loss that effectively reduces the contribution of non-tissue pixels. The ground truth used in this study is generated by registering and averaging multiple repeated images. AMeta-FD requires only 60 raw-clean image pairs, about 12% of the whole training dataset, yet it achieves performance on par with traditional transfer training that utilizes the entire training dataset. Extensive evaluations show that, in terms of signal-to-noise ratio (SNR), AMeta-FD surpasses traditional non-learning-based despeckling methods by at least 15 dB. It also outperforms the recent meta-learning-based image denoising method Few-Shot Meta-Denoising (FSMD) by 11.01 dB, and exceeds our previous best method by 3 dB. The code for AMeta-FD is available at https://github.com/Zhouyi-Zura/AMeta-FD.
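The register-and-average scheme for generating clean ground truth can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's actual pipeline: registration is reduced to an integer axial shift estimated by 1-D cross-correlation of mean profiles, and the function names (`align_to_reference`, `average_repeats`) and the toy speckle model are assumptions for the sake of the example; real OCT pipelines typically use more sophisticated, often non-rigid, registration.

```python
import numpy as np

rng = np.random.default_rng(0)

def align_to_reference(frame, reference):
    """Estimate an integer axial shift via 1-D cross-correlation of
    mean axial profiles, then roll the frame into alignment."""
    ref_profile = reference.mean(axis=1)
    frm_profile = frame.mean(axis=1)
    corr = np.correlate(ref_profile - ref_profile.mean(),
                        frm_profile - frm_profile.mean(), mode="full")
    shift = corr.argmax() - (len(frm_profile) - 1)
    return np.roll(frame, shift, axis=0)

def average_repeats(frames):
    """Register all repeated frames to the first one and average them,
    attenuating zero-mean speckle fluctuations."""
    reference = frames[0]
    aligned = [reference] + [align_to_reference(f, reference)
                             for f in frames[1:]]
    return np.mean(aligned, axis=0)

# Toy demo: a step-edge "retinal band" corrupted by multiplicative
# speckle with unit mean (gamma-distributed, a common speckle model).
clean = np.zeros((64, 64))
clean[20:40, :] = 1.0
repeats = [clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
           for _ in range(8)]
ground_truth = average_repeats(repeats)

# Residual error shrinks roughly with the square root of the number
# of averaged repeats.
noise_single = np.abs(repeats[0] - clean).mean()
noise_avg = np.abs(ground_truth - clean).mean()
```

Averaging N registered repeats reduces uncorrelated speckle roughly by a factor of sqrt(N), which is why repeated acquisitions can serve as pseudo-clean targets for supervised despeckling.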