Ruan Yuyan, Yang Dawei, Tang Ziqi, Ran Ran An, Wang Jiguang, Cheung Carol Y, Chen Hao
IEEE Trans Neural Netw Learn Syst. 2025 Jul;36(7):12146-12158. doi: 10.1109/TNNLS.2024.3456483.
Optical coherence tomography angiography (OCTA) can visualize the retinal microvasculature and is important for qualitatively and quantitatively identifying potential biomarkers of different retinal diseases. However, the resolution of optical coherence tomography (OCT) angiograms inevitably decreases when the field-of-view (FOV) is increased under a fixed acquisition time. To address this issue, we propose a novel reference-based super-resolution (RefSR) framework that preserves the resolution of OCT angiograms while increasing the scanning area. Specifically, textures from the normal RefSR pipeline are used to train a learnable texture generator (LTG), which is designed to generate textures according to the input. The key difference between the proposed method and traditional RefSR models is that the textures used during inference are generated by the LTG instead of being searched from a single reference (Ref) image. Since the LTG is optimized throughout the whole training process, the available texture space is significantly enlarged and is no longer limited to a single Ref image, but extends to all textures contained in the training samples. Moreover, our proposed LTGNet does not require a Ref image at the inference phase, making it robust to the choice of Ref image. Both experimental and visual results show that LTGNet achieves competitive performance and robustness compared with state-of-the-art methods, indicating good reliability and promise for real-life deployment. The source code is available at https://github.com/RYY0722/LTGNet.
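The core idea in the abstract, a learned module that produces texture features directly from the input so no reference image is needed at inference, can be illustrated with a minimal sketch. This is a hypothetical toy module, not the authors' implementation; the class name, channel sizes, and layer choices are all assumptions for illustration.

```python
import torch
import torch.nn as nn


class LearnableTextureGenerator(nn.Module):
    """Toy sketch of an LTG-style module (hypothetical, not the paper's code).

    In conventional RefSR, high-frequency textures are searched in a single
    reference (Ref) image. Here, a small trainable network generates texture
    features conditioned on the low-resolution input itself, so inference
    needs no Ref image and the texture space is shaped by all training data.
    """

    def __init__(self, in_channels: int = 64, texture_channels: int = 64):
        super().__init__()
        # A small conv stack stands in for the learned texture knowledge.
        self.generator = nn.Sequential(
            nn.Conv2d(in_channels, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, texture_channels, kernel_size=3, padding=1),
        )

    def forward(self, lr_features: torch.Tensor) -> torch.Tensor:
        # Textures are produced from the input features alone.
        return self.generator(lr_features)


# Usage: generate textures for a batch of low-resolution angiogram features.
lr_feat = torch.randn(1, 64, 32, 32)   # placeholder LR feature map
ltg = LearnableTextureGenerator()
textures = ltg(lr_feat)                # same spatial size, texture channels
```

During training, such a module would be supervised by textures obtained from a standard RefSR pipeline; at inference it runs standalone, which is what makes the approach insensitive to Ref-image selection.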