Tian Xin, Anantrasirichai Nantheera, Nicholson Lindsay, Achim Alin
Visual Information Laboratory, University of Bristol, Bristol, UK.
Autoimmune Inflammation Research, University of Bristol, Bristol, UK.
Biol Imaging. 2024 Dec 16;4:e15. doi: 10.1017/S2633903X24000163. eCollection 2024.
Optical coherence tomography (OCT) and confocal microscopy are pivotal in retinal imaging, each with distinct advantages and limitations. OCT offers rapid, noninvasive imaging but can suffer from limited clarity and motion artifacts, while confocal microscopy provides high-resolution color images with cellular detail but is invasive and raises ethical concerns. To bridge the benefits of both modalities, we propose a novel framework based on an unsupervised 3D CycleGAN for translating unpaired OCT images into confocal microscopy images. This marks the first attempt to exploit the inherent 3D information of OCT and translate it into the rich, detailed color domain of confocal microscopy. We also introduce a unique dataset, OCT2Confocal, comprising mouse OCT and confocal retinal images, to support the development and benchmarking of cross-modal image translation research. Evaluated both quantitatively and qualitatively, our model achieves a Fréchet inception distance (FID) score of 0.766 and a kernel inception distance (KID) score as low as 0.153, along with the leading subjective mean opinion score (MOS). With limited training data, it demonstrates superior image fidelity and quality over existing methods. Our approach effectively synthesizes color information from 3D confocal images, closely approximating target outcomes and suggesting enhanced potential for diagnostic and monitoring applications in ophthalmology.
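The unpaired translation rests on CycleGAN's cycle-consistency objective: mapping an OCT volume to the confocal domain and back should recover the original volume, and vice versa. A minimal NumPy sketch of that L1 cycle loss is shown below; the toy mappings `G` and `F`, and the array shapes, are hypothetical stand-ins for the paper's 3D generators, not the authors' implementation.

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """L1 cycle loss: F(G(x)) should recover x, and G(F(y)) should recover y."""
    return np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y))

# Toy invertible mappings standing in for the OCT->confocal generator G
# and the confocal->OCT generator F (hypothetical, for illustration only).
G = lambda v: v * 2.0
F = lambda v: v / 2.0

x = np.ones((2, 4, 4, 4))          # toy 3D OCT volumes (hypothetical shape)
y = np.full((2, 4, 4, 4), 3.0)     # toy 3D confocal volumes

print(cycle_consistency_loss(x, y, G, F))  # -> 0.0 for exact inverses
```

In the full model this term is added to the adversarial losses of both domain discriminators, which is what allows training without any paired OCT-confocal examples.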