Razieh Ganjee, Bingjie Wang, Lingyun Wang, Chengcheng Zhao, José-Alain Sahel, Shaohua Pi
Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA 15213, USA.
Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA.
Biomed Opt Express. 2024 Nov 6;15(12):6725-6738. doi: 10.1364/BOE.538904. eCollection 2024 Dec 1.
Visible light optical coherence tomography (vis-OCT) is gaining traction for retinal imaging due to its high resolution and functional capabilities. However, the significant absorption of hemoglobin in the visible light range leads to pronounced shadow artifacts from retinal blood vessels, posing challenges for accurate layer segmentation. In this study, we present BreakNet, a multi-scale Transformer-based segmentation model designed to address boundary discontinuities caused by these shadow artifacts. BreakNet utilizes hierarchical Transformer and convolutional blocks to extract multi-scale global and local feature maps, capturing essential contextual, textural, and edge characteristics. The model incorporates decoder blocks that expand pathways to enhance the extraction of fine details and semantic information, ensuring precise segmentation. Evaluated on rodent retinal images acquired with prototype vis-OCT, BreakNet demonstrated superior performance over state-of-the-art segmentation models, such as TCCT-BP and U-Net, even when faced with limited-quality ground truth data. Our findings indicate that BreakNet has the potential to significantly improve retinal quantification and analysis.
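The hybrid design the abstract describes — convolutional blocks capturing local texture and edge features alongside Transformer blocks capturing global context across the B-scan — can be illustrated with a minimal NumPy sketch. Everything below (the toy 32×32 B-scan patch, the Sobel-like kernel, the patch/embedding sizes, and the random weights) is invented for illustration only and is not the authors' BreakNet implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    # tokens: (N, d) patch embeddings; attention mixes information
    # across ALL patches, giving each token global context
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

def conv3x3(img, kernel):
    # naive 3x3 convolution with zero padding; a purely LOCAL operation
    h, w = img.shape
    padded = np.pad(img, 1)
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

rng = np.random.default_rng(0)
bscan = rng.standard_normal((32, 32))  # toy stand-in for an OCT B-scan patch

# local branch: edge-sensitive conv features (Sobel-like kernel, illustrative)
local = conv3x3(bscan, np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]]))

# global branch: split into 16 non-overlapping 8x8 patches, embed, attend
patches = bscan.reshape(4, 8, 4, 8).transpose(0, 2, 1, 3).reshape(16, 64)
d = 32  # embedding dimension, chosen arbitrarily
We = rng.standard_normal((64, d)) * 0.1
tokens = patches @ We
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
global_feats = self_attention(tokens, Wq, Wk, Wv)

print(local.shape, global_feats.shape)  # (32, 32) (16, 32)
```

In a full segmentation network such hybrids fuse both branches across several scales before decoding, which is what lets the model bridge boundary breaks under vessel shadows: the conv path preserves edge detail while the attention path propagates layer context across the shadowed region.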