School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China.
Br J Ophthalmol. 2023 Sep;107(9):1350-1355. doi: 10.1136/bjophthalmol-2022-321348. Epub 2022 Jun 13.
BACKGROUND/AIMS: To develop and validate a deep learning model for automated segmentation of multitype retinal fluid using optical coherence tomography (OCT) images.
METHODS: We retrospectively collected a total of 2814 completely anonymised OCT images with subretinal fluid (SRF) and intraretinal fluid (IRF) from 141 patients between July 2018 and June 2020, constituting our in-house retinal OCT dataset. On this dataset, we developed a novel semisupervised retinal fluid segmentation deep network (Ref-Net) to automatically identify SRF and IRF in a coarse-to-refine fashion. We performed quantitative and qualitative analyses of the model's performance and verified its generalisation ability by training on our in-house retinal OCT dataset and testing on the unseen Kermany dataset. We also determined the importance of the major components of the semisupervised Ref-Net through extensive ablation studies. The main outcome measures were Dice similarity coefficient (Dice), sensitivity (Sen), specificity (Spe) and mean absolute error (MAE).
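The abstract does not spell out how the outcome measures are computed; as a minimal sketch, assuming per-pixel binary masks for each fluid type (the function name segmentation_metrics and the epsilon smoothing term are illustrative assumptions, not the authors' implementation), they could be evaluated as follows.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Dice, sensitivity, specificity and mean absolute error for binary masks.

    pred, gt: arrays of the same shape with values in {0, 1}
    (predicted and ground-truth segmentation of one fluid type).
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)

    tp = np.logical_and(pred, gt).sum()       # fluid pixels found correctly
    fp = np.logical_and(pred, ~gt).sum()      # background marked as fluid
    fn = np.logical_and(~pred, gt).sum()      # fluid pixels missed
    tn = np.logical_and(~pred, ~gt).sum()     # background found correctly

    dice = 2 * tp / (2 * tp + fp + fn + eps)  # overlap between the two masks
    sen = tp / (tp + fn + eps)                # true positive rate
    spe = tn / (tn + fp + eps)                # true negative rate
    mae = np.abs(pred.astype(float) - gt.astype(float)).mean()  # per-pixel error

    return dice, sen, spe, mae
```

In practice these scores would be computed per image (separately for SRF and IRF) and then averaged over the test set; whether the paper averages per image or over pooled pixels is not stated in the abstract.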
RESULTS: Trained on only a handful of labelled OCT images, our model outperformed most cutting-edge segmentation models (Dice: 81.2%, Sen: 87.3%, Spe: 98.8% and MAE: 1.1% for SRF; Dice: 78.0%, Sen: 83.6%, Spe: 99.3% and MAE: 0.5% for IRF). It achieved expert-level performance with only 80 labelled OCT images and even exceeded two out of three ophthalmologists with 160 labelled OCT images. Its satisfactory generalisation capability on an unseen dataset was also demonstrated.
CONCLUSION: The semisupervised Ref-Net required only a few labelled OCT images to achieve outstanding performance in automated segmentation of multitype retinal fluid, and thus has the potential to assist clinicians in the management of ocular disease.