Eye Center, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China.
Wuhan EndoAngel Medical Technology Company, Wuhan, China.
Ultrasound Med Biol. 2024 Aug;50(8):1262-1272. doi: 10.1016/j.ultrasmedbio.2024.05.004. Epub 2024 May 22.
This study aimed to develop and evaluate a deep learning-based model that could automatically measure anterior segment (AS) parameters on preoperative ultrasound biomicroscopy (UBM) images of implantable Collamer lens (ICL) surgery candidates.
A total of 1164 panoramic UBM images were obtained preoperatively from 321 patients who underwent ICL surgery at the Eye Center of Renmin Hospital of Wuhan University (Wuhan, China) to build an imaging database. First, the UNet++ network was used to automatically segment AS tissues such as the cornea, lens, and iris. Next, image processing techniques and geometric localization algorithms were developed to automatically identify the anatomical landmarks (ALs) defining pupil diameter (PD), anterior chamber depth (ACD), angle-to-angle distance (ATA), and sulcus-to-sulcus distance (STS). Based on the results of these two steps, PD, ACD, ATA, and STS were then measured. In addition, an external dataset of 294 images from Huangshi Aier Eye Hospital was used to assess the model's performance at another center. Finally, a random subset of 100 images from the external test set was chosen to compare the model's performance with that of senior experts.
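To illustrate the measurement step, the sketch below shows how the four AS parameters could be derived once the ALs have been localized on a UBM image. This is a minimal, hypothetical example rather than the authors' implementation; the landmark names, coordinates, and pixel-to-millimetre calibration are assumptions for illustration only.

```python
# Minimal sketch (not the study's code) of computing PD, ACD, ATA, and STS as
# Euclidean distances between localized anatomical landmarks on a UBM image.
import numpy as np

def distance_mm(p1, p2, mm_per_pixel):
    """Euclidean distance between two landmark points, converted to millimetres."""
    return float(np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float))) * mm_per_pixel

# Hypothetical landmark coordinates (row, col) produced by the localization step.
landmarks = {
    "pupil_left": (410, 512), "pupil_right": (410, 768),          # pupil margins -> PD
    "cornea_posterior": (260, 640), "lens_anterior": (470, 640),  # endothelium to lens apex -> ACD
    "angle_left": (430, 210), "angle_right": (430, 1070),         # anterior chamber angles -> ATA
    "sulcus_left": (520, 180), "sulcus_right": (520, 1100),       # ciliary sulci -> STS
}
MM_PER_PIXEL = 0.012  # assumed calibration of the panoramic UBM scan

pd_mm  = distance_mm(landmarks["pupil_left"], landmarks["pupil_right"], MM_PER_PIXEL)
acd_mm = distance_mm(landmarks["cornea_posterior"], landmarks["lens_anterior"], MM_PER_PIXEL)
ata_mm = distance_mm(landmarks["angle_left"], landmarks["angle_right"], MM_PER_PIXEL)
sts_mm = distance_mm(landmarks["sulcus_left"], landmarks["sulcus_right"], MM_PER_PIXEL)
print(f"PD={pd_mm:.2f} mm, ACD={acd_mm:.2f} mm, ATA={ata_mm:.2f} mm, STS={sts_mm:.2f} mm")
```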
In both the internal and external test datasets, with manual labeling as the reference standard, the model achieved a mean Dice coefficient exceeding 0.880. The intra-class correlation coefficients (ICCs) of the ALs' coordinates were all greater than 0.947, and more than 95.24% of the Euclidean distances between predicted and reference ALs were within 250 μm. The ICCs for PD, ACD, ATA, and STS were all greater than 0.957, and the average relative errors (AREs) of PD, ACD, ATA, and STS were below 2.41%. In the human-versus-machine comparison, the ICCs between the measurements performed by the model and those by senior experts were all greater than 0.931.
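The reported agreement measures are standard; a minimal sketch of how such metrics are commonly computed is given below. It is an assumption of the usual definitions (Dice coefficient, Euclidean landmark error, average relative error), not the study's evaluation code, and the toy numbers are purely illustrative.

```python
# Sketch of common agreement metrics: Dice for segmentation overlap, Euclidean
# landmark error in micrometres, and average relative error (ARE) of a parameter.
import numpy as np

def dice_coefficient(pred_mask, ref_mask):
    """Dice = 2*|A∩B| / (|A| + |B|) for two binary masks."""
    pred, ref = pred_mask.astype(bool), ref_mask.astype(bool)
    denom = pred.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, ref).sum() / denom

def landmark_error_um(pred_xy, ref_xy, um_per_pixel):
    """Euclidean distance between predicted and reference landmark, in micrometres."""
    return float(np.linalg.norm(np.asarray(pred_xy, float) - np.asarray(ref_xy, float))) * um_per_pixel

def average_relative_error(pred_values, ref_values):
    """Mean of |pred - ref| / ref over a set of measurements, as a percentage."""
    pred, ref = np.asarray(pred_values, float), np.asarray(ref_values, float)
    return float(np.mean(np.abs(pred - ref) / ref)) * 100.0

# Toy example with made-up numbers, purely for illustration.
pred_mask = np.zeros((64, 64), bool); pred_mask[20:40, 20:40] = True
ref_mask  = np.zeros((64, 64), bool); ref_mask[22:42, 20:40] = True
print(f"Dice: {dice_coefficient(pred_mask, ref_mask):.3f}")
print(f"Landmark error: {landmark_error_um((120, 300), (123, 298), um_per_pixel=12.0):.1f} um")
print(f"ARE: {average_relative_error([11.2, 3.05], [11.0, 3.10]):.2f}%")
```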
A deep learning-based model can measure AS parameters on UBM images of ICL candidates, with performance comparable to that of senior ophthalmologists.