Yoshitsugu Kenji, Shimizu Eisuke, Nishimura Hiroki, Khemlani Rohan, Nakayama Shintaro, Takemura Tadamasa
Graduate School of Information Science, University of Hyogo, Kobe Information Science Campus, Kobe 6500047, Japan.
OUI Inc., Tokyo 1070062, Japan.
Bioengineering (Basel). 2024 Mar 12;11(3):273. doi: 10.3390/bioengineering11030273.
Ophthalmological services face global inadequacies, especially in low- and middle-income countries, which are marked by a shortage of practitioners and equipment. This study employed a portable slit lamp microscope with video capabilities and cloud storage to support a more equitable global distribution of diagnostic resources. To enhance accessibility and quality of care, this study targets corneal opacity, a leading global cause of blindness. The study has two purposes: first, to detect corneal opacity from videos in which the anterior segment of the eye is captured; second, to develop an AI pipeline for detecting corneal opacities. First, we extracted image frames from videos and processed them using a convolutional neural network (CNN) model. Second, we manually annotated the images to extract only the corneal margins, adjusted the contrast with contrast-limited adaptive histogram equalization (CLAHE), and processed them using the CNN model. Finally, we performed semantic segmentation of the cornea using the annotated data. The results showed an accuracy of 0.8 for raw image frames and 0.96 for extracted corneal margins. Both the Dice coefficient and intersection over union (IoU) reached 0.94 for semantic segmentation of the corneal margins. Although corneal opacity detection from raw video frames seemed challenging in the early stages of this study, manual annotation, corneal extraction, and CLAHE contrast adjustment significantly improved accuracy. Incorporating manual annotation into the AI pipeline, through semantic segmentation, enabled high accuracy in detecting corneal opacity.
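The segmentation quality metrics reported above (Dice and IoU) are standard overlap measures between a predicted mask and a ground-truth mask. As a minimal sketch of how these scores are computed from binary masks (a generic NumPy illustration, not the authors' implementation):

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total else 1.0

def iou_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union: |A∩B| / |A∪B| for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

# Toy example: 4x4 masks that differ in a single pixel.
gt = np.array([[1, 1, 0, 0],
               [1, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
print(round(dice_score(pred, gt), 3))  # 0.857 (= 2*3 / (3 + 4))
print(round(iou_score(pred, gt), 3))   # 0.75  (= 3 / 4)
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is consistent with both metrics reaching 0.94 only when the predicted corneal boundary closely matches the annotation.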