Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China.
School of Electronics Engineering, Xi'an University of Posts and Telecommunications, Xi'an, 710121, China.
Comput Methods Programs Biomed. 2021 May;203:106048. doi: 10.1016/j.cmpb.2021.106048. Epub 2021 Mar 17.
Previous studies developed artificial intelligence (AI) diagnostic systems for detecting corneal diseases using only eligible slit-lamp images. However, images of ineligible quality (including poor-field, defocused, and poor-location images), which are inevitable in the real world, can cause diagnostic information loss and thus affect downstream AI-based image analysis. Manual evaluation of the eligibility of slit-lamp images often requires an ophthalmologist, and this procedure can be time-consuming and labor-intensive when applied on a large scale. Here, we aimed to develop a deep learning-based image quality control system (DLIQCS) to automatically detect and filter out ineligible slit-lamp images (poor-field, defocused, and poor-location images).
We developed and externally evaluated the DLIQCS based on 48,530 slit-lamp images (19,890 individuals) derived from 4 independent institutions using different types of digital slit-lamp cameras. To find the best deep learning model for the DLIQCS, we trained models with 3 algorithms (AlexNet, DenseNet121, and InceptionV3). The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy were used to assess the performance of each algorithm for the classification of poor-field, defocused, poor-location, and eligible images.
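The per-class AUC used to compare the three architectures is the one-vs-rest AUC: for each of the four categories, the class is treated as positive and the remaining classes as negative. A minimal sketch of this computation is shown below; the label encoding and the probability values are illustrative only and are not taken from the study's data.

```python
# Sketch: per-class one-vs-rest AUC, the comparison metric described above.
# All labels and scores below are hypothetical, for illustration only.

def binary_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) formulation."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need both positive and negative examples")
    # Count pairs where the positive outranks the negative (ties count 0.5).
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def one_vs_rest_auc(labels, probs, n_classes):
    """Per-class AUC for a multi-class classifier (one-vs-rest)."""
    return [
        binary_auc([int(y == c) for y in labels], [p[c] for p in probs])
        for c in range(n_classes)
    ]

# Hypothetical 4-class encoding: 0 = poor-field, 1 = defocused,
# 2 = poor-location, 3 = eligible.
labels = [0, 1, 2, 3, 0, 1, 2, 3]
probs = [
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.6, 0.2, 0.1],
    [0.2, 0.1, 0.6, 0.1],
    [0.1, 0.1, 0.1, 0.7],
    [0.5, 0.2, 0.2, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.2, 0.6, 0.1],
    [0.0, 0.1, 0.1, 0.8],
]
aucs = one_vs_rest_auc(labels, probs, 4)
print(aucs)  # perfect separation in this toy example → [1.0, 1.0, 1.0, 1.0]
```

In practice a library implementation (e.g. scikit-learn's `roc_auc_score`) would be used; the rank-sum form above is shown only to make the metric's definition concrete.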
In an internal test dataset, the best algorithm, DenseNet121, had AUCs of 0.999, 1.000, 1.000, and 1.000 in the detection of poor-field, defocused, poor-location, and eligible images, respectively. In external test datasets, the AUCs of DenseNet121 were 0.997 for poor-field images, 0.983 to 0.995 for defocused images, 0.995 to 0.998 for poor-location images, and 0.999 for eligible images.
Our DLIQCS can accurately detect poor-field, defocused, poor-location, and eligible slit-lamp images in an automated fashion. This system may serve as a prescreening tool to filter out ineligible images and ensure that only eligible images are transferred to subsequent AI diagnostic systems.