Tang Ningning, Chen Qi, Meng Yunyu, Lei Daizai, Jiang Li, Qin Yikun, Huang Xiaojia, Tang Fen, Huang Shanshan, Lan Qianqian, Chen Qi, Huang Lijie, Lan Rushi, Pan Xipeng, Wang Huadeng, Xu Fan, He Wenjing
Guangxi Key Laboratory of Eye Health and Guangxi Health Commission Key Laboratory of Ophthalmology and Related Systemic Diseases Artificial Intelligence Screening Technology and Research Center of Ophthalmology, Guangxi Academy of Medical Sciences and Department of Ophthalmology, The People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, China.
Information and Technology Department, Guangxi Beibu Gulf Bank Co., Ltd., Nanning, China.
Front Bioeng Biotechnol. 2025 Jun 6;13:1576513. doi: 10.3389/fbioe.2025.1576513. eCollection 2025.
In vivo confocal microscopy (IVCM) is a crucial imaging modality for assessing corneal diseases, yet distinguishing pathological features from normal variations remains challenging due to the complex multi-layered corneal structure. Existing anomaly detection methods often struggle to generalize across diverse disease manifestations. To address these limitations, we propose a Transformer-based unsupervised anomaly detection method for IVCM images, capable of identifying corneal abnormalities without prior knowledge of specific disease features.
Our method consists of three submodules: an EfficientNet backbone, a Multi-Scale Feature Fusion Network, and a Transformer Network. A total of 7,063 IVCM images (95 eyes) were included for analysis. The model was trained exclusively on normal IVCM images to capture and differentiate structural variations across four distinct corneal layers: epithelium, sub-basal nerve plexus, stroma, and endothelium. During inference, anomaly scores were computed to distinguish pathological from normal images. The model's performance was evaluated on both internal and external datasets, and comparative analyses were conducted against existing anomaly detection methods, including the anomaly detection generative adversarial network (AnoGAN), the generate-to-detect anomaly model (G2D), and the discriminatively trained reconstruction anomaly embedding model (DRAEM). Additionally, explainable anomaly maps were generated to enhance the interpretability of model decisions.
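The three-submodule pipeline can be illustrated with a hedged PyTorch sketch. All layer sizes, the stand-in backbone and fusion layers, and the feature-reconstruction anomaly score below are assumptions for illustration, not the authors' published architecture:

```python
# Minimal sketch of a backbone -> fusion -> Transformer anomaly scorer.
# The real model uses EfficientNet and a multi-scale fusion network; here
# both are replaced by single convolutions to keep the example runnable.
import torch
import torch.nn as nn

class AnomalyDetector(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        # Stand-in for the EfficientNet feature extractor (assumption).
        self.backbone = nn.Conv2d(1, dim, kernel_size=4, stride=4)
        # Stand-in for the multi-scale feature fusion network (assumption).
        self.fuse = nn.Conv2d(dim, dim, kernel_size=1)
        # Transformer encoder over flattened spatial feature tokens.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, dim)

    def forward(self, x):
        f = self.fuse(self.backbone(x))        # (B, dim, H', W')
        tokens = f.flatten(2).transpose(1, 2)  # (B, H'*W', dim)
        recon = self.head(self.transformer(tokens))
        # Per-image anomaly score: mean feature-reconstruction error.
        # Trained only on normal images, such a model reconstructs normal
        # features well, so pathological inputs yield higher scores.
        return (recon - tokens).pow(2).mean(dim=(1, 2))

model = AnomalyDetector()
scores = model(torch.randn(2, 1, 64, 64))  # two toy grayscale images
print(scores.shape)  # one score per image
```

At inference, thresholding these per-image scores separates normal from pathological images, and per-token errors can be reshaped into a spatial anomaly map for interpretability.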
The proposed method achieved an area under the receiver operating characteristic curve (AUC) of 0.933 on internal validation and 0.917 on an external test dataset, outperforming AnoGAN, G2D, and DRAEM in both accuracy and generalizability. The model effectively distinguished normal and pathological images, demonstrating statistically significant differences in anomaly scores (p < 0.001). Furthermore, visualization results indicated that the detected anomalous regions corresponded to morphological deviations, highlighting potential imaging biomarkers for corneal diseases.
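The reported AUC can be computed directly from per-image anomaly scores via the rank-based (Mann-Whitney) estimator: the probability that a randomly chosen pathological image receives a higher score than a randomly chosen normal one. A minimal sketch with made-up scores (not the study's data):

```python
import numpy as np

def auc(scores_normal, scores_abnormal):
    """AUC = P(abnormal score > normal score), Mann-Whitney estimator."""
    s0 = np.asarray(scores_normal, dtype=float)
    s1 = np.asarray(scores_abnormal, dtype=float)
    # Compare every abnormal score against every normal score.
    greater = (s1[:, None] > s0[None, :]).sum()
    ties = (s1[:, None] == s0[None, :]).sum()
    return (greater + 0.5 * ties) / (len(s0) * len(s1))

# Toy example: abnormal images mostly score higher than normal ones.
normal = [0.1, 0.2, 0.15, 0.3]
abnormal = [0.5, 0.4, 0.25, 0.7]
print(auc(normal, abnormal))  # 0.9375 (15 of 16 pairs correctly ordered)
```

A perfect separation of the two score distributions gives an AUC of 1.0; values such as 0.933 indicate strong but imperfect separation.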
This study presents an efficient and interpretable unsupervised anomaly detection model for IVCM images, effectively identifying corneal abnormalities without requiring labeled pathological samples. The proposed method enhances screening efficiency, reduces annotation costs, and holds great potential for scalable intelligent diagnosis of corneal diseases.