School of Information Engineering, Huzhou University, Huzhou 313000, China.
Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou 313000, China.
J Healthc Eng. 2022 Nov 21;2022:3942110. doi: 10.1155/2022/3942110. eCollection 2022.
A two-category (classification) model and a segmentation model of pterygium were proposed to assist ophthalmologists in diagnosing ophthalmic diseases. A total of 367 normal anterior segment images and 367 pterygium anterior segment images were collected at the Affiliated Eye Hospital of Nanjing Medical University. AlexNet, VGG16, ResNet18, and ResNet50 models were used to train the two-category pterygium models, and 150 normal and 150 pterygium anterior segment images were used to test them. The main evaluation indicators, including sensitivity, specificity, area under the curve (AUC), kappa value, and receiver operating characteristic (ROC) curves, were compared across the four models. Simultaneously, the 367 pterygium anterior segment images were used to train two improved pterygium segmentation models based on PSPNet. A total of 150 pterygium images were used to test these models, and the results were compared with those of four other segmentation models. The main evaluation indicators were mean intersection over union (MIOU), IOU, mean pixel accuracy (MPA), and pixel accuracy (PA). Among the two-category pterygium models, the VGG16 model achieved the best diagnostic results: its diagnostic accuracy, kappa value, diagnostic sensitivity for pterygium, diagnostic specificity for pterygium, and F1-score were 99%, 98%, 98.67%, 99.33%, and 99%, respectively. Among the pterygium segmentation models, the double phase-fusion PSPNet model performed best, with MIOU, IOU, MPA, and PA of 86.57%, 78.1%, 92.3%, and 86.96%, respectively. This study designed a two-category pterygium model and a pterygium segmentation model for images of the normal and pterygium anterior segment, which could help patients self-screen easily and assist ophthalmologists in diagnosing ophthalmic diseases and delineating the actual scope of surgery.
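The classification metrics reported above (accuracy, sensitivity, specificity, Cohen's kappa) all derive from a 2×2 confusion matrix over the test set. The sketch below shows the standard formulas; the function name is ours, and the example counts are hypothetical (chosen only to be consistent with the reported 99% / 98.67% / 99.33% / 98% figures), not the authors' actual data.

```python
# Sketch: two-category evaluation metrics from a 2x2 confusion matrix.
# tp/fn count the pterygium (positive) class, fp/tn the normal class.

def binary_metrics(tp, fn, fp, tn):
    """Return accuracy, sensitivity, specificity, and Cohen's kappa."""
    n = tp + fn + fp + tn
    accuracy = (tp + tn) / n
    sensitivity = tp / (tp + fn)      # recall on the pterygium class
    specificity = tn / (tn + fp)      # recall on the normal class
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_o = accuracy
    p_e = ((tp + fn) / n) * ((tp + fp) / n) + ((fp + tn) / n) * ((fn + tn) / n)
    kappa = (p_o - p_e) / (1 - p_e)
    return accuracy, sensitivity, specificity, kappa

# Hypothetical counts: 148/150 pterygium and 149/150 normal images correct
acc, sen, spe, kap = binary_metrics(tp=148, fn=2, fp=1, tn=149)
print(f"accuracy={acc:.4f} sensitivity={sen:.4f} "
      f"specificity={spe:.4f} kappa={kap:.4f}")
```

With a balanced 150/150 test set, chance agreement is 0.5, so kappa is simply twice the margin of accuracy over 50%, which is why 99% accuracy pairs with a kappa of 98%.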
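The segmentation metrics (PA, MPA, per-class IoU, MIOU) are likewise computed from a pixel-level confusion matrix. A minimal sketch, assuming the usual definitions from the semantic-segmentation literature; the two-class confusion matrix below is illustrative, not the paper's data.

```python
import numpy as np

def segmentation_metrics(conf):
    """Metrics from a confusion matrix where conf[i, j] counts pixels
    of true class i predicted as class j."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)
    pa = tp.sum() / conf.sum()             # pixel accuracy (PA)
    mpa = (tp / conf.sum(axis=1)).mean()   # mean pixel accuracy (MPA)
    # per-class IoU: intersection / (ground truth + prediction - intersection)
    union = conf.sum(axis=1) + conf.sum(axis=0) - tp
    iou = tp / union
    miou = iou.mean()                      # mean IoU (MIOU)
    return pa, mpa, iou, miou

# Illustrative matrix: class 0 = background, class 1 = pterygium
conf = [[900, 50],
        [30, 120]]
pa, mpa, iou, miou = segmentation_metrics(conf)
print(f"PA={pa:.4f} MPA={mpa:.4f} IoU={iou.round(4)} MIOU={miou:.4f}")
```

Because the background class usually dominates and scores higher than the lesion class, MIOU (the mean over classes) sits between the background IoU and the pterygium IoU, which matches the paper's pattern of MIOU (86.57%) exceeding the single-class IOU (78.1%).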