

Keypoint localization and parameter measurement in ultrasound biomicroscopy anterior segment images based on deep learning.

Author Information

Qinghao Miao, Sheng Zhou, Jun Yang, Xiaochun Wang, Min Zhang

Affiliations

State Key Laboratory of Advanced Medical Materials and Devices, Institute of Biomedical Engineering, Tianjin Institutes of Health Science, Chinese Academy of Medical Science and Peking Union Medical College, No. 236, Baidi Road, Nankai District, Tianjin, 300192, The People's Republic of China.

Tianjin Medical University Eye Hospital, No. 251, Fukang Road, Nankai District, Tianjin, 300384, The People's Republic of China.

Publication Information

Biomed Eng Online. 2025 May 6;24(1):53. doi: 10.1186/s12938-025-01388-3.

Abstract

BACKGROUND

Accurate measurement of anterior segment parameters is crucial for diagnosing and managing ophthalmic conditions, such as glaucoma, cataracts, and refractive errors. However, traditional clinical measurement methods are often time-consuming, labor-intensive, and susceptible to inaccuracies. With the growing potential of artificial intelligence in ophthalmic diagnostics, this study aims to develop and evaluate a deep learning model capable of automatically extracting key points and precisely measuring multiple clinically significant anterior segment parameters from ultrasound biomicroscopy (UBM) images. These parameters include central corneal thickness (CCT), anterior chamber depth (ACD), pupil diameter (PD), angle-to-angle distance (ATA), sulcus-to-sulcus distance (STS), lens thickness (LT), and crystalline lens rise (CLR).

METHODS

A data set of 716 UBM anterior segment images was collected from Tianjin Medical University Eye Hospital. YOLOv8 was utilized to segment four key anatomical structures: cornea-sclera, anterior chamber, pupil, and iris-ciliary body, thereby enhancing the accuracy of keypoint localization. Only images with an intact posterior lens capsule were selected to create an effective data set for parameter measurement. Ten keypoints were localized across the data set, allowing the calculation of seven essential parameters. Control experiments were conducted to evaluate the impact of segmentation on measurement accuracy, with model predictions compared against clinical gold standards.
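The measurement step described above reduces to geometric distances between the ten localized keypoints. The sketch below illustrates this mapping; the keypoint names and the pixel spacing are illustrative assumptions, not values from the paper (UBM calibration is device-specific):

```python
import math

# Assumed pixel spacing in micrometres per pixel (illustrative only).
UM_PER_PIXEL = 25.0

def distance_um(p, q, um_per_pixel=UM_PER_PIXEL):
    """Euclidean distance between two (x, y) keypoints, in micrometres."""
    return math.hypot(p[0] - q[0], p[1] - q[1]) * um_per_pixel

def clr_um(kp, um_per_pixel=UM_PER_PIXEL):
    """Crystalline lens rise (CLR): perpendicular distance from the anterior
    lens pole to the line joining the two angle recess keypoints."""
    (x1, y1), (x2, y2) = kp["angle_left"], kp["angle_right"]
    px, py = kp["lens_anterior"]
    num = abs((y2 - y1) * px - (x2 - x1) * py + x2 * y1 - y2 * x1)
    return num / math.hypot(x2 - x1, y2 - y1) * um_per_pixel

def measure_parameters(kp):
    """Map a dict of ten keypoints to the seven anterior segment parameters.
    Keypoint names are hypothetical placeholders, not the paper's labels."""
    return {
        "CCT": distance_um(kp["cornea_anterior"], kp["cornea_posterior"]),
        "ACD": distance_um(kp["cornea_posterior"], kp["lens_anterior"]),
        "PD":  distance_um(kp["pupil_left"], kp["pupil_right"]),
        "ATA": distance_um(kp["angle_left"], kp["angle_right"]),
        "STS": distance_um(kp["sulcus_left"], kp["sulcus_right"]),
        "LT":  distance_um(kp["lens_anterior"], kp["lens_posterior"]),
        "CLR": clr_um(kp),
    }
```

Six of the seven parameters are simple point-to-point distances; only CLR requires a point-to-line distance, which is why it gets its own helper.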

RESULTS

The segmentation model achieved a mean IoU of 0.8836 and mPA of 0.9795. Following segmentation, the binary classification model attained an mAP of 0.9719, with a precision of 0.9260 and a recall of 0.9615. Keypoint localization exhibited a Euclidean distance error of 58.73 ± 63.04 μm, improving from the pre-segmentation error of 71.57 ± 67.36 μm. Localization mAP was 0.9826, with a precision of 0.9699, a recall of 0.9642 and an FPS of 32.64. In addition, parameter error analysis and Bland-Altman plots demonstrated improved agreement with clinical gold standards after segmentation.
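For reference, two of the evaluation metrics reported above can be computed as follows. This is a minimal sketch of the standard definitions (per-class intersection-over-union and Bland-Altman 95% limits of agreement); the paper's actual evaluation code is not available:

```python
import statistics

def iou(pred, target):
    """Intersection-over-union for one class, given flat 0/1 pixel masks."""
    inter = sum(1 for p, t in zip(pred, target) if p and t)
    union = sum(1 for p, t in zip(pred, target) if p or t)
    return inter / union if union else 1.0

def bland_altman_limits(model_vals, reference_vals):
    """Bias (mean difference) and 95% limits of agreement between paired
    measurements, as plotted in a Bland-Altman analysis."""
    diffs = [m - r for m, r in zip(model_vals, reference_vals)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Mean IoU averages `iou` over the four segmented classes; narrower limits of agreement after segmentation correspond to the improved clinical agreement the authors report.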

CONCLUSIONS

This deep learning approach for UBM image segmentation, keypoint localization, and parameter measurement is feasible, enhancing clinical diagnostic efficiency for anterior segment parameters.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7fea/12056989/25f54bd2e132/12938_2025_1388_Fig1_HTML.jpg
