Wang Ganhong, Yin Limei, Zhang Hanyue, Xia Kaijian, Su Yue, Chen Jian
Department of Gastroenterology, Changshu Hospital Affiliated to Nanjing University of Chinese Medicine, Suzhou, China.
Department of Nursing, Changshu Hospital Affiliated to Nanjing University of Chinese Medicine, Suzhou, China.
Front Physiol. 2025 Jul 10;16:1629238. doi: 10.3389/fphys.2025.1629238. eCollection 2025.
This study aimed to develop an artificial intelligence model based on the YOLOv11 neural network, together with a web-based application, for the automatic detection of 21 commonly used auricular acupoints.
A total of 660 human ear images were collected from three medical centers. The LabelMe annotation tool was used to label the images with bounding boxes and keypoints, and the annotations were then converted into a format compatible with the YOLO model. Using this dataset, transfer learning and fine-tuning were performed on different sizes of the YOLOv11 neural network. Model performance was evaluated on the validation and test sets using metrics such as mean average precision (mAP) at different IoU thresholds, recall, and detection speed. The best-performing model was then deployed as a web application using the Streamlit library in Python.
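As a sketch of the annotation-conversion step: the snippet below turns one LabelMe JSON file (an ear bounding box plus 21 point shapes) into a YOLO pose label line. The acupoint label names, directory layout, and visibility convention are illustrative assumptions, not the paper's released code.

```python
import json
from pathlib import Path

# Hypothetical label names for the 21 acupoints; the paper's actual LabelMe
# label strings are not given, so this list is a placeholder.
ACUPOINTS = [f"acupoint_{i}" for i in range(1, 22)]

def labelme_to_yolo_pose(json_path: Path, out_dir: Path) -> None:
    data = json.loads(json_path.read_text(encoding="utf-8"))
    w, h = data["imageWidth"], data["imageHeight"]

    box, kpts = None, {}
    for shape in data["shapes"]:
        if shape["shape_type"] == "rectangle":   # the ear bounding box
            (x1, y1), (x2, y2) = shape["points"]
            box = (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))
        elif shape["shape_type"] == "point":     # one acupoint keypoint
            kpts[shape["label"]] = shape["points"][0]

    if box is None:
        return  # skip files without an annotated ear

    x1, y1, x2, y2 = box
    # YOLO expects a normalized (class, cx, cy, w, h) box ...
    parts = ["0",
             f"{(x1 + x2) / 2 / w:.6f}", f"{(y1 + y2) / 2 / h:.6f}",
             f"{(x2 - x1) / w:.6f}", f"{(y2 - y1) / h:.6f}"]
    # ... followed by 21 (x, y, visibility) triplets in a fixed order;
    # "0 0 0" marks a point that was not annotated.
    for name in ACUPOINTS:
        if name in kpts:
            px, py = kpts[name]
            parts += [f"{px / w:.6f}", f"{py / h:.6f}", "2"]
        else:
            parts += ["0", "0", "0"]

    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / f"{json_path.stem}.txt").write_text(" ".join(parts) + "\n")

for jp in Path("labelme_annotations").glob("*.json"):
    labelme_to_yolo_pose(jp, Path("labels/train"))
```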
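Transfer learning across the five model sizes could then follow the standard Ultralytics workflow; the dataset YAML and hyperparameters below are assumptions rather than the paper's exact configuration.

```python
# A minimal fine-tuning sketch with the Ultralytics API (pip install ultralytics).
from ultralytics import YOLO

# data.yaml (illustrative):
#   path: datasets/ear21
#   train: images/train
#   val: images/val
#   kpt_shape: [21, 3]   # 21 keypoints, each stored as (x, y, visibility)
#   names: {0: ear}

for size in ("n", "s", "m", "l", "x"):
    # Start from COCO-pretrained pose weights and fine-tune on the ear dataset.
    model = YOLO(f"yolo11{size}-pose.pt")
    model.train(data="data.yaml", epochs=100, imgsz=640, name=f"ear21_{size}")
```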
Five versions of the YOLOv11 keypoint detection model were developed, namely YOLOv11n, YOLOv11s, YOLOv11m, YOLOv11l, and YOLOv11x. Among them, YOLOv11x achieved the highest performance on the validation set, with a precision of 0.991, recall of 0.976, mAP@0.5 of 0.983, and mAP@0.5:0.95 of 0.625, though it also exhibited the longest inference latency (19 ms/img). On the external test set, YOLOv11x achieved an ear recognition accuracy of 0.996, sensitivity of 0.996, and an F1-score of 0.998. For auricular acupoint localization, the model achieved an mAP of 0.982, precision of 0.975, and recall of 0.976. The model was successfully deployed as a web application, accessible on both mobile and desktop platforms to accommodate diverse user needs.
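For context, the reported detection and localization metrics map onto what an Ultralytics pose validation run exposes; the weights path below is a hypothetical placeholder.

```python
from ultralytics import YOLO

model = YOLO("runs/pose/ear21_x/weights/best.pt")  # hypothetical path to the YOLOv11x run
metrics = model.val(data="data.yaml")

print("ear box mAP@0.5:      ", metrics.box.map50)   # ear detection
print("keypoint mAP@0.5:     ", metrics.pose.map50)  # acupoint localization
print("keypoint mAP@0.5:0.95:", metrics.pose.map)
print("inference ms/img:     ", metrics.speed["inference"])
```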
The YoloEar21 web application, built on YOLOv11x and Streamlit, demonstrates strong recognition performance and user-friendly accessibility. By automatically identifying 21 commonly used auricular acupoints across varied scenarios and user groups, it shows promising potential for clinical application.
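A minimal sketch of the kind of Streamlit front end described, assuming the trained YOLOv11x weights are available locally; the app title, weights path, and layout are illustrative.

```python
import streamlit as st
from PIL import Image
from ultralytics import YOLO

st.title("YoloEar21: auricular acupoint detection")

@st.cache_resource  # load the model once per server process
def load_model():
    return YOLO("weights/yoloear21_x.pt")  # hypothetical weights path

model = load_model()
uploaded = st.file_uploader("Upload an ear image", type=["jpg", "jpeg", "png"])
if uploaded is not None:
    image = Image.open(uploaded).convert("RGB")
    result = model.predict(image)[0]
    # result.plot() draws the ear box and 21 keypoints; it returns a BGR array.
    st.image(result.plot(), channels="BGR", caption="Detected acupoints")
```

Because Streamlit serves a responsive web page, one script of this shape covers both mobile and desktop access, consistent with the deployment described above.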