Miller Mattea E, Witte Dan, Lina Ioan, Walsh Jonathan, Rameau Anaïs, Bhatti Nasir I
Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins School of Medicine, Baltimore, Maryland, USA.
Perceptron Health, Inc, New York, New York, U.S.A.
Laryngoscope. 2025 Mar;135(3):1046-1053. doi: 10.1002/lary.31812. Epub 2024 Oct 3.
Here we describe the development and pilot testing of the first artificial intelligence (AI) software "copilot" designed to help train novices to competently perform flexible fiberoptic laryngoscopy (FFL) on a manikin and to improve their uptake of FFL skills.
Supervised machine learning was used to develop two models: an image classifier, dubbed the "anatomical region classifier," responsible for predicting the location of the camera in the upper aerodigestive tract, and an object detection model, dubbed the "anatomical structure detector," responsible for locating and identifying key anatomical structures in images. Training data were collected by performing FFL on an AirSim Combo Bronchi X manikin (TruCorp Ltd, United Kingdom) using an Ambu aScope 4 RhinoLaryngo Slim connected to an Ambu® aView™ 2 Advance Displaying Unit (Ambu A/S, Ballerup, Denmark). Medical students were prospectively recruited to try the FFL copilot, to rate its ease of use, and to self-rate their FFL skills with and without it.
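The two-model design described above can be sketched as a per-frame pipeline: first the region classifier estimates where the camera is, then the structure detector reports what is visible. The label sets, thresholds, and stub predictors below are hypothetical illustrations, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical label sets; the paper's actual categories may differ.
REGIONS = ["nasal_cavity", "nasopharynx", "oropharynx", "larynx"]
STRUCTURES = ["epiglottis", "vocal_folds", "arytenoids"]

@dataclass
class Detection:
    label: str
    confidence: float
    box: Tuple[int, int, int, int]  # x, y, width, height in pixels

def classify_region(frame) -> str:
    """Stub for the 'anatomical region classifier' (an image classifier).
    A real model would run CNN inference on the frame."""
    return REGIONS[hash(frame) % len(REGIONS)]

def detect_structures(frame) -> List[Detection]:
    """Stub for the 'anatomical structure detector' (an object detector)."""
    return [Detection("vocal_folds", 0.9, (100, 80, 60, 40))]

def process_frame(frame, min_conf: float = 0.5):
    """Per-frame copilot step: locate the camera, then find structures,
    keeping only detections above a confidence threshold."""
    region = classify_region(frame)
    detections = [d for d in detect_structures(frame) if d.confidence >= min_conf]
    return region, detections
```

Splitting coarse localization (classification) from fine structure identification (detection) lets each model be trained and evaluated on its own labeled data, as the abstract's separate accuracy and mean-average-precision figures suggest.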
The model classified anatomical regions with an overall accuracy of 91.9% on the validation set and 80.1% on the test set. The detector located anatomical structures with an overall mean average precision of 0.642. Through various optimizations, the AI copilot ran at approximately 28 frames per second (FPS), which is perceptually indistinguishable from real time and nearly matches the 30 FPS video frame rate. Sixty-four novice medical students were recruited to provide feedback on the copilot. Although 90.9% strongly agreed or agreed that the AI copilot was easy to use, their self-ratings of FFL skills after using the copilot were overall equivalent to their self-ratings without it.
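Running inference at 28 FPS against a 30 FPS video feed implies the copilot must keep pace with the camera or drop late frames rather than fall behind. A minimal sketch of such a rate-matched processing loop, assuming a simple drop-if-late policy (the authors' actual optimizations are not described at this level):

```python
import time

def run_loop(process, frames, target_fps: float = 30.0):
    """Process a stream of frames, dropping any frame that arrives more
    than one frame interval late, so the loop never falls behind the feed.
    Returns (frames_processed, achieved_fps)."""
    interval = 1.0 / target_fps
    processed = 0
    start = time.perf_counter()
    for i, frame in enumerate(frames):
        due = start + i * interval          # when this frame should be handled
        if time.perf_counter() > due + interval:
            continue                        # more than one frame behind: drop it
        process(frame)
        processed += 1
        sleep_for = due + interval - time.perf_counter()
        if sleep_for > 0:
            time.sleep(sleep_for)           # wait for the next frame's slot
    elapsed = time.perf_counter() - start
    return processed, processed / elapsed
```

With per-frame inference cheaper than the frame interval, the achieved rate converges on the target; when inference is slower, dropped frames keep latency bounded at roughly one frame.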
The AI copilot tracked successful capture of diagnosable views of key anatomical structures, effectively guiding users through FFL and ensuring that all key structures were sufficiently captured. This tool has the potential to assist novices in efficiently gaining competence in FFL.
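The capture-tracking behavior can be sketched as a checklist that marks a structure "captured" once it has been detected with high confidence for several consecutive frames, a plausible proxy for a diagnosable view. The checklist contents, thresholds, and streak rule here are assumptions for illustration, not the paper's criteria.

```python
# Hypothetical required-structure checklist for one FFL pass.
REQUIRED = {"epiglottis", "vocal_folds", "arytenoids"}

class CaptureTracker:
    """Marks a structure as captured after it is detected with at least
    min_conf confidence for min_frames consecutive video frames."""

    def __init__(self, required=REQUIRED, min_conf: float = 0.8, min_frames: int = 5):
        self.required = set(required)
        self.min_conf = min_conf
        self.min_frames = min_frames
        self.streak = {s: 0 for s in self.required}  # consecutive-hit counters
        self.captured = set()

    def update(self, detections):
        """detections: iterable of (label, confidence) pairs for one frame."""
        seen = {label for label, conf in detections if conf >= self.min_conf}
        for s in self.required:
            self.streak[s] = self.streak[s] + 1 if s in seen else 0
            if self.streak[s] >= self.min_frames:
                self.captured.add(s)

    def remaining(self):
        """Structures the user still needs to capture."""
        return self.required - self.captured
```

Surfacing `remaining()` to the trainee after each frame is one way such a copilot could steer a novice toward complete anatomical coverage.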
Level of Evidence: NA. Laryngoscope, 135:1046-1053, 2025.