Department of Biomedical Engineering, National Cheng Kung University, Tainan, Taiwan; School of Electrical, Electronics, and Computer Engineering, Mapua University, Manila 1002, Philippines.
Department of Otolaryngology-Head and Neck Surgery, Cardinal Tien Hospital and School of Medicine, Fu Jen Catholic University, New Taipei City, Taiwan; Department of Otolaryngology-Head and Neck Surgery, National Taiwan University Hospital, Taipei, Taiwan.
Ultrasonics. 2024 Jul;141:107320. doi: 10.1016/j.ultras.2024.107320. Epub 2024 Apr 25.
Obstructive sleep apnea (OSA) is a respiratory disorder characterized by recurrent collapse of the upper pharyngeal airway during sleep. Dynamic tongue movement (DTM) analysis is a promising avenue for elucidating the pathophysiological underpinnings of OSA and thereby aiding its diagnosis. Recent work has applied artificial intelligence techniques to classify OSA severity from electrocardiography and blood oxygen saturation data; however, ultrasound (US) imaging of the tongue remains largely untapped in the development of machine learning models for grading OSA severity. This study addresses that gap by capturing US images of DTM during wakefulness, covering the transition from normal breathing (NB) to the Müller maneuver (MM), in a cohort of 53 patients. Using the modified optical flow method (MOFM), the trajectories of each patient's DTM were tracked, facilitating the extraction of 27 parameters for model training: nine-point lateral movement, nine-point axial movement, and nine-point total displacement of the tongue, yielding a dataset of 186,030 samples. The gated recurrent unit (GRU) method, known for its efficacy in motion tracking, was employed for model development, and the resulting model was validated via stratified k-fold cross-validation (SCV). The system's overall performance in classifying OSA severity, quantified as mean accuracy (MA), was 43.49%. This pilot investigation is an exploratory step toward artificial-intelligence classification of OSA severity based on US images and dynamic movement patterns. The model holds potential to assist clinicians in grading OSA severity and in selecting treatment modalities tailored to the individual needs of patients with OSA.
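The abstract does not give the model's internals, but the GRU it names follows a standard gating scheme (Cho et al., 2014): an update gate z and a reset gate r blend the previous hidden state with a candidate state at each time step of the tracked tongue trajectory. A minimal scalar sketch of that recurrence, with hypothetical parameter names (all weights and biases here are illustrative, not the paper's):

```python
import math

def gru_step(x, h, p):
    """One GRU cell step for scalar input x and hidden state h.

    z: update gate, r: reset gate, n: candidate state.
    Convention used here: h' = (1 - z) * h + z * n (Cho et al. form;
    other libraries swap the roles of z and 1 - z).
    """
    sigmoid = lambda a: 1.0 / (1.0 + math.exp(-a))
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])      # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])      # reset gate
    n = math.tanh(p["wn"] * x + p["un"] * (r * h) + p["bn"])  # candidate
    return (1.0 - z) * h + z * n

def gru_run(seq, p, h0=0.0):
    """Fold a sequence (e.g. one displacement trace) through the GRU cell."""
    h = h0
    for x in seq:
        h = gru_step(x, h, p)
    return h
```

In the study's setting, each of the 27 movement parameters would contribute such a sequence, and the final hidden state would feed a severity classifier; this sketch only shows the recurrence itself.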
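The stratified k-fold cross-validation (SCV) used to validate the model splits the data so each fold preserves the class proportions (here, the OSA severity grades). A minimal dependency-free sketch of that splitting logic (the function name and round-robin assignment are illustrative; production code would typically use a library implementation with shuffling):

```python
from collections import defaultdict

def stratified_kfold(labels, k):
    """Yield (train_idx, test_idx) pairs in which each test fold keeps
    the per-class proportions of `labels` as closely as possible."""
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    # Distribute each class's indices round-robin across the k folds.
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    for f in range(k):
        test = sorted(folds[f])
        train = sorted(i for g in range(k) if g != f for i in folds[g])
        yield train, test
```

Reporting the mean accuracy (MA) across the k held-out folds, as the study does, gives a less optimistic estimate than a single split when classes are imbalanced.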