Gao X, Li Y R, Lin G D, Xu M K, Zhang X Q, Shi Y H, Xu W, Wang X J, Han D M
Department of Otorhinolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Key Laboratory of Otorhinolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing 100730, China.
Department of Electronic Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China.
Zhonghua Er Bi Yan Hou Tou Jing Wai Ke Za Zhi. 2021 Dec 7;56(12):1256-1262. doi: 10.3760/cma.j.cn115330-20210513-00267.
To investigate the accuracy of an artificial intelligence sleep staging model in patients with habitual snoring and obstructive sleep apnea hypopnea syndrome (OSAHS), based on single-channel EEG collected from different locations on the head. The clinical data of 114 adults with habitual snoring and OSAHS who visited the Sleep Medicine Center of Beijing Tongren Hospital from September 2020 to March 2021 were analyzed retrospectively, including 93 males and 21 females, aged 20 to 64 years. Eighty-five adults with OSAHS and 29 subjects with habitual snoring were included. Sleep staging analysis was performed on single-lead EEG signals from different locations (Fp2-M1, C4-M1, F3-M2, REOG-M1, O1-M2) using a deep learning segmentation model trained on previous data. Manual scoring results were used as the gold standard to analyze the agreement of the results and the influence of different disease categories. EEG data in 124 747 30-second epochs were taken as the testing dataset. The model accuracy for distinguishing wake/sleep was 92.3%, 92.6%, 93.5%, 89.2% and 83.0%, respectively, based on EEG channels Fp2-M1, C4-M1, F3-M2, REOG-M1 and O1-M2. The model accuracy for distinguishing wake/REM/NREM and wake/REM/N1-2/SWS was 84.7% and 80.1%, respectively, based on channel Fp2-M1, which is located on the forehead. The AHI calculated from the total sleep time derived from the model and from the gold standard was 13.6 [4.30, 42.5] and 14.2 [4.8, 42.7], respectively (Z=-2.477, P=0.013), and the kappa coefficient was 0.977. Automatic sleep staging via a deep neural network model based on forehead single-channel EEG (Fp2-M1) shows good agreement in identifying sleep stages in a population with habitual snoring and OSAHS across different disease categories. The AHI calculated from this model is highly consistent with manual scoring.
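The evaluation metrics reported above (epoch-level agreement with manual scoring, the kappa coefficient, and an AHI derived from total sleep time) can be illustrated with a minimal sketch. This is not the authors' code; the function names and the toy inputs are assumptions for illustration. It assumes the standard definitions: accuracy as the fraction of matching 30-second epochs, Cohen's kappa as chance-corrected agreement, and AHI as respiratory events per hour of sleep.

```python
# Illustrative sketch (not the paper's implementation) of the three
# evaluation quantities: epoch accuracy, Cohen's kappa, and AHI.
from collections import Counter


def epoch_accuracy(model_stages, manual_stages):
    """Fraction of 30-second epochs where model and manual scoring agree."""
    assert len(model_stages) == len(manual_stages)
    agree = sum(m == g for m, g in zip(model_stages, manual_stages))
    return agree / len(manual_stages)


def cohens_kappa(model_stages, manual_stages):
    """Chance-corrected agreement between the two scorers."""
    n = len(manual_stages)
    p_o = epoch_accuracy(model_stages, manual_stages)           # observed agreement
    model_counts = Counter(model_stages)
    manual_counts = Counter(manual_stages)
    # Expected agreement under independence of the two scorers
    p_e = sum(model_counts[s] * manual_counts[s] for s in manual_counts) / (n * n)
    return (p_o - p_e) / (1 - p_e)


def ahi(event_count, total_sleep_epochs):
    """Apnea-hypopnea index: events per hour of sleep (30-s epochs -> hours)."""
    hours = total_sleep_epochs * 30 / 3600
    return event_count / hours
```

For example, four epochs scored `["W", "N2", "N2", "REM"]` by the model against `["W", "N2", "N1", "REM"]` manually give an accuracy of 0.75, and 30 respiratory events over 960 sleep epochs (8 hours) give an AHI of 3.75/h. The paper's finding that AHI agreement stays high (kappa 0.977) follows from the fact that AHI depends on staging only through total sleep time, which the wake/sleep model estimates with over 92% epoch accuracy on the forehead channel.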