Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong 999077, China.
School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan 430074, China.
Sensors (Basel). 2023 Feb 23;23(5):2475. doi: 10.3390/s23052475.
Sleep posture has a crucial impact on the incidence and severity of obstructive sleep apnea (OSA). Therefore, the surveillance and recognition of sleep postures could facilitate the assessment of OSA. Existing contact-based systems might interfere with sleeping, while camera-based systems introduce privacy concerns. Radar-based systems might overcome these challenges, especially when individuals are covered with blankets. The aim of this research is to develop a nonobstructive multiple ultra-wideband radar sleep posture recognition system based on machine learning models. We evaluated three single-radar configurations (top, side, and head), three dual-radar configurations (top + side, top + head, and side + head), and one tri-radar configuration (top + side + head), together with machine learning models including CNN-based networks (ResNet50, DenseNet121, and EfficientNetV2) and vision transformer-based networks (traditional vision transformer and Swin Transformer V2). Thirty participants (n = 30) were invited to perform four recumbent postures (supine, left side-lying, right side-lying, and prone). Data from eighteen participants (n = 18) were randomly chosen for model training, data from another six participants (n = 6) for model validation, and data from the remaining six participants (n = 6) for model testing. The Swin Transformer with the side + head radar configuration achieved the highest prediction accuracy (0.808). Future research may consider the application of the synthetic aperture radar technique.
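As a rough illustration of the subject-wise data split (18 training, 6 validation, 6 test participants) and the four-class Swin Transformer V2 classifier described above, a minimal PyTorch sketch is given below. It assumes the radar returns have been preprocessed into image-like maps; the paper's exact preprocessing, model variant, and hyperparameters are not specified here, so the choices shown (torchvision's swin_v2_t, ImageNet weights) are placeholders.

```python
import random
import torch
from torchvision import models

# Subject-wise split: 30 participants -> 18 train / 6 validation / 6 test,
# so that no participant's data appears in more than one partition.
subject_ids = list(range(30))
random.seed(0)  # fixed seed for a reproducible example
random.shuffle(subject_ids)
train_ids, val_ids, test_ids = subject_ids[:18], subject_ids[18:24], subject_ids[24:]

# Swin Transformer V2 backbone (tiny variant assumed) with a new
# 4-class head for supine, left side-lying, right side-lying, and prone.
model = models.swin_v2_t(weights=models.Swin_V2_T_Weights.IMAGENET1K_V1)
model.head = torch.nn.Linear(model.head.in_features, 4)

# The radar frames would be fed as image-like tensors (batch, 3, H, W);
# a dummy forward pass checks the classifier shape.
dummy = torch.randn(2, 3, 256, 256)
logits = model(dummy)
print(train_ids, val_ids, test_ids)
print(logits.shape)  # -> torch.Size([2, 4])
```

In practice, the dual-radar configurations (e.g., side + head) would require fusing the two radar inputs, for example by channel-wise concatenation or by a two-branch encoder, before the classification head; the abstract does not state which fusion strategy was used.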