Mohammadi Sara Mahvash, Kouchaki Samaneh, Khan Sofia, Dijk Derk-Jan, Hilton Adrian, Wells Kevin
Annu Int Conf IEEE Eng Med Biol Soc. 2019 Jul;2019:3115-3118. doi: 10.1109/EMBC.2019.8856873.
In this study, a novel sleep pose identification method is proposed for classifying 12 different sleep postures using a two-step deep learning process. In the initial stage, transfer learning is used to retrain a well-known CNN (VGG-19) to categorise the data into four main pose classes: supine, left, right, and prone. Based on the decision made by VGG-19, each image is then passed to one of four dedicated sub-class CNNs, refining the pose estimate from one of four sleep pose labels to one of 12. Ten participants contributed infrared (IR) images of 12 pre-defined sleep positions, recorded while the participants were covered by a blanket to occlude the underlying pose and present a more realistic sleep situation. Finally, we compared our results with (1) a traditional CNN trained from scratch and (2) a VGG-19 network retrained in a single stage. The average accuracy increased to 85.6%, from 74.5% for (1) and 78.1% for (2).
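To make the two-stage pipeline concrete, the sketch below shows one possible implementation in PyTorch/torchvision. The framework, the sub-CNN architecture, and the even split of the 12 poses into 3 sub-poses per main class are illustrative assumptions; the abstract specifies only that a retrained VGG-19 routes each IR image to one of four dedicated sub-class CNNs.

# Minimal sketch of the two-stage pose classifier described above, assuming
# PyTorch/torchvision and 3-channel 224x224 inputs (single-channel IR frames
# would be replicated across channels); layer choices are illustrative only.
import torch
import torch.nn as nn
from torchvision import models

NUM_MAIN_POSES = 4      # supine, left, right, prone
SUBPOSES_PER_CLASS = 3  # assumed even split: 12 fine-grained poses / 4 main classes

def build_stage1_vgg19():
    """Stage 1: VGG-19 pretrained on ImageNet, retrained for the 4 main poses."""
    net = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
    # Replace the final fully connected layer (4096 -> 1000) with a 4-way head.
    net.classifier[6] = nn.Linear(4096, NUM_MAIN_POSES)
    return net

def build_stage2_subnet():
    """Stage 2: a small dedicated CNN refining one main pose into its sub-poses."""
    return nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, SUBPOSES_PER_CLASS),
    )

stage1 = build_stage1_vgg19().eval()
stage2 = [build_stage2_subnet().eval() for _ in range(NUM_MAIN_POSES)]

def predict_pose(image: torch.Tensor) -> int:
    """Map one IR image tensor of shape (1, 3, 224, 224) to a label in 0..11."""
    with torch.no_grad():
        main = stage1(image).argmax(dim=1).item()       # coarse label: 0..3
        sub = stage2[main](image).argmax(dim=1).item()  # refinement within that group
    return main * SUBPOSES_PER_CLASS + sub

In practice each sub-class CNN would be trained only on images whose ground-truth main pose falls in its group, mirroring the routing performed at inference time.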