Annu Int Conf IEEE Eng Med Biol Soc. 2021 Nov;2021:6889-6893. doi: 10.1109/EMBC46164.2021.9629886.
Ear-worn devices are rapidly gaining popularity as they provide the means for measuring vital signals in an unobtrusive, 24/7 wearable and discreet fashion. Naturally, these devices are prone to motion artefacts when used in out-of-lab environments, as various movements and activities cause relative motion between the user's skin and the electrodes. Historically, such artefacts have been treated as a nuisance, with the affected signal segments simply discarded. In this work, we instead exploit these artefacts to classify different daily activities, namely sitting, speaking aloud, chewing and walking. To this end, multiple classification techniques are employed to identify these activities using 8 features calculated from the electrode and microphone signals of a generic multimodal in-ear sensor. The results show an overall training accuracy of 93% and 90%, and a testing accuracy of 85% and 80%, when using a KNN classifier and a 2-layer neural network respectively, thus providing a much-needed, simple and reliable framework for real-life human activity classification.
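The sketch below illustrates, under stated assumptions, the kind of pipeline the abstract describes: a KNN classifier and a 2-layer neural network trained on 8 per-segment features for four activity classes. The feature values, hyperparameters (k, hidden-layer sizes) and the scikit-learn tooling are illustrative assumptions and not taken from the paper.

```python
# Minimal sketch of the classification step described in the abstract.
# ASSUMPTIONS: the 8 per-segment features extracted from the in-ear
# electrode and microphone signals are already available as a matrix X
# with activity labels y; placeholder random data stands in for them here.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

ACTIVITIES = ["sitting", "speaking_aloud", "chewing", "walking"]

# Placeholder data: 400 segments x 8 features (replace with real features).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8))
y = rng.integers(0, len(ACTIVITIES), size=400)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# KNN classifier (the value of k is an assumption, not from the paper).
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
knn.fit(X_train, y_train)

# 2-layer neural network; hidden-layer sizes are likewise assumptions.
mlp = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0),
)
mlp.fit(X_train, y_train)

print("KNN train/test accuracy:", knn.score(X_train, y_train), knn.score(X_test, y_test))
print("MLP train/test accuracy:", mlp.score(X_train, y_train), mlp.score(X_test, y_test))
```

With the real feature matrix in place of the random placeholder, the train/test accuracies printed here would correspond to the 93%/85% (KNN) and 90%/80% (2-layer network) figures reported in the abstract.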