IEEE J Biomed Health Inform. 2021 Jul;25(7):2733-2743. doi: 10.1109/JBHI.2020.3046613. Epub 2021 Jul 27.
Accurate detection of individual intake gestures is a key step towards automatic dietary monitoring. Both inertial sensor data of wrist movements and video data depicting the upper body have been used for this purpose. The most advanced methods to date are two-stage approaches, in which (i) frame-level intake probabilities are learned from the sensor data using a deep neural network, and then (ii) sparse intake events are detected by finding the maxima of the frame-level probabilities. In this study, we propose a single-stage approach which directly decodes the probabilities learned from sensor data into sparse intake detections. This is achieved by weakly supervised training using Connectionist Temporal Classification (CTC) loss and decoding with a novel extended prefix beam search algorithm. Benefits of this approach include (i) end-to-end training for detections, (ii) simplified timing requirements for intake gesture labels, and (iii) improved detection performance compared to existing approaches. Across two separate datasets, we achieve relative F-score improvements between 1.9% and 6.2% over the two-stage approach on intake detection and eating/drinking detection tasks, for both video and inertial sensors.
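The contrast between the two paradigms can be sketched in a few lines of Python. This is an illustrative simplification, not the paper's implementation: the two-stage baseline is reduced to thresholded peak-picking over frame-level probabilities, and the single-stage route is shown with standard greedy CTC decoding rather than the proposed extended prefix beam search. All thresholds, label indices, and function names here are hypothetical.

```python
# Illustrative sketch (not the paper's code). Label convention assumed
# here: 0 = blank / non-intake, 1 = intake gesture.

def two_stage_detect(intake_probs, threshold=0.5, min_gap=10):
    """Two-stage baseline, stage (ii): detect sparse intake events as
    local maxima of the frame-level probabilities from stage (i),
    keeping only peaks above a threshold and at least min_gap frames
    apart. Threshold and gap values are illustrative."""
    events = []
    for t, p in enumerate(intake_probs):
        if p < threshold:
            continue
        left = intake_probs[t - 1] if t > 0 else -1.0
        right = intake_probs[t + 1] if t + 1 < len(intake_probs) else -1.0
        # strict local-maximum test, plus a minimum spacing constraint
        if p >= left and p > right and (not events or t - events[-1] >= min_gap):
            events.append(t)
    return events


def ctc_greedy_decode(frame_probs, blank=0):
    """Single-stage idea with the simplest CTC decoder: take the argmax
    label per frame, collapse consecutive repeats, and drop blanks, so
    the network's frame-level output is decoded directly into a sparse
    sequence of detections. (The paper instead uses a novel extended
    prefix beam search over the same CTC label topology.)"""
    path = [max(range(len(p)), key=p.__getitem__) for p in frame_probs]
    labels, prev = [], None
    for s in path:
        if s != prev and s != blank:
            labels.append(s)
        prev = s
    return labels
```

For example, with per-frame rows `[P(blank), P(intake)]`, two separated high-probability runs such as `[[0.9, 0.1], [0.2, 0.8], [0.3, 0.7], [0.9, 0.1], [0.1, 0.9]]` decode to two intake detections, `[1, 1]`.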