ML-Based Edge Node for Monitoring Peoples' Frailty Status.

Affiliation

Department of Information Engineering, Università Politecnica delle Marche, via Brecce Bianche 12, 60131 Ancona, Italy.

Publication Information

Sensors (Basel). 2024 Jul 5;24(13):4386. doi: 10.3390/s24134386.

Abstract

The development of contactless methods to assess the degree of personal hygiene in elderly people is crucial for detecting frailty and providing early intervention to prevent complete loss of autonomy, cognitive impairment, and hospitalisation. The unobtrusive nature of the technology is essential in the context of maintaining good quality of life. The use of cameras and edge computing with sensors provides a way of monitoring subjects without interrupting their normal routines, and has the advantages of local data processing and improved privacy. This work describes the development of an intelligent system that takes the RGB frames of a video as input to classify the occurrence of brushing teeth, washing hands, and fixing hair; a no-action class is also considered. The RGB frames are first processed by two Mediapipe algorithms to extract body keypoints related to the pose and hands, which represent the features to be classified. The optimal feature extractor combines the most complex Mediapipe pose estimator with the most complex hand keypoint regressor, and achieves the best performance even when operating at one frame per second. The final classifier is a Light Gradient Boosting Machine (LightGBM) classifier that achieves a weighted F1-score above 94% at one frame per second with observation times of seven seconds or more. When the observation window is enlarged to ten seconds, the per-class F1-scores range between 94.66% and 96.35%.
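
The pipeline described in the abstract (MediaPipe pose and hand keypoint extraction feeding a LightGBM classifier over one-frame-per-second observation windows) can be sketched in Python. The sketch below is an assumption-laden illustration, not the authors' implementation: it uses the standard mediapipe "solutions" API and the lightgbm scikit-learn wrapper, while the function names frame_features and window_features, the feature layout, and the class labels are hypothetical.

# Minimal sketch, not the authors' code: extract MediaPipe pose and hand
# keypoints from RGB frames sampled at ~1 fps, flatten them into one feature
# vector per observation window, and classify the window with LightGBM.

import cv2
import numpy as np
import mediapipe as mp
from lightgbm import LGBMClassifier

# "Most complex" models referred to in the abstract: pose model_complexity=2,
# hands model_complexity=1.
pose = mp.solutions.pose.Pose(static_image_mode=False, model_complexity=2)
hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=2,
                                 model_complexity=1)

def frame_features(rgb_frame):
    # 33 pose landmarks (x, y, z, visibility) + 2 hands x 21 landmarks (x, y, z).
    feats = np.zeros(33 * 4 + 2 * 21 * 3, dtype=np.float32)
    p = pose.process(rgb_frame)
    if p.pose_landmarks:
        feats[:33 * 4] = [v for lm in p.pose_landmarks.landmark
                          for v in (lm.x, lm.y, lm.z, lm.visibility)]
    h = hands.process(rgb_frame)
    if h.multi_hand_landmarks:
        for i, hand in enumerate(h.multi_hand_landmarks[:2]):
            start = 33 * 4 + i * 21 * 3
            feats[start:start + 21 * 3] = [v for lm in hand.landmark
                                           for v in (lm.x, lm.y, lm.z)]
    return feats

def window_features(video_path, seconds=7, fps=1):
    # Sample the video at roughly `fps` frames per second for `seconds` seconds
    # and concatenate the per-frame keypoint vectors into one window vector.
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(round(native_fps / fps)))
    feats, idx = [], 0
    while len(feats) < seconds * fps:
        ok, bgr = cap.read()
        if not ok:
            break
        if idx % step == 0:
            feats.append(frame_features(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)))
        idx += 1
    cap.release()
    return np.concatenate(feats) if feats else None

# Training and inference on labelled windows (labels such as "brushing_teeth",
# "washing_hands", "fixing_hair", "no_action" are placeholders for illustration):
# clf = LGBMClassifier()
# clf.fit(X_train, y_train)          # X: stacked window feature vectors
# predictions = clf.predict(X_test)

A model of this form outputs one activity label per observation window; per the abstract, windows of seven seconds or longer at one frame per second are sufficient for a weighted F1-score above 94%.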

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd81/11244600/b29228e7a39c/sensors-24-04386-g001.jpg
