
A computer vision approach for classifying isometric grip force exertion levels.

Affiliations

School of Industrial Engineering, Purdue University, West Lafayette, IN, USA.

Department of Computer Science, Purdue University, West Lafayette, IN, USA.

Publication Information

Ergonomics. 2020 Aug;63(8):1010-1026. doi: 10.1080/00140139.2020.1745898. Epub 2020 Apr 10.

Abstract

Exposure to high and/or repetitive force exertions can lead to musculoskeletal injuries. However, measuring worker force exertion levels is challenging, and existing techniques can be intrusive, can interfere with the human-machine interface, and/or are limited by subjectivity. In this work, computer vision techniques are developed to detect isometric grip exertions using facial videos and a wearable photoplethysmogram. Eighteen participants (19-24 years) performed isometric grip exertions at varying levels of maximum voluntary contraction. Novel features that predict forces were identified and extracted from the video and photoplethysmogram data. Two experiments, with two (High/Low) and three (0%MVC/50%MVC/100%MVC) labels, were performed to classify exertions. The deep neural network classifier performed best, with 96% and 87% accuracy for two- and three-level classification, respectively. The approach was robust to leaving subjects out during cross-validation (86% accuracy when 3 subjects were left out) and robust to noise (89% accuracy in correctly classifying talking activities as low force exertions).

Practitioner summary: Forceful exertions are contributing factors to musculoskeletal injuries, yet they remain difficult to measure in work environments. This paper presents an approach to estimating force exertion levels that is less distracting to workers, easier for practitioners to implement, and potentially applicable in a wide variety of workplaces.

Abbreviations: MSD: musculoskeletal disorders; ACGIH: American Conference of Governmental Industrial Hygienists; HAL: hand activity level; MVC: maximum voluntary contraction; PPG: photoplethysmogram; DNN: deep neural network; LOSO: leave-one-subject-out; ROC: receiver operating characteristic; AUC: area under the curve.
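To make the evaluation protocol concrete, the sketch below shows leave-one-subject-out (LOSO) cross-validation with a small neural-network classifier, mirroring the setup the abstract describes. It is a minimal illustration, not the authors' implementation: the feature matrix, labels, and subject IDs are synthetic placeholders standing in for the extracted video/PPG features, and scikit-learn's MLPClassifier stands in for the paper's DNN.

```python
# Minimal LOSO cross-validation sketch (illustrative only; not the paper's code).
# X, y, and groups are synthetic placeholders for the video/PPG features,
# the 0%/50%/100% MVC labels, and the per-sample subject IDs.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

n_subjects, samples_per_subject, n_features = 18, 40, 32
X = rng.normal(size=(n_subjects * samples_per_subject, n_features))  # placeholder features
y = rng.integers(0, 3, size=len(X))                  # classes 0/1/2 = 0%/50%/100% MVC
groups = np.repeat(np.arange(n_subjects), samples_per_subject)       # subject ID per sample

# Each fold trains on 17 subjects and tests on the held-out subject,
# so accuracy reflects generalisation to people unseen during training.
logo = LeaveOneGroupOut()
accuracies = []
for train_idx, test_idx in logo.split(X, y, groups):
    clf = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
    )
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"LOSO accuracy: {np.mean(accuracies):.2%} +/- {np.std(accuracies):.2%}")
```

Splitting by subject rather than by random sample is the key design choice here: it guarantees every test fold contains only unseen people, which is what the reported 86% accuracy with 3 subjects held out is probing.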

