
Automated High-Frequency Observations of Physical Activity Using Computer Vision.

Author Affiliations

Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA.

Department of Family Medicine and Public Health, University of California San Diego, La Jolla, CA.

Publication Information

Med Sci Sports Exerc. 2020 Sep;52(9):2029-2036. doi: 10.1249/MSS.0000000000002341.

Abstract

PURPOSE

To test the validity of the Ecological Video Identification of Physical Activity (EVIP) computer vision algorithms for automated video-based ecological assessment of physical activity in settings such as parks and schoolyards.

METHODS

Twenty-seven hours of video were collected from stationary overhead video cameras across 22 visits to nine sites capturing organized activities. Each person in the setting wore an accelerometer, and each second was classified as moderate-to-vigorous physical activity or sedentary/light activity. Data totaling 57,987 s were used to train and test computer vision algorithms for estimating the total number of people in the video and the number of people active (in moderate-to-vigorous physical activity) each second. In the testing data set (38,658 s), video-based System for Observing Play and Recreation in Communities (SOPARC) observations were conducted every 5 min (130 observations). Concordance correlation coefficients (CCC) and mean absolute errors (MAE) assessed agreement between (1) EVIP and ground truth (people counts + accelerometry) and (2) SOPARC observations and ground truth. Site- and scene-level correlates of error were investigated.
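For reference, the two agreement statistics named above have standard definitions; the abstract does not spell out the exact formulation used, so the usual Lin concordance correlation coefficient is assumed here. With x_i an estimated count, y_i the corresponding ground-truth count, and n paired observations:

CCC = \frac{2\,\mathrm{cov}(x, y)}{\sigma_x^2 + \sigma_y^2 + (\mu_x - \mu_y)^2}, \qquad MAE = \frac{1}{n}\sum_{i=1}^{n} \lvert x_i - y_i \rvert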

RESULTS

Agreement between EVIP and ground truth was high for the number of people in the scene (CCC = 0.88; MAE = 2.70) and moderate for the number of people active (CCC = 0.55; MAE = 2.57). The EVIP error was uncorrelated with camera placement, presence of obstructions or shadows, and setting type. For both the number of people in the scene and the number active, EVIP outperformed SOPARC observations in estimating ground truth values (CCCs were larger by 0.11-0.12, and MAEs were 41%-48% smaller).
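To make the comparison concrete, below is a minimal sketch of how CCC and MAE could be computed from paired count series. It is NumPy-based and purely illustrative; the array contents and function names are hypothetical and do not come from the paper, whose analysis pipeline is not described in the abstract.

import numpy as np

def concordance_ccc(est, truth):
    # Lin's concordance correlation coefficient for paired 1-D count series.
    est, truth = np.asarray(est, dtype=float), np.asarray(truth, dtype=float)
    mu_x, mu_y = est.mean(), truth.mean()
    var_x, var_y = est.var(), truth.var()            # population variances
    cov_xy = np.mean((est - mu_x) * (truth - mu_y))  # population covariance
    return 2.0 * cov_xy / (var_x + var_y + (mu_x - mu_y) ** 2)

def mean_absolute_error(est, truth):
    # Mean absolute difference between estimated and ground-truth counts.
    return float(np.mean(np.abs(np.asarray(est, dtype=float) - np.asarray(truth, dtype=float))))

# Hypothetical per-observation counts, for illustration only.
evip_counts = [12, 11, 13, 15, 14]
truth_counts = [13, 11, 14, 16, 13]
print(concordance_ccc(evip_counts, truth_counts), mean_absolute_error(evip_counts, truth_counts))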

CONCLUSIONS

Computer vision algorithms are promising for automated assessment of setting-based physical activity. Such tools would require less manpower than human observation, produce more and potentially more accurate data, and allow for ongoing monitoring and feedback to inform interventions.


