School of Information Engineering, Nanchang University, Nanchang 330031, China.
School of Software Engineering, Nanchang University, Nanchang 330029, China.
Sensors (Basel). 2018 Sep 16;18(9):3119. doi: 10.3390/s18093119.
Behavior analysis through posture recognition is an essential research topic in robotic systems. Sitting with an unhealthy posture for a long time seriously harms human health and may even lead to lumbar disease, cervical disease and myopia. Automatic vision-based detection of unhealthy sitting posture, as an example of posture detection in robotic systems, has become a hot research topic. However, existing methods focus only on extracting features of the human body itself and lack an understanding of the relationships among objects in the scene; hence, they fail to recognize some types of unhealthy sitting postures in complicated environments. To alleviate these problems, a scene recognition and semantic analysis approach to unhealthy sitting posture detection in screen-reading is proposed in this paper. The key skeletal points of the human body are detected and tracked with a Microsoft Kinect sensor. Meanwhile, a deep learning method, Faster R-CNN, is used in the scene recognition of our method to accurately detect objects and extract relevant features. Then our method performs semantic analysis through Gaussian-mixture behavioral clustering for scene understanding. The relevant features in the scene and the skeletal features extracted from the human body are fused into semantic features to discriminate various types of sitting postures. Experimental results demonstrated that our method accurately and effectively detected various types of unhealthy sitting postures in screen-reading and avoided false detections in complicated environments. Compared with existing methods, our proposed method detected more types of unhealthy sitting postures, including those that existing methods could not detect. Our method can potentially be applied and integrated as a medical assistant in robotic systems for health care and treatment.
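The fusion-and-clustering step of the pipeline can be illustrated with a minimal sketch. This is not the authors' code: the feature dimensions, the number of mixture components, and the synthetic data below are all assumptions for illustration, using scikit-learn's `GaussianMixture` as a stand-in for the paper's Gaussian-mixture behavioral clustering.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy stand-ins for the two feature sources described in the abstract:
# skeletal features (e.g. joint angles from Kinect tracking) and scene
# features (e.g. relative position of a screen detected by Faster R-CNN).
# Dimensions are hypothetical.
skeletal_features = rng.normal(size=(200, 4))
scene_features = rng.normal(size=(200, 2))

# Fuse both sources into one semantic feature vector per frame.
semantic_features = np.hstack([skeletal_features, scene_features])

# Cluster the fused features; each mixture component loosely stands in
# for one posture type (the paper's actual taxonomy is not reproduced here).
gmm = GaussianMixture(n_components=3, random_state=0).fit(semantic_features)
labels = gmm.predict(semantic_features)  # one posture-cluster label per frame
```

In a real system, a new frame's fused feature vector would be assigned to the component with the highest posterior probability, and components associated with unhealthy postures would trigger an alert.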