Hilario-Acuapan Gabriela, Ordaz-Hernández Keny, Castelán Mario, Lopez-Juarez Ismael
Robotics and Advanced Manufacturing Department, Centre for Research and Advanced Studies (CINVESTAV), Ramos Arizpe 25900, Mexico.
Sensors (Basel). 2025 Jun 10;25(12):3636. doi: 10.3390/s25123636.
This paper describes ongoing work on a recognition system for Mexican Sign Language (LSM). We propose a general sign decomposition divided into three parts: hand configuration (HC), arm movement (AM), and non-hand gestures (NHGs). This paper focuses on the AM features and reports the approach developed to analyze visual patterns in arm joint movements (wrists, shoulders, and elbows). For this research, a proprietary dataset, one that does not limit the recognition of arm movements, was developed with the active participation of the deaf community and LSM experts. We analyzed two case studies involving three sign subsets. For each sign, the pose was extracted to generate shapes of the joint paths traced during the arm movements, which were then fed to a CNN classifier. YOLOv8 was used for both pose estimation and visual pattern classification. The proposed approach, based on pose estimation, shows promising results for constructing CNN models that classify a wide range of signs.
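The abstract describes converting joint trajectories (wrists, shoulders, elbows) into shapes that a CNN can classify. As a minimal sketch of that intermediate step, the function below rasterizes a single joint's 2D path into a small binary image; it assumes the per-frame keypoints have already been extracted (e.g., with a YOLOv8 pose model) and is an illustrative reconstruction, not the authors' implementation.

```python
import numpy as np

def path_to_image(points, size=64):
    """Rasterize a 2D joint trajectory into a size x size binary image.

    `points` is a list of (x, y) pixel coordinates for one joint
    (e.g., a wrist) across video frames. The path is normalized to fit
    the canvas, preserving aspect ratio, and drawn by linear
    interpolation between consecutive samples.
    """
    pts = np.asarray(points, dtype=float)
    # Normalize the path into [0, size-1] while preserving aspect ratio.
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    span = max((maxs - mins).max(), 1e-9)
    pts = (pts - mins) / span * (size - 1)
    img = np.zeros((size, size), dtype=np.uint8)
    for a, b in zip(pts[:-1], pts[1:]):
        # Enough interpolation steps to leave no gaps between samples.
        steps = int(np.ceil(np.abs(b - a).max())) + 1
        for t in np.linspace(0.0, 1.0, steps):
            x, y = (1 - t) * a + t * b
            img[int(round(y)), int(round(x))] = 255
    return img

# Example: a short diagonal wrist movement across three frames.
trajectory = [(10, 10), (30, 25), (50, 40)]
image = path_to_image(trajectory)
```

An image like this, stacked per joint or drawn on a shared canvas, could then serve as input to a standard image classifier such as the YOLOv8 classification head mentioned in the abstract.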