School of Computing, University of Eastern Finland, FI-80101 Joensuu, Finland.
Samsung Electronics, Suwon 16677, Republic of Korea.
Sensors (Basel). 2024 Oct 8;24(19):6479. doi: 10.3390/s24196479.
Rock climbing has grown from a niche sport into a mainstream leisure activity and an Olympic discipline. Moreover, climbing can be studied as an example of a high-stakes perception-action task. Understanding what constitutes an expert climber, however, is neither simple nor straightforward. As a dynamic and high-risk activity, climbing requires a precise interplay between cognition, perception, and action execution. While prior research has predominantly focused on the movement aspect of climbing (i.e., skeletal posture and individual limb movements), recent studies have also examined the climber's visual attention and its links to performance. Associating the climber's attention with their actions, however, has traditionally required frame-by-frame manual coding of recorded eye-tracking videos. To overcome this challenge and automatically contextualize the analysis of eye movements in indoor climbing, we present deep learning-driven (YOLOv5) hold detection that enables automatic grasp recognition. To demonstrate the framework, we examined an expert climber's eye movements and egocentric perspective acquired with eye-tracking glasses (SMI and Tobii Glasses 2). Using the framework, we observed that the expert climber's grasping duration was positively correlated with total fixation duration (0.807) and fixation count (0.864), but negatively correlated with fixation rate (-0.402) and saccade rate (-0.344). These findings point to moments of cognitive processing and visual search that occur during decision making and route prospecting. Our work contributes to research on eye-body performance and coordination in high-stakes contexts, informs sport science, and expands applications such as training optimization, injury prevention, and coaching.
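The abstract describes contextualizing gaze data against detected climbing holds. A minimal sketch of that idea, assuming YOLOv5-style detections in `xyxy` format (the actual pipeline, thresholds, and data structures in the paper may differ; the function name, example boxes, and gaze coordinates below are hypothetical):

```python
# Minimal sketch (hypothetical helper and data): assigning an eye-tracking
# gaze point to a detected climbing hold. Each detection follows the
# YOLOv5 .xyxy convention: (x1, y1, x2, y2, confidence, class_id).

def gaze_to_hold(gaze_x, gaze_y, detections, min_conf=0.5):
    """Return the index of the hold whose bounding box contains the gaze
    point, preferring the highest-confidence detection; None if no hit."""
    best = None
    for i, (x1, y1, x2, y2, conf, _cls) in enumerate(detections):
        if conf < min_conf:
            continue  # discard low-confidence detections
        if x1 <= gaze_x <= x2 and y1 <= gaze_y <= y2:
            if best is None or conf > detections[best][4]:
                best = i
    return best

# Hypothetical example frame: two detected holds and two gaze samples.
holds = [
    (100, 200, 160, 260, 0.91, 0),  # hold A
    (300, 120, 350, 180, 0.84, 0),  # hold B
]
print(gaze_to_hold(130, 230, holds))  # gaze inside hold A -> 0
print(gaze_to_hold(500, 500, holds))  # gaze on bare wall -> None
```

Mapping each fixation to a hold (or to none) in this way is what allows fixation durations, counts, and rates to be aggregated per grasp rather than coded manually frame by frame.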