Enhancing fall risk assessment: instrumenting vision with deep learning during walks.

Affiliations

Department of Computer and Information Sciences, Northumbria University, Newcastle, NE1 8ST, UK.

Department of Kinesiology and Educational Psychology, Washington State University, Pullman, USA.

Publication

J Neuroeng Rehabil. 2024 Jun 22;21(1):106. doi: 10.1186/s12984-024-01400-2.

Abstract

BACKGROUND

Falls are common in a range of clinical cohorts, where routine risk assessment often comprises subjective visual observation only. Typically, observational assessment involves evaluation of an individual's gait during scripted walking protocols within a lab to identify deficits that potentially increase fall risk, but subtle deficits may not be readily observable. Therefore, objective approaches (e.g., inertial measurement units, IMUs) are useful for quantifying high-resolution gait characteristics, enabling more informed fall risk assessment by capturing subtle deficits. However, IMU-based gait instrumentation alone is limited, failing to consider participant behaviour and details within the environment (e.g., obstacles). Video-based eye-tracking glasses may provide additional insight into fall risk, clarifying how people traverse environments based on head and eye movements. Recording head and eye movements can provide insights into how the allocation of visual attention to environmental stimuli influences successful navigation around obstacles. Yet, manual review of video data to evaluate head and eye movements is time-consuming and subjective. An automated approach is needed, but none currently exists. This paper proposes a deep learning-based object detection algorithm (VARFA) to instrument vision and video data during walks, complementing instrumented gait.

METHOD

The approach automatically labels video data captured in a gait lab to assess visual attention and details of the environment. The proposed algorithm uses a YOLOv8 model trained on a novel lab-based dataset.

RESULTS

VARFA achieved excellent evaluation metrics (0.93 mAP50), identifying and localizing static objects (e.g., obstacles in the walking path) with an average accuracy of 93%. Similarly, a U-Net-based track/path segmentation model achieved good metrics (IoU 0.82), suggesting that the predicted tracks (i.e., walking paths) align closely with the actual track, with an overlap of 82%. Notably, both models achieved these metrics while processing at real-time speeds, demonstrating efficiency and effectiveness for pragmatic applications.
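The two reported metrics are built on intersection-over-union (IoU). A minimal sketch (not the paper's code) of how they are defined: pixel-wise IoU for the U-Net track segmentation (the reported 0.82 overlap), and the IoU >= 0.5 match criterion that underlies the mAP50 detection metric.

```python
def mask_iou(pred, target):
    """Pixel-wise IoU of two binary masks, given as nested lists of 0/1 values."""
    inter = sum(p & t for row_p, row_t in zip(pred, target)
                for p, t in zip(row_p, row_t))
    union = sum(p | t for row_p, row_t in zip(pred, target)
                for p, t in zip(row_p, row_t))
    return inter / union if union else 1.0  # two empty masks agree perfectly


def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0


# For mAP50, a predicted box counts as a true positive when its IoU with the
# ground-truth box is at least 0.5 (hypothetical boxes for illustration).
pred_box, gt_box = (0, 0, 10, 10), (2, 0, 12, 10)
print(box_iou(pred_box, gt_box))  # 80/120 ≈ 0.667, a match at the 0.5 threshold
```

Averaging per-class detection precision over recall levels at this 0.5 IoU threshold yields the mAP50 figure; the segmentation IoU of 0.82 is simply the mean pixel-wise overlap between predicted and annotated walking paths.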

CONCLUSION

The instrumented approach improves the efficiency and accuracy of fall risk assessment by evaluating the visual allocation of attention (i.e., information about when and where a person is attending) during navigation, improving the breadth of instrumentation in this area. VARFA could be used to better inform fall risk assessment by providing behaviour and context data to complement instrumented data (e.g., IMU) during gait tasks. That may have notable implications for (e.g., personalized) rehabilitation across a wide range of clinical cohorts where poor gait and increased fall risk are common.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/36f2/11193231/a0f6f82e278d/12984_2024_1400_Fig1_HTML.jpg
