


Fusion of Human Gaze and Machine Vision for Predicting Intended Locomotion Mode.

Publication Information

IEEE Trans Neural Syst Rehabil Eng. 2022;30:1103-1112. doi: 10.1109/TNSRE.2022.3168796. Epub 2022 May 3.

DOI: 10.1109/TNSRE.2022.3168796
PMID: 35442889
Abstract

Predicting the user's intended locomotion mode is critical for wearable robot control to assist the user's seamless transitions when walking on changing terrains. Although machine vision has recently proven to be a promising tool in identifying upcoming terrains in the travel path, existing approaches are limited to environment perception rather than human intent recognition that is essential for coordinated wearable robot operation. Hence, in this study, we aim to develop a novel system that fuses the human gaze (representing user intent) and machine vision (capturing environmental information) for accurate prediction of the user's locomotion mode. The system possesses multimodal visual information and recognizes user's locomotion intent in a complex scene, where multiple terrains are present. Additionally, based on the dynamic time warping algorithm, a fusion strategy was developed to align temporal predictions from individual modalities while producing flexible decisions on the timing of locomotion mode transition for wearable robot control. System performance was validated using experimental data collected from five participants, showing high accuracy (over 96% on average) of intent recognition and reliable decision-making on locomotion transition with adjustable lead time. The promising results demonstrate the potential of fusing human gaze and machine vision for locomotion intent recognition of lower limb wearable robots.
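The fusion strategy in the abstract rests on dynamic time warping (DTW) to align the temporal predictions from the two modalities. As an illustrative sketch only — the prediction streams, cost function, and function name below are hypothetical stand-ins, not the authors' implementation — a classic DTW alignment between two per-frame prediction streams looks like this:

```python
# Illustrative DTW sketch: align two hypothetical per-frame prediction
# streams (e.g. a gaze-based and a vision-based mode score), standing in
# for the paper's gaze/vision fusion. Not the authors' implementation.

def dtw_align(seq_a, seq_b):
    """Return the DTW cost and the optimal warping path aligning
    two sequences of per-frame prediction scores."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    # D[i][j] = minimal cumulative cost aligning seq_a[:i] with seq_b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # step in seq_a only
                                 D[i][j - 1],      # step in seq_b only
                                 D[i - 1][j - 1])  # step in both
    # Backtrack the warping path from (n, m) to (0, 0).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
        if step == D[i - 1][j - 1]:
            i, j = i - 1, j - 1
        elif step == D[i - 1][j]:
            i -= 1
        else:
            j -= 1
    return D[n][m], path[::-1]

# Example: the gaze stream commits to the new mode a few frames
# before the vision stream; DTW absorbs that lead time.
gaze   = [0.0, 0.1, 0.9, 1.0, 1.0]
vision = [0.0, 0.0, 0.1, 0.9, 1.0]
cost, path = dtw_align(gaze, vision)
```

The warping path makes the modalities' timing mismatch explicit, which is what lets the fusion strategy choose when to trigger the mode transition rather than forcing a frame-by-frame vote.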


Similar Articles

1
Fusion of Human Gaze and Machine Vision for Predicting Intended Locomotion Mode.
IEEE Trans Neural Syst Rehabil Eng. 2022;30:1103-1112. doi: 10.1109/TNSRE.2022.3168796. Epub 2022 May 3.
2
A Novel Coordinated Motion Fusion-Based Walking-Aid Robot System.
Sensors (Basel). 2018 Aug 22;18(9):2761. doi: 10.3390/s18092761.
3
Locomotion Mode Recognition for Walking on Three Terrains Based on sEMG of Lower Limb and Back Muscles.
Sensors (Basel). 2021 Apr 22;21(9):2933. doi: 10.3390/s21092933.
4
Locomotion Mode Recognition Algorithm Based on Gaussian Mixture Model Using IMU Sensors.
Sensors (Basel). 2021 Apr 15;21(8):2785. doi: 10.3390/s21082785.
5
A multimodal framework based on deep belief network for human locomotion intent prediction.
Biomed Eng Lett. 2024 Feb 9;14(3):559-569. doi: 10.1007/s13534-024-00351-w. eCollection 2024 May.
6
A CNN-Based Method for Intent Recognition Using Inertial Measurement Units and Intelligent Lower Limb Prosthesis.
IEEE Trans Neural Syst Rehabil Eng. 2019 May;27(5):1032-1042. doi: 10.1109/TNSRE.2019.2909585. Epub 2019 Apr 9.
7
A lower-limb power-assist robot with perception-assist.
IEEE Int Conf Rehabil Robot. 2011;2011:5975445. doi: 10.1109/ICORR.2011.5975445.
8
Unsupervised Cross-Subject Adaptation for Predicting Human Locomotion Intent.
IEEE Trans Neural Syst Rehabil Eng. 2020 Mar;28(3):646-657. doi: 10.1109/TNSRE.2020.2966749. Epub 2020 Jan 15.
9
A Muscle Synergy-Inspired Method of Detecting Human Movement Intentions Based on Wearable Sensor Fusion.
IEEE Trans Neural Syst Rehabil Eng. 2021;29:1089-1098. doi: 10.1109/TNSRE.2021.3087135. Epub 2021 Jun 15.
10
"Look where you're going!": gaze behaviour associated with maintaining and changing the direction of locomotion.
Exp Brain Res. 2002 Mar;143(2):221-30. doi: 10.1007/s00221-001-0983-7. Epub 2002 Jan 10.

Cited By

1
L-AVATeD: The lidar and visual walking terrain dataset.
Front Robot AI. 2024 Dec 4;11:1384575. doi: 10.3389/frobt.2024.1384575. eCollection 2024.
2
Continuous Locomotion Mode and Task Identification for an Assistive Exoskeleton Based on Neuromuscular-Mechanical Fusion.
Bioengineering (Basel). 2024 Feb 2;11(2):150. doi: 10.3390/bioengineering11020150.
3
Wearable sensing for understanding and influencing human movement in ecological contexts.
Curr Opin Biomed Eng. 2023 Dec;28. doi: 10.1016/j.cobme.2023.100492. Epub 2023 Jul 24.
4
Continuous A-Mode Ultrasound-Based Prediction of Transfemoral Amputee Prosthesis Kinematics Across Different Ambulation Tasks.
IEEE Trans Biomed Eng. 2024 Jan;71(1):56-67. doi: 10.1109/TBME.2023.3292032. Epub 2023 Dec 22.
5
Data-Driven Variable Impedance Control of a Powered Knee-Ankle Prosthesis for Adaptive Speed and Incline Walking.
IEEE Trans Robot. 2023 Jun;39(3):2151-2169. doi: 10.1109/tro.2022.3226887. Epub 2023 Jan 13.
6
A-Mode Ultrasound-Based Prediction of Transfemoral Amputee Prosthesis Walking Kinematics via an Artificial Neural Network.
IEEE Trans Neural Syst Rehabil Eng. 2023;31:1511-1520. doi: 10.1109/TNSRE.2023.3248647. Epub 2023 Mar 8.