
Sequential Decision Fusion for Environmental Classification in Assistive Walking.

Publication Information

IEEE Trans Neural Syst Rehabil Eng. 2019 Sep;27(9):1780-1790. doi: 10.1109/TNSRE.2019.2935765. Epub 2019 Aug 16.

Abstract

Powered prostheses are effective for helping amputees walk in a single environment, but these devices are inconvenient to use in complex environments. To help amputees walk in complex environments, prostheses need to understand the amputee's motion intent. Recently, researchers have found that vision sensors can be utilized to classify environments and predict the motion intent of amputees. Although previous studies have been able to classify environments accurately in offline analysis, the corresponding time delay has not been considered. To increase the accuracy and decrease the time delay of environmental classification, the present paper proposes a new decision fusion method. In this method, the sequential decisions of environmental classification are fused by constructing a hidden Markov model and designing a transition probability matrix. The developed method is evaluated by inviting five able-bodied subjects and three amputees to perform indoor and outdoor walking experiments. The results indicate that the proposed method can classify environments with accuracy improvements of 1.01% (indoor) and 2.48% (outdoor) over the previous voting method when a delay of only one frame is incorporated. The present method also achieves higher classification accuracy than recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) methods. At the same classification accuracy, the proposed method decreases the time delay by 67 ms (indoor) and 733 ms (outdoor) in comparison to the previous voting method. Besides classifying environments, the proposed decision fusion method may be able to optimize sequential predictions of human motion intent.
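To make the fusion idea concrete, the sketch below shows one common way such an HMM-based scheme can be realized: per-frame classifier posteriors are treated as evidence and propagated through a transition probability matrix with high self-transition probability, so that isolated misclassifications are smoothed out. This is a minimal illustrative sketch, not the paper's implementation; the class names, the 0.95 self-transition value, and the fuse_decisions helper are assumptions introduced here.

```python
import numpy as np

# Hypothetical sketch of HMM-style sequential decision fusion:
# per-frame classifier posteriors over environment classes are combined
# with a hand-designed transition probability matrix via recursive
# Bayesian (forward) filtering.

CLASSES = ["level_ground", "up_stairs", "down_stairs", "up_ramp", "down_ramp"]
N = len(CLASSES)

# Transition matrix: a high self-transition probability encodes the prior
# that the walking environment rarely changes between consecutive frames.
STAY = 0.95
TRANS = np.full((N, N), (1.0 - STAY) / (N - 1))
np.fill_diagonal(TRANS, STAY)


def fuse_decisions(frame_posteriors):
    """Fuse a T x N sequence of per-frame class posteriors into
    per-frame labels by forward filtering through the HMM."""
    belief = np.full(N, 1.0 / N)      # uniform prior over environments
    fused = []
    for post in frame_posteriors:
        belief = TRANS.T @ belief     # predict: propagate belief through transitions
        belief *= post                # update: weight by the current frame's evidence
        belief /= belief.sum()        # renormalize to a proper distribution
        fused.append(CLASSES[int(np.argmax(belief))])
    return fused


if __name__ == "__main__":
    # A noisy classifier briefly votes "up_stairs" at frame 3;
    # the fused output stays on "level_ground".
    posteriors = np.array([
        [0.80, 0.05, 0.05, 0.05, 0.05],
        [0.70, 0.10, 0.10, 0.05, 0.05],
        [0.20, 0.60, 0.10, 0.05, 0.05],
        [0.80, 0.05, 0.05, 0.05, 0.05],
    ])
    print(fuse_decisions(posteriors))
```

Under these assumptions, the fusion adds only a one-frame filtering step per decision, which is consistent with the abstract's emphasis on achieving higher accuracy at a small time delay.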

