Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, UMR 5287, CNRS and Univ. Bordeaux, 146 rue Léo Saignat, 33076, Bordeaux, France.
Laboratoire Bordelais de Recherche en Informatique, UMR 5800, CNRS, Univ. Bordeaux and Bordeaux INP, 351 cours de la Libération, 33405, Talence, France.
J Neuroeng Rehabil. 2021 Jan 6;18(1):3. doi: 10.1186/s12984-020-00793-0.
Prosthetic restoration of reach and grasp function after a trans-humeral amputation requires control of multiple distal degrees of freedom in elbow, wrist and fingers. However, such a high level of amputation reduces the amount of available myoelectric and kinematic information from the residual limb.
To overcome these limits, we added contextual information about the target's location and orientation, of the kind that can now be extracted from gaze tracking combined with computer-vision tools. For the task of picking and placing a bottle in various positions and orientations in a 3D virtual scene, we trained artificial neural networks to predict postures of an intact subject's elbow, forearm and wrist (4 degrees of freedom) either solely from shoulder kinematics or with additional knowledge of the movement goal. Subjects then performed the same tasks in the virtual scene with distal joints predicted by the context-aware network.
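To make the mapping concrete, the sketch below shows one plausible form of such a network: a small multilayer perceptron regressing distal joint angles from proximal kinematics plus target context. The input layout (3 shoulder angles, a 3D target position and a target orientation), hidden-layer size, and training loop are illustrative assumptions, not the architecture reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed input layout (illustrative, not the paper's exact features):
#   3 shoulder angles + 3D target position + 1 target orientation = 7 inputs.
# Outputs: 4 distal joint angles (elbow flexion, forearm rotation,
#   wrist flexion, wrist deviation).
N_IN, N_HID, N_OUT = 7, 32, 4

# Single-hidden-layer MLP parameters
W1 = rng.normal(0.0, 0.1, (N_IN, N_HID))
b1 = np.zeros(N_HID)
W2 = rng.normal(0.0, 0.1, (N_HID, N_OUT))
b2 = np.zeros(N_OUT)

def predict(x):
    """Map proximal kinematics + target context to distal joint angles."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def train_step(x, y, lr=0.01):
    """One gradient-descent step on mean-squared error; returns pre-step loss."""
    global W1, b1, W2, b2
    h = np.tanh(x @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y                          # (batch, N_OUT)
    n = x.shape[0]
    gW2 = h.T @ err / n
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)       # backprop through tanh
    gW1 = x.T @ dh / n
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
    return float((err ** 2).mean())

# Synthetic stand-in data (random, for demonstration only): 256 samples of
# (shoulder kinematics + target context) paired with distal joint angles.
X = rng.normal(size=(256, N_IN))
Y = rng.normal(size=(256, N_OUT))
loss0 = train_step(X, Y)
for _ in range(200):
    loss = train_step(X, Y)
assert loss < loss0  # training error decreases on this set
```

In this framing, the kinematic-only condition simply drops the context columns from the input, which is what makes the comparison between the two networks in the results meaningful.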
Average movement times of 1.22 s were only slightly longer than the naturally controlled movements (0.82 s). When using a kinematic-only network, movement times were much longer (2.31 s) and compensatory movements from trunk and shoulder were much larger. Integrating contextual information also gave rise to motor synergies closer to natural joint coordination.
Although notable challenges remain before applying the proposed control scheme to a real-world prosthesis, our study shows that adding contextual information to command signals greatly improves prediction of distal joint angles for prosthetic control.