Daniel J L L Pinheiro, Jean Faber, Silvestro Micera, Solaiman Shokur
Division of Neuroscience, Department of Neurology and Neurosurgery, Neuroengineering and Neurocognition Laboratory, Escola Paulista de Medicina, Universidade Federal de São Paulo, São Paulo, Brazil.
Translational Neural Engineering Lab, Institute Neuro X, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland.
Front Neurorobot. 2023 Jun 5;17:1154427. doi: 10.3389/fnbot.2023.1154427. eCollection 2023.
Human-machine interfaces (HMIs) can be used to decode a user's motor intention to control an external device. People who suffer from motor disabilities, such as spinal cord injury, can benefit from the use of these interfaces. While many solutions exist in this direction, there is still room for improvement from the decoding, hardware, and motor-learning perspectives. Here we show, in a series of experiments with non-disabled participants, a novel decoding and training paradigm that allows naïve participants to use their auricular muscles (AM) to control two degrees of freedom of a virtual cursor. AMs are particularly interesting because they are vestigial muscles and are often preserved after neurological diseases. Our method relies on surface electromyographic recordings and uses the contraction levels of both AMs to modulate the velocity and direction of a cursor in a two-dimensional paradigm. We used a locking mechanism that fixes the current position of each axis separately, enabling the user to stop the cursor at a given location. Five volunteers performed a five-session training procedure (20-30 min per session) with a 2D center-out task. All participants increased their success rate (initial: 52.78 ± 5.56%; final: 72.22 ± 6.67%; median ± median absolute deviation) and improved their trajectory performance throughout the training. We implemented a dual task with visual distractors to assess the mental challenge of controlling the cursor while executing another task; our results suggest that participants could perform the task under cognitively demanding conditions (success rate of 66.67 ± 5.56%). Finally, using the NASA Task Load Index questionnaire, we found that participants reported lower mental demand and effort in the last two sessions. In summary, all subjects learned to control the movement of a cursor with two degrees of freedom using their AMs, with a low impact on cognitive load.
Our study is a first step toward AM-based HMI decoders for people with motor disabilities, such as spinal cord injury.
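The control scheme described above (per-muscle contraction level modulating cursor velocity, with a per-axis locking mechanism) can be illustrated with a minimal sketch. The abstract does not specify the exact muscle-to-axis mapping, gains, or envelope estimator, so everything below is a hypothetical reconstruction: an RMS envelope stands in for "contraction level," the left and right AM envelopes are arbitrarily assigned to the x and y axes, and `gain`, `threshold`, and the lock flags are illustrative parameters.

```python
import numpy as np

def emg_envelope(samples, window=200):
    """RMS envelope over the most recent window of (assumed pre-filtered) sEMG samples."""
    return float(np.sqrt(np.mean(np.square(samples[-window:]))))

class CursorController:
    """Toy 2D cursor driven by two muscle contraction levels.

    Hypothetical mapping (not the paper's exact scheme): the left AM
    envelope drives x-velocity, the right AM envelope drives y-velocity,
    and each axis can be locked independently to freeze its position.
    """
    def __init__(self, gain=1.0, threshold=0.1):
        self.pos = np.zeros(2)        # cursor position (x, y)
        self.gain = gain              # velocity scaling (illustrative)
        self.threshold = threshold    # rest-level noise floor (illustrative)
        self.locked = [False, False]  # per-axis locking mechanism

    def update(self, left_level, right_level, dt=0.05):
        """Advance the cursor by one control cycle of duration dt seconds."""
        for axis, level in enumerate((left_level, right_level)):
            if self.locked[axis] or level < self.threshold:
                continue  # axis frozen, or contraction below the noise floor
            self.pos[axis] += self.gain * level * dt
        return self.pos.copy()
```

In this sketch, locking an axis lets the user hold one coordinate fixed while steering the other, which matches the abstract's description of stopping the cursor at a chosen location on each axis separately.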