Rojas Mario, Ponce Pedro, Molina Arturo
Tecnologico de Monterrey, School of Engineering and Sciences, Mexico City, Mexico.
Front Hum Neurosci. 2022 Jun 10;16:867377. doi: 10.3389/fnhum.2022.867377. eCollection 2022.
Hands-free interfaces are essential for people with limited mobility to interact with biomedical or electronic devices. However, few sensing platforms can quickly tailor the interface to users with disabilities. This article therefore proposes a sensing platform that patients with mobility impairments can use to operate electronic devices, thereby increasing their independence. A new sensing scheme is developed using three hands-free signals as inputs: voice commands, head movements, and eye gestures. These signals are acquired with non-invasive sensors: a microphone for the speech commands, an accelerometer to detect inertial head movements, and an infrared oculography sensor to register eye gestures. The processed signals are received as user commands by an output unit, which provides several communication ports for sending control signals to other devices. The interaction methods are intuitive and could extend the boundaries within which people with disabilities manipulate local or remote digital systems. As a case study, two volunteers with severe disabilities used the sensing platform to steer a power wheelchair. Participants performed 15 common skills for wheelchair users, and their capacities were evaluated according to a standard test. Using head control, volunteers A and B scored 93.3 and 86.6%, respectively; using voice control, they scored 63.3 and 66.6%, respectively. These results show that the end-users achieved high performance, completing most of the skills with the head-movement interface, whereas they were unable to complete most of the skills with voice control. These results provide valuable information for tailoring the sensing platform to end-user needs.
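The scheme described above routes recognized voice, head, and eye events through an output unit that emits control signals. A minimal sketch of such a dispatch stage might look as follows; the channel names, gesture labels, and confidence threshold are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a hands-free command dispatcher: each input
# channel (voice, head, eye) produces recognized gestures, which are
# mapped to output commands for a device such as a power wheelchair.
from dataclasses import dataclass
from typing import Optional


@dataclass
class InputEvent:
    channel: str       # "voice", "head", or "eye" (assumed labels)
    gesture: str       # recognized gesture, e.g. "tilt_left"
    confidence: float  # recognizer confidence in [0, 1]


# Illustrative mapping from (channel, gesture) pairs to device commands.
COMMAND_MAP = {
    ("voice", "go"): "FORWARD",
    ("voice", "stop"): "STOP",
    ("head", "tilt_forward"): "FORWARD",
    ("head", "tilt_left"): "TURN_LEFT",
    ("head", "tilt_right"): "TURN_RIGHT",
    ("eye", "blink_long"): "STOP",
}


def dispatch(event: InputEvent, threshold: float = 0.7) -> Optional[str]:
    """Return the device command for a recognized event, or None when
    the event is unrecognized or below the confidence threshold."""
    if event.confidence < threshold:
        return None
    return COMMAND_MAP.get((event.channel, event.gesture))
```

For example, `dispatch(InputEvent("head", "tilt_left", 0.9))` yields `"TURN_LEFT"`, while a low-confidence voice event is dropped rather than forwarded to the device.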