Zhang Xiaodong, Zhang Teng, Jiang Yongyu, Zhang Weiming, Lu Zhufeng, Wang Yu, Tao Qing
School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China.
Shaanxi Key Laboratory of Intelligent Robot, Xi'an, Shaanxi, 710049, China.
Heliyon. 2024 Feb 19;10(5):e26521. doi: 10.1016/j.heliyon.2024.e26521. eCollection 2024 Mar 15.
The brain-computer interface (BCI) system based on steady-state visual evoked potentials (SSVEP) is expected to help disabled patients achieve alternative prosthetic hand assistance. However, existing studies still have shortcomings in interaction aspects such as the stimulus paradigm and control logic. This study aims to innovate the visual stimulus paradigm and the asynchronous decoding/control strategy by integrating augmented reality technology, and to propose an asynchronous pattern recognition algorithm, thereby improving the interaction logic and practical applicability of the prosthetic hand with the BCI system.
An asynchronous visual stimulus paradigm based on an augmented reality (AR) interface was proposed in this paper, comprising 8 control modes: Grasp, Put down, Pinch, Point, Fist, Palm push, Hold pen, and Initial. According to the attentional orienting characteristics of the paradigm, a novel asynchronous pattern recognition algorithm combining center extended canonical correlation analysis and a support vector machine (Center-ECCA-SVM) was proposed. This study then proposed an intelligent BCI system switch based on a deep learning object detection algorithm (YOLOv4) to improve the level of user interaction. Finally, two experiments were designed to test the performance of the brain-controlled prosthetic hand system and its practical performance in real scenarios.
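The abstract does not detail the Center-ECCA-SVM algorithm itself. The sketch below only illustrates the general pipeline under stated assumptions: plain canonical correlation analysis (CCA) against sinusoidal references stands in for the paper's center extended variant, and an SVM over the correlation features makes the asynchronous control-versus-idle decision. The sampling rate, stimulus frequencies, window length, and harmonic count are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: CCA-based SSVEP recognition feeding an SVM for
# asynchronous (control vs. idle) decisions. Plain CCA is a simplified
# stand-in for the paper's Center-ECCA; all constants are assumptions.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC

FS = 250          # sampling rate in Hz (assumed)
WIN = 2 * FS      # 2 s analysis window, matching the reported results
FREQS = [8, 9, 10, 11, 12, 13, 14, 15]   # 8 stimulus frequencies (assumed)
N_HARM = 3        # sinusoidal harmonics per reference (assumed)

def make_reference(freq, n_samples, fs=FS, n_harm=N_HARM):
    """Sin/cos reference matrix (n_samples x 2*n_harm) for one frequency."""
    t = np.arange(n_samples) / fs
    return np.column_stack(
        [f(2 * np.pi * (h + 1) * freq * t)
         for h in range(n_harm) for f in (np.sin, np.cos)])

def cca_features(eeg):
    """Max canonical correlation between an EEG window
    (n_samples x n_channels) and each frequency's reference."""
    feats = []
    for f in FREQS:
        Y = make_reference(f, eeg.shape[0])
        u, v = CCA(n_components=1).fit_transform(eeg, Y)
        feats.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))
    return np.asarray(feats)

# Asynchronous stage: an SVM trained on correlation-feature vectors
# separates attended (control) windows from idle ones, so the prosthetic
# hand acts only when the user is actually gazing at a stimulus.
# X_train: stacked cca_features(...) rows; y_train: 1 = control, 0 = idle.
svm = SVC(kernel="rbf")
# svm.fit(X_train, y_train)
# feats = cca_features(eeg_window)
# target = int(np.argmax(feats)) if svm.predict([feats])[0] == 1 else None
```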
Under the AR paradigm of this study, compared with the liquid crystal display (LCD) paradigm, the average SSVEP spectrum amplitude across subjects increased by 17.41%, and the signal-to-noise ratio (SNR) increased by 3.52%. The average stimulus pattern recognition accuracy was 96.71 ± 3.91%, which was 2.62% higher than under the LCD paradigm. With a data analysis time of 2 s, the Center-ECCA-SVM classifier achieved 94.66 ± 3.87% and 97.40 ± 2.78% asynchronous pattern recognition accuracy under the Normal and Tolerant metrics, respectively. The YOLOv4-tiny model achieved a speed of 25.29 fps and 96.4% confidence for the prosthetic hand in real-time detection. Finally, the brain-controlled prosthetic hand helped the subjects complete 4 kinds of daily life tasks in real scenes, and the completion times were all within an acceptable range, verifying the effectiveness and practicability of the system.
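The abstract does not state the exact SNR definition behind these figures. A common narrow-band convention is the FFT amplitude at the stimulus frequency divided by the mean amplitude of the neighboring bins; the sketch below computes both quantities under that assumption, with the neighbor count as an illustrative parameter.

```python
# Minimal sketch of a narrow-band SSVEP amplitude/SNR measure: amplitude
# at the stimulus bin vs. the mean of adjacent bins. This is an assumed
# convention, not necessarily the paper's exact formula.
import numpy as np

def ssvep_snr(signal, fs, stim_freq, n_neighbors=10):
    """Return (amplitude at stim_freq, SNR against adjacent FFT bins)."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    k = int(np.argmin(np.abs(freqs - stim_freq)))   # stimulus bin index
    lo, hi = max(k - n_neighbors, 0), k + n_neighbors + 1
    neighbors = np.r_[spectrum[lo:k], spectrum[k + 1:hi]]
    amplitude = spectrum[k]
    return amplitude, amplitude / neighbors.mean()
```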
This research focuses on improving the user interaction level of the prosthetic hand with the BCI system, and makes improvements in the SSVEP paradigm, asynchronous pattern recognition, interaction, and control logic. Furthermore, it provides support for BCI applications in alternative prosthetic control and movement disorder rehabilitation programs.