Pritchard Michael, Campelo Felipe, Goldingay Harry
Department of Applied AI and Robotics, Aston University, B4 7ET Birmingham, United Kingdom.
School of Engineering Mathematics and Technology, University of Bristol, BS8 1QU Bristol, United Kingdom.
J Neural Eng. 2025 Jul 10;22(4). doi: 10.1088/1741-2552/ade1f9.
Upper-limb gesture identification is an important problem in the advancement of robotic prostheses. Prevailing research into classifying electromyographic (EMG) muscular data or electroencephalographic (EEG) brain data for this purpose is often limited in methodological rigour, the extent to which generalisation is demonstrated, and the granularity of gestures classified. This work evaluates three architectures for multimodal fusion of EMG and EEG data in gesture classification, including a novel Hierarchical strategy, in both subject-specific and subject-independent settings.

We propose an unbiased methodology for designing classifiers centred on Automated Machine Learning through Combined Algorithm Selection & Hyperparameter Optimisation (CASH), the first application of this technique to the biosignal domain. Using CASH, we introduce an end-to-end pipeline for data handling, algorithm development, modelling, and fair comparison, addressing established weaknesses in the biosignal literature.

EMG-EEG fusion is shown to provide significantly higher subject-independent accuracy in same-hand multi-gesture classification than an equivalent EMG classifier. Our CASH-based design methodology produces a more accurate subject-specific classifier design than that recommended by the literature. Our novel Hierarchical ensemble of classical models outperforms a domain-standard CNN architecture. We achieve a subject-independent EEG multiclass accuracy competitive with many subject-specific approaches used for similar, or more easily separable, problems.

To our knowledge, this is the first work to establish a systematic framework for the automatic, unbiased design and testing of fusion architectures in the context of multimodal biosignal classification. We demonstrate a robust end-to-end modelling pipeline for biosignal classification problems which, if adopted in future research, can help address the risk of bias common in multimodal BCI studies, enabling more reliable and rigorous comparison of proposed classifiers than is usual in the domain. We apply the approach to a more complex task than is typical of EMG-EEG fusion research, surpassing literature-recommended designs and verifying the efficacy of a novel Hierarchical fusion architecture.
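For readers unfamiliar with decision-level multimodal fusion, the sketch below illustrates one generic way to combine per-modality classifiers. It is not the paper's Hierarchical architecture, whose exact structure is described in the full text; the feature arrays X_emg and X_eeg, the labels y, and the choice of base and meta models are illustrative assumptions only.

```python
# Generic late-fusion (stacking-style) sketch for EMG-EEG gesture classification.
# Assumes pre-extracted, time-aligned feature windows X_emg, X_eeg and labels y.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def fit_late_fusion(X_emg, X_eeg, y):
    """Train one classifier per modality, then a meta-classifier on the
    concatenated class-probability outputs (decision-level fusion)."""
    emg_clf = RandomForestClassifier(n_estimators=200).fit(X_emg, y)
    eeg_clf = RandomForestClassifier(n_estimators=200).fit(X_eeg, y)
    meta_features = np.hstack([emg_clf.predict_proba(X_emg),
                               eeg_clf.predict_proba(X_eeg)])
    meta_clf = LogisticRegression(max_iter=1000).fit(meta_features, y)
    return emg_clf, eeg_clf, meta_clf

def predict_late_fusion(models, X_emg, X_eeg):
    """Fuse the per-modality probability outputs and predict a gesture label."""
    emg_clf, eeg_clf, meta_clf = models
    meta_features = np.hstack([emg_clf.predict_proba(X_emg),
                               eeg_clf.predict_proba(X_eeg)])
    return meta_clf.predict(meta_features)
```

In a real stacking setup the meta-classifier would be fitted on out-of-fold probabilities (e.g. via cross_val_predict) rather than in-sample ones, otherwise it overfits to the base learners' training behaviour.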
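CASH treats the choice of learning algorithm and its hyperparameters as a single joint optimisation problem. The sketch below is a minimal random-search instance of that idea using scikit-learn; the candidate algorithms, hyperparameter ranges, trial budget, and the grouped cross-validation used to approximate the subject-independent setting are assumptions for illustration, not the paper's configuration.

```python
# Minimal random-search sketch of Combined Algorithm Selection & Hyperparameter
# Optimisation (CASH): the algorithm and its hyperparameters are sampled jointly
# and scored under grouped (per-subject) cross-validation.
# Assumed inputs: X (windows x features), y (gesture labels), groups (subject IDs).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(seed=0)

# Each candidate pairs an algorithm with a hyperparameter sampler (illustrative space).
CANDIDATES = [
    lambda: SVC(C=10 ** rng.uniform(-2, 2), gamma="scale"),
    lambda: RandomForestClassifier(n_estimators=int(rng.integers(50, 500)),
                                   max_depth=int(rng.integers(3, 20))),
    lambda: LogisticRegression(C=10 ** rng.uniform(-2, 2), max_iter=1000),
]

def cash_search(X, y, groups, n_trials=50):
    """Return the best (pipeline, score) found by jointly sampling an algorithm
    and its hyperparameters, scored with leave-subjects-out cross-validation."""
    best_model, best_score = None, -np.inf
    cv = GroupKFold(n_splits=5)  # folds never mix subjects -> subject-independent estimate
    for _ in range(n_trials):
        sampler = CANDIDATES[rng.integers(len(CANDIDATES))]
        model = make_pipeline(StandardScaler(), sampler())
        score = cross_val_score(model, X, y, groups=groups, cv=cv).mean()
        if score > best_score:
            best_model, best_score = model, score
    return best_model, best_score
```

Grouping the folds by subject ID is what distinguishes the subject-independent evaluation from the subject-specific one: no subject's data appears in both the training and validation folds of any trial.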