Zhou Tie Hua, Li Dongsheng, Jian Zhiwei, Ding Wei, Wang Ling
Department of Computer Science and Technology, School of Computer Science, Northeast Electric Power University, Jilin 132013, China.
Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan 250000, China.
Sensors (Basel). 2025 Sep 6;25(17):5568. doi: 10.3390/s25175568.
Amid intensifying demographic aging, home robots based on intelligent technology have shown great potential for assisting the daily life of the elderly. This paper proposes a multimodal human-robot interaction system that integrates EEG signal analysis and visual perception, aiming to give home robots the ability to perceive both the intentions of the elderly and their environment. First, a channel selection strategy identifies the most discriminative electrode channels from Motor Imagery (MI) EEG signals; signal representation is then improved by combining Filter Bank Common Spatial Patterns (FBCSP), wavelet packet decomposition, and nonlinear features, and one-versus-rest Support Vector Regression (SVR) performs four-class classification. Second, the YOLOv8 model identifies objects within indoor scenes; object confidence and spatial distribution are then extracted, and scene recognition is performed with a machine learning classifier. Finally, the EEG classification results are combined with the scene recognition results to establish a scene-intention correspondence, enabling recognition of the intention-driven task types of the elderly across different home scenes. Performance evaluation shows that the proposed method attains a recognition accuracy of 83.4%, indicating good classification accuracy and practical value for multimodal perception and human-robot collaborative interaction, and providing technical support for the development of smarter, more personalized home-assistance robots.
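The one-versus-rest SVR stage described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature vectors are random stand-ins for the FBCSP, wavelet-packet, and nonlinear features, and the hyperparameters are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical setup: 4 MI classes, synthetic feature vectors standing in for
# the FBCSP + wavelet-packet + nonlinear features extracted from EEG trials.
rng = np.random.default_rng(0)
n_train, n_feat, n_classes = 200, 24, 4
X_train = rng.normal(size=(n_train, n_feat))
y_train = rng.integers(0, n_classes, size=n_train)

# One-versus-rest SVR: one regressor per class, trained on a +1/-1 target
# that marks "this class" versus "all other classes".
models = []
for c in range(n_classes):
    target = np.where(y_train == c, 1.0, -1.0)
    models.append(SVR(kernel="rbf", C=1.0).fit(X_train, target))

def predict(X):
    """Pick the class whose regressor outputs the highest score."""
    scores = np.column_stack([m.predict(X) for m in models])
    return scores.argmax(axis=1)

pred = predict(X_train[:5])
```

Casting four-class classification as four regressions lets each model produce a continuous confidence score; the argmax over scores then yields the discrete MI class.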
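The scene-recognition step, which turns object confidences and spatial distribution into a scene label, might look like the sketch below. The detections are synthetic stand-ins for YOLOv8 output, and the feature design and classifier choice are illustrative assumptions rather than the paper's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

N_OBJ_CLASSES, N_SCENES = 10, 3  # assumed sizes for illustration

def detections_to_features(dets):
    """dets: list of (class_id, confidence, cx, cy), box centers in [0, 1].
    Feature = per-class max confidence plus the mean box center, so both
    object identity and coarse spatial layout are represented."""
    conf = np.zeros(N_OBJ_CLASSES)
    centers = []
    for cls, c, cx, cy in dets:
        conf[cls] = max(conf[cls], c)
        centers.append((cx, cy))
    center = np.mean(centers, axis=0) if centers else np.array([0.5, 0.5])
    return np.concatenate([conf, center])

# Synthetic training set: random detections labelled with a random scene id.
rng = np.random.default_rng(1)
X = np.stack([
    detections_to_features([
        (int(rng.integers(N_OBJ_CLASSES)), float(rng.random()),
         float(rng.random()), float(rng.random()))
        for _ in range(5)
    ])
    for _ in range(60)
])
y = rng.integers(0, N_SCENES, size=60)
clf = LogisticRegression(max_iter=500).fit(X, y)
scene = int(clf.predict(X[:1])[0])
```

With real detector output, the same feature-building function would be fed the per-image boxes and confidences before classification.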
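The final fusion step, establishing a scene-intention correspondence, can be sketched as a lookup from (recognized scene, decoded MI class) to a task type. The class names, scene labels, and mapping entries here are hypothetical placeholders; the paper's actual correspondence table is not reproduced.

```python
# Hypothetical MI class labels and scene-intention table (illustrative only).
MI_CLASSES = ["left_hand", "right_hand", "feet", "tongue"]

SCENE_TASKS = {
    ("kitchen", "left_hand"): "fetch_water",
    ("kitchen", "right_hand"): "heat_meal",
    ("living_room", "left_hand"): "turn_on_tv",
    ("bedroom", "feet"): "adjust_bed",
}

def infer_task(scene: str, mi_class_idx: int) -> str:
    """Map (scene, MI class) to a task; fall back to asking the user
    for confirmation when no entry exists for the pair."""
    key = (scene, MI_CLASSES[mi_class_idx])
    return SCENE_TASKS.get(key, "await_confirmation")

task = infer_task("kitchen", 1)  # "right_hand" imagined in the kitchen
```

Keeping the correspondence in a table rather than a learned model makes the scene-conditioned meaning of each MI class easy to audit and personalize per user.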