
A multimodal human-robot sign language interaction framework applied in social robots.

Authors

Li Jie, Zhong Junpei, Wang Ning

Affiliations

School of Artificial Intelligence, Chongqing Technology and Business University, Chongqing, China.

Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China.

Publication information

Front Neurosci. 2023 Apr 11;17:1168888. doi: 10.3389/fnins.2023.1168888. eCollection 2023.

Abstract

Deaf-mutes face many difficulties in daily spoken-language interactions with hearing people. Sign language is an important means of expression and communication for deaf-mutes, so breaking the communication barrier between the deaf-mute and hearing communities is significant for facilitating their integration into society. To help them integrate into social life better, we propose a multimodal Chinese sign language (CSL) gesture interaction framework based on social robots. The CSL gesture information, including both static and dynamic gestures, is captured by two sensors of different modalities: a wearable Myo armband and a Leap Motion sensor, which collect human arm surface electromyography (sEMG) signals and hand 3D vectors, respectively. The two modalities of gesture data are preprocessed and fused before being sent to the classifier, improving recognition accuracy and reducing the network's processing time. Since the inputs to the proposed framework are temporal gesture sequences, a long short-term memory (LSTM) recurrent neural network is used to classify them. Comparative experiments are performed on a NAO robot to test our method. The results show that our method effectively improves CSL gesture recognition accuracy and has potential applications in a variety of gesture interaction scenarios beyond social robots.
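The pipeline the abstract describes can be sketched minimally: per-frame feature-level fusion of the two modalities (concatenating the sEMG channels with the Leap Motion hand vectors), followed by an LSTM run over the fused sequence and a softmax over gesture classes. The dimensions below (8 sEMG channels, five 3D fingertip vectors, 16 hidden units, 10 gesture classes, 30 frames) are illustrative assumptions, not the paper's actual configuration, and the single-cell numpy LSTM stands in for whatever network the authors trained.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    # One LSTM cell step: the four gates are computed jointly from the
    # current input x and the previous hidden state h.
    H = h.size
    z = W @ x + U @ h + b                      # shape (4H,)
    i = 1 / (1 + np.exp(-z[:H]))               # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))            # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))          # output gate
    g = np.tanh(z[3*H:])                       # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def classify_gesture(semg_seq, leap_seq, W, U, b, W_out):
    # Feature-level fusion: concatenate the two modalities frame by
    # frame, run the fused sequence through the LSTM, and classify
    # the final hidden state with a softmax.
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for s, l in zip(semg_seq, leap_seq):
        x = np.concatenate([s, l])
        h, c = lstm_step(x, h, c, W, U, b)
    logits = W_out @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()                         # class probabilities

rng = np.random.default_rng(0)
T, H, C = 30, 16, 10                           # frames, hidden size, classes
D = 8 + 15                                     # 8 sEMG channels + 5 fingertips x 3D
W = rng.standard_normal((4 * H, D)) * 0.1
U = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)
W_out = rng.standard_normal((C, H)) * 0.1

semg_seq = rng.standard_normal((T, 8))         # simulated Myo frames
leap_seq = rng.standard_normal((T, 15))        # simulated Leap Motion frames
probs = classify_gesture(semg_seq, leap_seq, W, U, b, W_out)
# probs is a distribution over the 10 assumed gesture classes
```

Fusing before the recurrent layer, as the abstract indicates, means a single network processes one joint sequence rather than two separate ones, which is what reduces the processing cost relative to late (decision-level) fusion.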


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9496/10126358/d1a5d1dda824/fnins-17-1168888-g001.jpg
