

A multimodal human-robot sign language interaction framework applied in social robots.

Authors

Li Jie, Zhong Junpei, Wang Ning

Affiliations

School of Artificial Intelligence, Chongqing Technology and Business University, Chongqing, China.

Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China.

Publication

Front Neurosci. 2023 Apr 11;17:1168888. doi: 10.3389/fnins.2023.1168888. eCollection 2023.

DOI: 10.3389/fnins.2023.1168888
PMID: 37113147
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10126358/
Abstract

Deaf-mutes face many difficulties in daily interactions with hearing people through spoken language. Sign language is an important means of expression and communication for deaf-mutes, so breaking the communication barrier between the deaf-mute and hearing communities is significant for facilitating their integration into society. To help them integrate into social life, we propose a multimodal Chinese sign language (CSL) gesture interaction framework based on social robots. CSL gesture information, including both static and dynamic gestures, is captured by two sensors of different modalities: a wearable Myo armband collects surface electromyography (sEMG) signals from the arm, and a Leap Motion sensor collects 3D hand vectors. The two modalities of gesture data are preprocessed and fused before being sent to the classifier, which improves recognition accuracy and reduces the network's processing time. Since the inputs to the proposed framework are temporal gesture sequences, a long short-term memory (LSTM) recurrent neural network is used to classify them. Comparative experiments are performed on an NAO robot to test our method. Moreover, our method effectively improves CSL gesture recognition accuracy and has potential applications in a variety of gesture interaction scenarios beyond social robots.
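The preprocessing-and-fusion step described in the abstract — aligning the two sensor streams to a common frame count, normalizing each modality, and concatenating features per frame before the LSTM classifier — can be sketched as follows. This is an illustrative sketch, not the authors' code: the sensor rates, channel counts (8 sEMG channels for the Myo, 15 values per Leap Motion frame), and the 50-frame target length are assumptions made for the example.

```python
def resample(seq, n_frames):
    """Linearly interpolate a multichannel sequence to n_frames timesteps."""
    m = len(seq)
    out = []
    for i in range(n_frames):
        pos = i * (m - 1) / (n_frames - 1) if n_frames > 1 else 0.0
        lo = int(pos)
        hi = min(lo + 1, m - 1)
        frac = pos - lo
        out.append([a + frac * (b - a) for a, b in zip(seq[lo], seq[hi])])
    return out

def zscore(seq):
    """Per-channel z-score normalization across the whole sequence."""
    n, c = len(seq), len(seq[0])
    means = [sum(f[j] for f in seq) / n for j in range(c)]
    stds = []
    for j in range(c):
        var = sum((f[j] - means[j]) ** 2 for f in seq) / n
        stds.append(var ** 0.5 or 1.0)  # guard against constant channels
    return [[(f[j] - means[j]) / stds[j] for j in range(c)] for f in seq]

def fuse(emg_seq, leap_seq, n_frames=50):
    """Align both modalities to a common frame count, normalize each,
    and concatenate features per frame (feature-level fusion)."""
    emg = zscore(resample(emg_seq, n_frames))
    leap = zscore(resample(leap_seq, n_frames))
    return [e + l for e, l in zip(emg, leap)]

# Toy gesture: 200 sEMG frames (8 channels, e.g. 200 Hz Myo) and 110 Leap
# frames (15 values, e.g. 5 fingertip direction vectors) fused into a
# single 50-frame, 23-dimensional sequence for the LSTM.
emg = [[float(t % 8 == ch) for ch in range(8)] for t in range(200)]
leap = [[0.01 * t] * 15 for t in range(110)]
fused = fuse(emg, leap)
print(len(fused), len(fused[0]))  # 50 frames x (8 + 15) features
```

Fusing at the feature level like this means a single recurrent network processes one aligned sequence, which is consistent with the abstract's claim of reducing processing time compared with running a separate classifier per modality.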


Figures (g001–g013):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9496/10126358/d1a5d1dda824/fnins-17-1168888-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9496/10126358/5e01bf3c1189/fnins-17-1168888-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9496/10126358/6f4dad3c1906/fnins-17-1168888-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9496/10126358/95c7c79cb7b5/fnins-17-1168888-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9496/10126358/253baec18309/fnins-17-1168888-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9496/10126358/a56c0ebfd6e3/fnins-17-1168888-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9496/10126358/622d3132e5a7/fnins-17-1168888-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9496/10126358/5ae32962f739/fnins-17-1168888-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9496/10126358/9ba9ca692fd8/fnins-17-1168888-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9496/10126358/03e61509375a/fnins-17-1168888-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9496/10126358/724fb1cf872a/fnins-17-1168888-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9496/10126358/734faacb6eda/fnins-17-1168888-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9496/10126358/030f8066bc60/fnins-17-1168888-g013.jpg

Similar Articles

1. A multimodal human-robot sign language interaction framework applied in social robots.
Front Neurosci. 2023 Apr 11;17:1168888. doi: 10.3389/fnins.2023.1168888. eCollection 2023.
2. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework.
Sensors (Basel). 2016 Apr 19;16(4):556. doi: 10.3390/s16040556.
3. A Novel Phonology- and Radical-Coded Chinese Sign Language Recognition Framework Using Accelerometer and Surface Electromyography Sensors.
Sensors (Basel). 2015 Sep 15;15(9):23303-24. doi: 10.3390/s150923303.
4. Development of a low-resource wearable continuous gesture-to-speech conversion system.
Disabil Rehabil Assist Technol. 2023 Nov;18(8):1441-1452. doi: 10.1080/17483107.2021.2022787. Epub 2022 Jan 21.
5. Dynamic Hand Gesture Recognition Based on a Leap Motion Controller and Two-Layer Bidirectional Recurrent Neural Network.
Sensors (Basel). 2020 Apr 8;20(7):2106. doi: 10.3390/s20072106.
6. UltrasonicGS: A Highly Robust Gesture and Sign Language Recognition Method Based on Ultrasonic Signals.
Sensors (Basel). 2023 Feb 5;23(4):1790. doi: 10.3390/s23041790.
7. Inferring Static Hand Poses from a Low-Cost Non-Intrusive sEMG Sensor.
Sensors (Basel). 2019 Jan 17;19(2):371. doi: 10.3390/s19020371.
8. Breaking the silence: empowering the mute-deaf community through automatic sign language decoding.
Biomed Tech (Berl). 2024 Jun 4;69(6):585-595. doi: 10.1515/bmt-2023-0245. Print 2024 Dec 17.
9. Skeleton-based Chinese sign language recognition and generation for bidirectional communication between deaf and hearing people.
Neural Netw. 2020 May;125:41-55. doi: 10.1016/j.neunet.2020.01.030. Epub 2020 Feb 6.
10. Sign Language Recognition Method Based on Palm Definition Model and Multiple Classification.
Sensors (Basel). 2022 Sep 1;22(17):6621. doi: 10.3390/s22176621.
