

Sensor Fusion of Motion-Based Sign Language Interpretation with Deep Learning.

Affiliations

School of Computer Science, The University of Nottingham Ningbo China, Ningbo 315100, China.

Department of Electronic Engineering, Keimyung University, Daegu 42601, Korea.

Publication Information

Sensors (Basel). 2020 Nov 2;20(21):6256. doi: 10.3390/s20216256.

Abstract

Sign language was designed to allow hearing-impaired people to interact with others. Nonetheless, knowledge of sign language is uncommon in society, which leads to a communication barrier with the hearing-impaired community. Many studies of sign language recognition utilizing computer vision (CV) have been conducted worldwide to reduce such barriers. However, this approach is restricted by the visual angle and highly affected by environmental factors. In addition, CV usually involves the use of machine learning, which requires the collaboration of a team of experts and the use of high-cost hardware; this increases the application cost in real-world situations. Thus, this study aims to design and implement a smart wearable American Sign Language (ASL) interpretation system using deep learning, which applies sensor fusion that "fuses" six inertial measurement units (IMUs). The IMUs are attached to all fingertips and the back of the hand to recognize sign language gestures; thus, the proposed method is not restricted by the field of view. The study reveals that this model achieves an average recognition rate of 99.81% for dynamic ASL gestures. Moreover, the proposed ASL recognition system can be further integrated with ICT and IoT technology to provide a feasible solution to assist hearing-impaired people in communicating with others and improve their quality of life.
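The abstract does not specify implementation details, but the described pipeline (windows from six 6-DOF IMUs fused into one feature representation and fed to a deep classifier) can be sketched roughly as follows. All shapes, the window length, the number of gesture classes, and the toy MLP standing in for the paper's deep model are assumptions for illustration only:

```python
import numpy as np

# Assumed setup: six IMUs (five fingertips + back of hand), each reporting
# 6 channels (3-axis accelerometer + 3-axis gyroscope).
NUM_IMUS = 6
CHANNELS_PER_IMU = 6          # ax, ay, az, gx, gy, gz
WINDOW = 50                   # samples per gesture window (assumed)
NUM_CLASSES = 10              # number of ASL gestures (assumed)

def fuse_imu_window(imu_streams):
    """Concatenate the per-IMU windows into one feature vector.

    imu_streams: list of NUM_IMUS arrays, each (WINDOW, CHANNELS_PER_IMU).
    Returns a flat vector of length WINDOW * NUM_IMUS * CHANNELS_PER_IMU.
    """
    fused = np.concatenate(imu_streams, axis=1)   # (WINDOW, 36)
    return fused.reshape(-1)                      # (WINDOW * 36,)

def classify(features, w1, b1, w2, b2):
    """Tiny MLP forward pass standing in for the paper's deep model."""
    hidden = np.maximum(0.0, features @ w1 + b1)  # ReLU hidden layer
    logits = hidden @ w2 + b2
    return int(np.argmax(logits))

# Demo with synthetic sensor data and untrained (random) weights.
rng = np.random.default_rng(0)
streams = [rng.standard_normal((WINDOW, CHANNELS_PER_IMU))
           for _ in range(NUM_IMUS)]
x = fuse_imu_window(streams)                      # length 1800
w1 = rng.standard_normal((x.size, 32)) * 0.01
b1 = np.zeros(32)
w2 = rng.standard_normal((32, NUM_CLASSES)) * 0.01
b2 = np.zeros(NUM_CLASSES)
pred = classify(x, w1, b1, w2, b2)                # a class index in [0, 10)
```

In practice the fused window would be fed to a trained recurrent or convolutional network rather than this untrained two-layer stand-in; the sketch only shows how the six sensor streams combine into a single input.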


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8327/7663682/9cc07cd5ee66/sensors-20-06256-g001.jpg
