


Sensor Fusion of Motion-Based Sign Language Interpretation with Deep Learning.

Affiliations

School of Computer Science, The University of Nottingham Ningbo China, Ningbo 315100, China.

Department of Electronic Engineering, Keimyung University, Daegu 42601, Korea.

Publication Info

Sensors (Basel). 2020 Nov 2;20(21):6256. doi: 10.3390/s20216256.

DOI: 10.3390/s20216256
PMID: 33147891
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7663682/
Abstract

Sign language was designed to allow hearing-impaired people to interact with others. Nonetheless, knowledge of sign language is uncommon in society, which leads to a communication barrier with the hearing-impaired community. Many studies of sign language recognition utilizing computer vision (CV) have been conducted worldwide to reduce such barriers. However, this approach is restricted by the visual angle and highly affected by environmental factors. In addition, CV usually involves the use of machine learning, which requires collaboration of a team of experts and utilization of high-cost hardware utilities; this increases the application cost in real-world situations. Thus, this study aims to design and implement a smart wearable American Sign Language (ASL) interpretation system using deep learning, which applies sensor fusion that "fuses" six inertial measurement units (IMUs). The IMUs are attached to all fingertips and the back of the hand to recognize sign language gestures; thus, the proposed method is not restricted by the field of view. The study reveals that this model achieves an average recognition rate of 99.81% for dynamic ASL gestures. Moreover, the proposed ASL recognition system can be further integrated with ICT and IoT technology to provide a feasible solution to assist hearing-impaired people in communicating with others and improve their quality of life.
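The abstract describes fusing six 6-DoF IMUs (five fingertips plus the back of the hand) into one feature stream for a deep-learning gesture classifier. A minimal sketch of that data layout, assuming the common concatenation form of sensor fusion — the function names, window shape, and per-IMU axis layout here are illustrative assumptions, not the authors' published code:

```python
# Illustrative sketch (not the authors' implementation): concatenating
# six synchronized 6-DoF IMU readings into one 36-value feature vector
# per timestep, the kind of sequence a recurrent model (e.g. an LSTM)
# would classify as a dynamic ASL gesture.

NUM_IMUS = 6        # five fingertips + the back of the hand
AXES_PER_IMU = 6    # 3-axis accelerometer + 3-axis gyroscope

def fuse_imu_frame(frames):
    """Fuse one synchronized reading from each IMU.

    frames: list of NUM_IMUS readings, each a list of AXES_PER_IMU floats.
    Returns a flat 36-value feature vector for a single timestep.
    """
    if len(frames) != NUM_IMUS:
        raise ValueError(f"expected {NUM_IMUS} IMU readings, got {len(frames)}")
    fused = []
    for reading in frames:
        if len(reading) != AXES_PER_IMU:
            raise ValueError(f"expected {AXES_PER_IMU} axes per IMU reading")
        fused.extend(reading)
    return fused

def fuse_gesture_window(window):
    """Fuse a whole dynamic gesture: a list of per-timestep frame sets.

    Returns a (timesteps x 36) sequence suitable as classifier input.
    """
    return [fuse_imu_frame(frames) for frames in window]
```

Because every IMU contributes to every timestep's feature vector, the fused representation does not depend on camera field of view — the property the abstract contrasts against computer-vision approaches.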


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8327/7663682/9cc07cd5ee66/sensors-20-06256-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8327/7663682/a06b9854b9ed/sensors-20-06256-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8327/7663682/c251de7cad7b/sensors-20-06256-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8327/7663682/f31b93e8c0ec/sensors-20-06256-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8327/7663682/f5207eb14121/sensors-20-06256-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8327/7663682/18ea86a56c8e/sensors-20-06256-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8327/7663682/36193ae4e738/sensors-20-06256-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8327/7663682/7dcc2e713d8f/sensors-20-06256-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8327/7663682/c561337793c3/sensors-20-06256-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8327/7663682/cc19de2b4448/sensors-20-06256-g010.jpg

Similar Articles

1. Sensor Fusion of Motion-Based Sign Language Interpretation with Deep Learning.
Sensors (Basel). 2020 Nov 2;20(21):6256. doi: 10.3390/s20216256.
2. A Novel Magnetometer Array-based wearable system for ASL gesture recognition.
Annu Int Conf IEEE Eng Med Biol Soc. 2023 Jul;2023:1-4. doi: 10.1109/EMBC40787.2023.10340708.
3. American Sign Language Recognition Using Leap Motion Controller with Machine Learning Approach.
Sensors (Basel). 2018 Oct 19;18(10):3554. doi: 10.3390/s18103554.
4. Development of a low-resource wearable continuous gesture-to-speech conversion system.
Disabil Rehabil Assist Technol. 2023 Nov;18(8):1441-1452. doi: 10.1080/17483107.2021.2022787. Epub 2022 Jan 21.
5. American Sign Language Words Recognition of Skeletal Videos Using Processed Video Driven Multi-Stacked Deep LSTM.
Sensors (Basel). 2022 Feb 11;22(4):1406. doi: 10.3390/s22041406.
6. Wearable Sensor-Based Sign Language Recognition: A Comprehensive Review.
IEEE Rev Biomed Eng. 2021;14:82-97. doi: 10.1109/RBME.2020.3019769. Epub 2021 Jan 26.
7. Deep Learning Technology to Recognize American Sign Language Alphabet.
Sensors (Basel). 2023 Sep 19;23(18):7970. doi: 10.3390/s23187970.
8. A Kinect-Based Sign Language Hand Gesture Recognition System for Hearing- and Speech-Impaired: A Pilot Study of Pakistani Sign Language.
Assist Technol. 2015 Spring;27(1):34-43. doi: 10.1080/10400435.2014.952845.
9. British Sign Language Recognition via Late Fusion of Computer Vision and Leap Motion with Transfer Learning to American Sign Language.
Sensors (Basel). 2020 Sep 9;20(18):5151. doi: 10.3390/s20185151.
10. Dataglove for Sign Language Recognition of People with Hearing and Speech Impairment via Wearable Inertial Sensors.
Sensors (Basel). 2023 Jul 26;23(15):6693. doi: 10.3390/s23156693.

Cited By

1. Intelligent sensors in assistive systems for deaf people: a comprehensive review.
PeerJ Comput Sci. 2024 Oct 24;10:e2411. doi: 10.7717/peerj-cs.2411. eCollection 2024.
2. American Sign Language Recognition and Translation Using Perception Neuron Wearable Inertial Motion Capture System.
Sensors (Basel). 2024 Jan 11;24(2):453. doi: 10.3390/s24020453.
3. A Sign Language Recognition System Applied to Deaf-Mute Medical Consultation.

References

1. American Sign Language Recognition Using Leap Motion Controller with Machine Learning Approach.
Sensors (Basel). 2018 Oct 19;18(10):3554. doi: 10.3390/s18103554.
2. A Review on Systems-Based Sensory Gloves for Sign Language Recognition: State of the Art between 2007 and 2017.
Sensors (Basel). 2018 Jul 9;18(7):2208. doi: 10.3390/s18072208.
3. Sign language recognition with the Kinect sensor based on conditional random fields.
Sensors (Basel). 2014 Dec 24;15(1):135-47. doi: 10.3390/s150100135.
4. Artificial Intelligence of Things Applied to Assistive Technology: A Systematic Literature Review.
Sensors (Basel). 2022 Nov 5;22(21):8531. doi: 10.3390/s22218531.
5. Backhand-Approach-Based American Sign Language Words Recognition Using Spatial-Temporal Body Parts and Hand Relationship Patterns.
Sensors (Basel). 2022 Jun 16;22(12):4554. doi: 10.3390/s22124554.
6. A comprehensive survey of Wireless Body Area Networks: on PHY, MAC, and Network layers solutions.
J Med Syst. 2012 Jun;36(3):1065-94. doi: 10.1007/s10916-010-9571-3. Epub 2010 Aug 19.
7. Modelling and Recognition of the Linguistic Components in American Sign Language.
Image Vis Comput. 2009 Nov 1;27(12):1826-1844. doi: 10.1016/j.imavis.2009.02.005.