

American Sign Language Recognition and Translation Using Perception Neuron Wearable Inertial Motion Capture System.

Affiliations

Faculty of Informatics, Gunma University, Kiryu 3768515, Japan.

Graduate School of Engineering, Hokkaido University, Sapporo 0608628, Japan.

Publication information

Sensors (Basel). 2024 Jan 11;24(2):453. doi: 10.3390/s24020453.

DOI: 10.3390/s24020453
PMID: 38257544
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10819960/
Abstract

Sign language is designed as a natural communication method to convey messages among the deaf community. In the study of sign language recognition through wearable sensors, the data sources are limited, and the data acquisition process is complex. This research aims to collect an American sign language dataset with a wearable inertial motion capture system and realize the recognition and end-to-end translation of sign language sentences with deep learning models. In this work, a dataset consisting of 300 commonly used sentences is gathered from 3 volunteers. In the design of the recognition network, the model mainly consists of three layers: convolutional neural network, bi-directional long short-term memory, and connectionist temporal classification. The model achieves accuracy rates of 99.07% in word-level evaluation and 97.34% in sentence-level evaluation. In the design of the translation network, the encoder-decoder structured model is mainly based on long short-term memory with global attention. The word error rate of end-to-end translation is 16.63%. The proposed method has the potential to recognize more sign language sentences with reliable inertial data from the device.
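The recognition model described in the abstract ends in a connectionist temporal classification (CTC) layer, which maps per-frame network outputs to a word sequence without frame-level alignment. As a rough illustration of the standard CTC decoding rule (collapse repeated labels, then drop blanks), here is a minimal greedy decoder; the label values and blank index are assumptions for the example, not taken from the paper.

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Standard CTC collapse: merge repeated labels, then remove blanks."""
    out = []
    prev = None
    for lab in frame_labels:
        if lab != prev and lab != blank:
            out.append(lab)
        prev = lab
    return out


# Hypothetical per-frame argmax labels (0 = blank; 1 and 2 stand for two
# sign-language words in an assumed vocabulary).
frames = [0, 1, 1, 0, 0, 2, 2, 2, 0]
print(ctc_greedy_decode(frames))  # → [1, 2]
```

In a full pipeline the per-frame labels would come from the argmax over the CNN-BiLSTM output distribution at each time step; beam search over the label probabilities typically replaces this greedy rule when higher accuracy is needed.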

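The abstract reports a word error rate (WER) of 16.63% for end-to-end translation. WER is the word-level edit distance (substitutions + deletions + insertions) divided by the number of reference words; a minimal sketch of that computation, with a made-up sentence pair rather than the paper's data:

```python
def word_error_rate(reference, hypothesis):
    """WER = word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)


# Hypothetical reference/hypothesis pair: one deleted word out of four.
print(word_error_rate("i want to eat", "i want eat"))  # → 0.25
```

A reported WER of 16.63% thus means roughly one word edit for every six reference words, averaged over the test sentences.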

Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bab4/10819960/54639b9025b9/sensors-24-00453-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bab4/10819960/40e7d37abcb8/sensors-24-00453-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bab4/10819960/fcefc4106500/sensors-24-00453-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bab4/10819960/f79d564fce98/sensors-24-00453-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bab4/10819960/3ab849647fa4/sensors-24-00453-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bab4/10819960/a6a92f700846/sensors-24-00453-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bab4/10819960/58dd11e031b8/sensors-24-00453-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bab4/10819960/c37018fa8ec2/sensors-24-00453-g008.jpg

Similar articles

1
American Sign Language Recognition and Translation Using Perception Neuron Wearable Inertial Motion Capture System.
Sensors (Basel). 2024 Jan 11;24(2):453. doi: 10.3390/s24020453.
2
American Sign Language Translation Using Wearable Inertial and Electromyography Sensors for Tracking Hand Movements and Facial Expressions.
Front Neurosci. 2022 Jul 19;16:962141. doi: 10.3389/fnins.2022.962141. eCollection 2022.
3
Dataglove for Sign Language Recognition of People with Hearing and Speech Impairment via Wearable Inertial Sensors.
Sensors (Basel). 2023 Jul 26;23(15):6693. doi: 10.3390/s23156693.
4
Assessing the need for a wearable sign language recognition device for deaf individuals: Results from a national questionnaire.
Assist Technol. 2022 Nov 2;34(6):684-697. doi: 10.1080/10400435.2021.1913259. Epub 2021 May 25.
5
Continuous Sign Language Recognition through a Context-Aware Generative Adversarial Network.
Sensors (Basel). 2021 Apr 1;21(7):2437. doi: 10.3390/s21072437.
6
Sensor Fusion of Motion-Based Sign Language Interpretation with Deep Learning.
Sensors (Basel). 2020 Nov 2;20(21):6256. doi: 10.3390/s20216256.
7
American Sign Language Recognition Using Leap Motion Controller with Machine Learning Approach.
Sensors (Basel). 2018 Oct 19;18(10):3554. doi: 10.3390/s18103554.
8
A Portable Sign Language Collection and Translation Platform with Smart Watches Using a BLSTM-Based Multi-Feature Framework.
Micromachines (Basel). 2022 Feb 20;13(2):333. doi: 10.3390/mi13020333.
9
AI enabled sign language recognition and VR space bidirectional communication using triboelectric smart glove.
Nat Commun. 2021 Sep 10;12(1):5378. doi: 10.1038/s41467-021-25637-w.
10
Sign Language Recognition Using Wearable Electronics: Implementing k-Nearest Neighbors with Dynamic Time Warping and Convolutional Neural Network Algorithms.
Sensors (Basel). 2020 Jul 11;20(14):3879. doi: 10.3390/s20143879.

Cited by

1
A novel model for expanding horizons in sign Language recognition.
Sci Rep. 2025 Jul 8;15(1):24358. doi: 10.1038/s41598-025-09643-2.
2
Toward a Recognition System for Mexican Sign Language: Arm Movement Detection.
Sensors (Basel). 2025 Jun 10;25(12):3636. doi: 10.3390/s25123636.

References

1
Textronic Glove Translating Polish Sign Language.
Sensors (Basel). 2022 Sep 8;22(18):6788. doi: 10.3390/s22186788.
2
AI enabled sign language recognition and VR space bidirectional communication using triboelectric smart glove.
Nat Commun. 2021 Sep 10;12(1):5378. doi: 10.1038/s41467-021-25637-w.
3
Sensor Fusion of Motion-Based Sign Language Interpretation with Deep Learning.
Sensors (Basel). 2020 Nov 2;20(21):6256. doi: 10.3390/s20216256.
4
Development of Sign Language Motion Recognition System for Hearing-Impaired People Using Electromyography Signal.
Sensors (Basel). 2020 Oct 14;20(20):5807. doi: 10.3390/s20205807.
5
Exploration of Chinese Sign Language Recognition Using Wearable Sensors Based on Deep Belief Net.
IEEE J Biomed Health Inform. 2020 May;24(5):1310-1320. doi: 10.1109/JBHI.2019.2941535. Epub 2019 Sep 16.
6
An Recognition-Verification Mechanism for Real-Time Chinese Sign Language Recognition Based on Multi-Information Fusion.
Sensors (Basel). 2019 May 31;19(11):2495. doi: 10.3390/s19112495.
7
Finger language recognition based on ensemble artificial neural network learning using armband EMG sensors.
Technol Health Care. 2018;26(S1):249-258. doi: 10.3233/THC-174602.
8
Long short-term memory.
Neural Comput. 1997 Nov 15;9(8):1735-80. doi: 10.1162/neco.1997.9.8.1735.