


IoT-driven smart assistive communication system for the hearing impaired with hybrid deep learning models for sign language recognition.

Authors

Maashi Mashael, Iskandar Huda G, Rizwanullah Mohammed

Affiliations

Department of Software Engineering, College of Computer and Information Sciences, King Saud University, PO Box 103786, 11543, Riyadh, Saudi Arabia.

Department of Information Systems, Faculty of Computer and Information Technology, Sana'a University, Sana'a, Yemen.

Publication

Sci Rep. 2025 Feb 20;15(1):6192. doi: 10.1038/s41598-025-89975-1.

DOI:10.1038/s41598-025-89975-1
PMID:39979401
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11842577/
Abstract

Deaf and hard-of-hearing people rely on sign language (SL) to communicate: varied hand gestures express letters, words, or sentences, helping bridge the communication gap between individuals with hearing loss and others and making it easier for them to convey their feelings. The Internet of Things (IoT) can help persons with disabilities pursue a good quality of life and participate in economic and social life. Modern developments in machine learning (ML) and computer vision (CV) have enabled SL gesture detection and interpretation. This study presents a Smart Assistive Communication System for the Hearing-Impaired using Sign Language Recognition with Hybrid Deep Learning (SACHI-SLRHDL) methodology in IoT. The SACHI-SLRHDL technique aims to assist people with hearing impairments through an intelligent solution. In the first stage, it applies bilateral filtering (BF) for image pre-processing, improving the quality of captured images by reducing noise while preserving edges. An improved MobileNetV3 model is then employed for feature extraction. Next, a convolutional neural network with a bidirectional gated recurrent unit and attention (CNN-BiGRU-A) classifier performs the sign language recognition (SLR) task. Finally, the attraction-repulsion optimization algorithm (AROA) tunes the hyperparameter values of the CNN-BiGRU-A model, yielding better classification performance. To demonstrate the merit of the SACHI-SLRHDL method, a comprehensive experimental analysis was performed on an Indian SL dataset, where the method achieved a superior accuracy of 99.19% over existing techniques.
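The pre-processing stage relies on bilateral filtering: each output pixel is a weighted average of its neighbours, with weights that decay with both spatial distance and intensity difference, so flat regions are denoised while edges survive. A minimal NumPy sketch of the idea follows; the paper does not state its kernel size or sigma values, so the parameters below are illustrative assumptions.

```python
import numpy as np

def bilateral_filter(img, d=5, sigma_color=25.0, sigma_space=5.0):
    """Edge-preserving smoothing of a 2-D grayscale image.

    d            -- side length of the square neighbourhood (odd)
    sigma_color  -- range sigma: how strongly intensity differences cut weight
    sigma_space  -- domain sigma: how strongly spatial distance cuts weight
    """
    r = d // 2
    h, w = img.shape
    padded = np.pad(img.astype(np.float64), r, mode="edge")
    # Spatial (domain) Gaussian kernel, precomputed once.
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_space ** 2))
    out = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + d, j:j + d]
            center = padded[i + r, j + r]
            # Range kernel: penalise intensity differences, preserving edges.
            rng_k = np.exp(-((patch - center) ** 2) / (2.0 * sigma_color ** 2))
            weight = spatial * rng_k
            out[i, j] = (weight * patch).sum() / weight.sum()
    return out
```

On a synthetic step image, pixels inside a flat region average with identical neighbours and stay flat, while pixels next to the step keep their own side's value because the range kernel suppresses cross-edge weights.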

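The CNN-BiGRU-A classifier pools bidirectional GRU hidden states with an attention mechanism before classification. The paper does not publish its layer dimensions, so the NumPy forward-pass sketch below only illustrates the BiGRU-with-attention pattern (the CNN feature extractor and output layer are omitted, and all weight shapes and names are assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell; weights drawn uniformly (illustrative, untrained)."""
    def __init__(self, in_dim, hid, rng):
        s = 1.0 / np.sqrt(hid)
        self.Wz = rng.uniform(-s, s, (hid, in_dim + hid))  # update gate
        self.Wr = rng.uniform(-s, s, (hid, in_dim + hid))  # reset gate
        self.Wh = rng.uniform(-s, s, (hid, in_dim + hid))  # candidate state

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)
        r = sigmoid(self.Wr @ xh)
        h_cand = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1.0 - z) * h + z * h_cand

def bigru_attention(seq, fwd, bwd, w_att):
    """Encode seq (T, in_dim) with forward+backward GRUs, then pool the
    concatenated states into one context vector via softmax attention."""
    hid = fwd.Wz.shape[0]
    hf, hb = np.zeros(hid), np.zeros(hid)
    states_f, states_b = [], []
    for x in seq:                        # forward pass over time
        hf = fwd.step(x, hf)
        states_f.append(hf)
    for x in seq[::-1]:                  # backward pass over time
        hb = bwd.step(x, hb)
        states_b.append(hb)
    H = np.concatenate([np.stack(states_f),
                        np.stack(states_b[::-1])], axis=1)  # (T, 2*hid)
    scores = H @ w_att                   # one attention energy per time step
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                 # softmax weights over time
    return alpha @ H, alpha              # context vector and weights
```

In the real model these weights would be trained end-to-end (with AROA tuning the hyperparameters); the sketch only shows how attention turns a variable-length state sequence into a fixed-size vector for the classifier.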

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d5e5/11842577/dc9159c5da9e/41598_2025_89975_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d5e5/11842577/b488ffe843d6/41598_2025_89975_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d5e5/11842577/137dee45e9ee/41598_2025_89975_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d5e5/11842577/55f82325b11c/41598_2025_89975_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d5e5/11842577/f8561e9f6686/41598_2025_89975_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d5e5/11842577/0a8942ca4449/41598_2025_89975_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d5e5/11842577/4fe4533b79d8/41598_2025_89975_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d5e5/11842577/ce71817f67eb/41598_2025_89975_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d5e5/11842577/4805a0f289dd/41598_2025_89975_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d5e5/11842577/f6786c9dee28/41598_2025_89975_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d5e5/11842577/bb16c3915f16/41598_2025_89975_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d5e5/11842577/68745f217c7e/41598_2025_89975_Fig12_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d5e5/11842577/bb71e3b58dc5/41598_2025_89975_Fig13_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d5e5/11842577/409a9e0a78e2/41598_2025_89975_Fig14_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d5e5/11842577/13929e03d4cb/41598_2025_89975_Fig15_HTML.jpg

Similar Articles

1
IoT-driven smart assistive communication system for the hearing impaired with hybrid deep learning models for sign language recognition.
Sci Rep. 2025 Feb 20;15(1):6192. doi: 10.1038/s41598-025-89975-1.
2
Innovative hand pose based sign language recognition using hybrid metaheuristic optimization algorithms with deep learning model for hearing impaired persons.
Sci Rep. 2025 Mar 18;15(1):9320. doi: 10.1038/s41598-025-93559-4.
3
Atom Search Optimization with Deep Learning Enabled Arabic Sign Language Recognition for Speaking and Hearing Disability Persons.
Healthcare (Basel). 2022 Aug 24;10(9):1606. doi: 10.3390/healthcare10091606.
4
Real-Time Arabic Sign Language Recognition Using a Hybrid Deep Learning Model.
Sensors (Basel). 2024 Jun 6;24(11):3683. doi: 10.3390/s24113683.
5
Automated sign language detection and classification using reptile search algorithm with hybrid deep learning.
Heliyon. 2023 Dec 8;10(1):e23252. doi: 10.1016/j.heliyon.2023.e23252. eCollection 2024 Jan 15.
6
Efhamni: A Deep Learning-Based Saudi Sign Language Recognition Application.
Sensors (Basel). 2024 May 14;24(10):3112. doi: 10.3390/s24103112.
7
Sensor Fusion of Motion-Based Sign Language Interpretation with Deep Learning.
Sensors (Basel). 2020 Nov 2;20(21):6256. doi: 10.3390/s20216256.
8
An assistive interface protocol for communication between visually and hearing-speech impaired persons in internet platform.
Disabil Rehabil Assist Technol. 2024 Jan;19(1):233-246. doi: 10.1080/17483107.2022.2078898. Epub 2022 May 26.
9
Toward a Vision-Based Intelligent System: A Stacked Encoded Deep Learning Framework for Sign Language Recognition.
Sensors (Basel). 2023 Nov 9;23(22):9068. doi: 10.3390/s23229068.
10
Artificial intelligence-driven ensemble deep learning models for smart monitoring of indoor activities in IoT environment for people with disabilities.
Sci Rep. 2025 Feb 5;15(1):4337. doi: 10.1038/s41598-025-88450-1.

Cited By

1
Deep computer vision with artificial intelligence based sign language recognition to assist hearing and speech-impaired individuals.
Sci Rep. 2025 Sep 2;15(1):32268. doi: 10.1038/s41598-025-09106-8.
2
Harnessing attention-driven hybrid deep learning with combined feature representation for precise sign language recognition to aid deaf and speech-impaired people.
Sci Rep. 2025 Sep 1;15(1):32255. doi: 10.1038/s41598-025-15109-2.

References

1
High-precision monitoring and prediction of mining area surface subsidence using SBAS-InSAR and CNN-BiGRU-attention model.
Sci Rep. 2024 Nov 22;14(1):28968. doi: 10.1038/s41598-024-80446-7.
2
Tea leaf disease and insect identification based on improved MobileNetV3.
Front Plant Sci. 2024 Sep 27;15:1459292. doi: 10.3389/fpls.2024.1459292. eCollection 2024.
3
A bilateral filtering-based image enhancement for Alzheimer disease classification using CNN.
PLoS One. 2024 Apr 19;19(4):e0302358. doi: 10.1371/journal.pone.0302358. eCollection 2024.
4
Integration of federated learning with IoT for smart cities applications, challenges, and solutions.
PeerJ Comput Sci. 2023 Dec 6;9:e1657. doi: 10.7717/peerj-cs.1657. eCollection 2023.
5
AI enabled sign language recognition and VR space bidirectional communication using triboelectric smart glove.
Nat Commun. 2021 Sep 10;12(1):5378. doi: 10.1038/s41467-021-25637-w.
6
Artificial Intelligence Technologies for Sign Language.
Sensors (Basel). 2021 Aug 30;21(17):5843. doi: 10.3390/s21175843.
7
Assistive technologies for severe and profound hearing loss: Beyond hearing aids and implants.
Assist Technol. 2020 Jul 3;32(4):182-193. doi: 10.1080/10400435.2018.1522524. Epub 2019 Jan 17.