

IoT-driven smart assistive communication system for the hearing impaired with hybrid deep learning models for sign language recognition.

Author Information

Maashi Mashael, Iskandar Huda G, Rizwanullah Mohammed

Affiliations

Department of Software Engineering, College of Computer and Information Sciences, King Saud University, PO Box 103786, 11543, Riyadh, Saudi Arabia.

Department of Information Systems, Faculty of Computer and Information Technology, Sana'a University, Sana'a, Yemen.

Publication Information

Sci Rep. 2025 Feb 20;15(1):6192. doi: 10.1038/s41598-025-89975-1.

Abstract

Deaf and hard-of-hearing people rely on sign language (SL) to communicate, and sign language recognition (SLR) enables others to understand them. SL uses varied hand gestures to express letters, words, or sentences; it bridges the communication gap between individuals with hearing loss and other people and makes it easier for them to convey their feelings. The Internet of Things (IoT) can help persons with disabilities pursue a good quality of life and participate in economic and social life. Recent advances in machine learning (ML) and computer vision (CV) have made automatic detection and interpretation of SL gestures possible. This study presents a Smart Assistive Communication System for the Hearing-Impaired using Sign Language Recognition with Hybrid Deep Learning (SACHI-SLRHDL) methodology in IoT. The SACHI-SLRHDL technique aims to assist people with hearing impairments through an intelligent end-to-end solution. In the first stage, it applies bilateral filtering (BF) for image pre-processing, improving the quality of captured images by reducing noise while preserving edges. An improved MobileNetV3 model is then employed for feature extraction. Next, a convolutional neural network with a bidirectional gated recurrent unit and attention (CNN-BiGRU-A) classifier performs the SLR task. Finally, the attraction-repulsion optimization algorithm (AROA) tunes the hyperparameters of the CNN-BiGRU-A model, yielding better classification performance. To demonstrate the effectiveness of the SACHI-SLRHDL method, a comprehensive experimental analysis was performed on an Indian SL dataset. The experimental validation showed a superior accuracy of 99.19% over existing techniques.
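The bilateral-filtering pre-processing step mentioned in the abstract can be sketched in plain NumPy. This is a minimal illustrative implementation, not the paper's code; the kernel radius and sigma values below are assumptions chosen for the toy example.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Naive bilateral filter: each output pixel is a weighted average of its
    neighbourhood, where the weight combines spatial closeness (sigma_s) with
    intensity similarity (sigma_r), so noise is smoothed but edges survive."""
    h, w = img.shape
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    # Spatial Gaussian kernel, computed once.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: down-weight neighbours with dissimilar intensity.
            range_k = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r**2))
            weights = spatial * range_k
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

# A noisy step edge: the filter reduces noise on the flat regions
# while keeping the transition between the two halves sharp.
gen = np.random.default_rng(0)
edge = np.hstack([np.zeros((8, 8)), np.ones((8, 8))])
noisy = edge + gen.normal(0, 0.05, edge.shape)
smoothed = bilateral_filter(noisy)
```

Because the range kernel assigns near-zero weight to neighbours across the intensity step, the edge stays sharp while the flat regions are denoised, which is the property the abstract cites as the reason for choosing BF over a plain Gaussian blur.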

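The abstract does not give AROA's update equations. As a hedged illustration of the hyperparameter-tuning idea, the sketch below implements a generic attraction-repulsion search over a toy loss surrogate; the update rule, parameter names, and the surrogate are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def attraction_repulsion_search(loss, lo, hi, n_agents=20, iters=100, seed=1):
    """Generic attraction-repulsion search: each agent is pulled toward the
    best solution seen so far and pushed away from the current worst agent,
    with a random exploration step that shrinks over the iterations."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, size=(n_agents, lo.size))
    gbest, gfit = None, np.inf
    for t in range(iters):
        fit = np.array([loss(p) for p in pop])
        i = int(fit.argmin())
        if fit[i] < gfit:                 # keep the best solution ever seen
            gbest, gfit = pop[i].copy(), float(fit[i])
        worst = pop[fit.argmax()]
        shrink = 1.0 - t / iters          # anneal exploration over time
        attract = rng.random((n_agents, 1)) * (gbest - pop)
        repel = 0.5 * shrink * rng.random((n_agents, 1)) * (pop - worst)
        noise = shrink * rng.normal(0.0, 0.1, pop.shape)
        pop = np.clip(pop + attract + repel + noise, lo, hi)
    return gbest, gfit

# Hypothetical surrogate for a validation loss over two hyperparameters
# (e.g. log-learning-rate and a dropout offset), minimised at (0.3, -0.2).
surrogate = lambda p: (p[0] - 0.3) ** 2 + (p[1] + 0.2) ** 2
best_p, best_loss = attraction_repulsion_search(
    surrogate, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
```

In the paper's setting, `loss` would be the CNN-BiGRU-A validation error as a function of its hyperparameters; here a quadratic surrogate stands in so the sketch runs in milliseconds.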

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d5e5/11842577/dc9159c5da9e/41598_2025_89975_Fig1_HTML.jpg
