

Intelligent real-life key-pixel image detection system for early Arabic sign language learners.

Authors

Alamri Faten S, Rehman Amjad, Abdullahi Sunusi Bala, Saba Tanzila

Affiliations

Department of Mathematical Sciences, College of Science, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia.

Artificial Intelligence & Data Analytics Lab (AIDA) CCIS Prince Sultan University, Riyadh, Saudi Arabia.

Publication

PeerJ Comput Sci. 2024 Jun 14;10:e2063. doi: 10.7717/peerj-cs.2063. eCollection 2024.

DOI: 10.7717/peerj-cs.2063
PMID: 38983191
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11232623/
Abstract

Lack of an effective early sign language learning framework for a hard-of-hearing population can have traumatic consequences, causing social isolation and unfair treatment in workplaces. Alphabet and digit detection methods have been the basic framework for early sign language learning but are restricted by performance and accuracy, making it difficult to detect signs in real life. This article proposes an improved sign language detection method for early sign language learners based on the You Only Look Once version 8.0 (YOLOv8) algorithm, referred to as the intelligent sign language detection system (iSDS), which exploits the power of deep learning to detect sign language-distinct features. The iSDS method could overcome the false positive rates and improve the accuracy as well as the speed of sign language detection. The proposed iSDS framework for early sign language learners consists of three basic steps: (i) image pixel processing to extract features that are underrepresented in the frame, (ii) inter-dependence pixel-based feature extraction using YOLOv8, (iii) web-based signer independence validation. The proposed iSDS enables faster response times and reduces misinterpretation and inference delay time. The iSDS achieved state-of-the-art performance of over 97% for precision, recall, and F1-score with the best mAP of 87%. The proposed iSDS method has several potential applications, including continuous sign language detection systems and intelligent web-based sign recognition systems.
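The paper's code is not reproduced on this page, but step (i) of the iSDS pipeline, processing image pixels so that features underrepresented in the frame become detectable, can be illustrated with a standard histogram-equalization pass over grayscale intensities. This is a minimal sketch for illustration only, not the authors' implementation; the function name and the choice of equalization as the enhancement transform are assumptions.

```python
def equalize(pixels, levels=256):
    """Histogram-equalize a flat list of grayscale intensities in [0, levels).

    Rarely occurring (underrepresented) intensity ranges are stretched
    across the output range, which is one common way to make weak
    features more prominent before a detector such as YOLOv8 runs.
    """
    # Count how often each intensity occurs.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1

    # Cumulative distribution function over intensities.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)

    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)  # first non-zero bin
    if n == cdf_min:
        # Uniform frame: nothing to stretch, return a copy unchanged.
        return list(pixels)

    # Map each pixel through the normalized CDF onto the full range.
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]
```

A frame whose intensities cluster in a narrow band, e.g. `[10, 20, 30]`, is spread to the full `[0, 255]` range, while a uniform frame passes through unchanged.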



Similar Articles

1. Intelligent real-life key-pixel image detection system for early Arabic sign language learners.
   PeerJ Comput Sci. 2024 Jun 14;10:e2063. doi: 10.7717/peerj-cs.2063. eCollection 2024.
2. Toward a Vision-Based Intelligent System: A Stacked Encoded Deep Learning Framework for Sign Language Recognition.
   Sensors (Basel). 2023 Nov 9;23(22):9068. doi: 10.3390/s23229068.
3. Signer-Independent Arabic Sign Language Recognition System Using Deep Learning Model.
   Sensors (Basel). 2023 Aug 14;23(16):7156. doi: 10.3390/s23167156.
4. Improved 3D-ResNet sign language recognition algorithm with enhanced hand features.
   Sci Rep. 2022 Oct 24;12(1):17812. doi: 10.1038/s41598-022-21636-z.
5. Real-Time Arabic Sign Language Recognition Using a Hybrid Deep Learning Model.
   Sensors (Basel). 2024 Jun 6;24(11):3683. doi: 10.3390/s24113683.
6. Extricating Manual and Non-Manual Features for Subunit Level Medical Sign Modelling in Automatic Sign Language Classification and Recognition.
   J Med Syst. 2017 Sep 22;41(11):175. doi: 10.1007/s10916-017-0819-z.
7. Efhamni: A Deep Learning-Based Saudi Sign Language Recognition Application.
   Sensors (Basel). 2024 May 14;24(10):3112. doi: 10.3390/s24103112.
8. Automated sign language detection and classification using reptile search algorithm with hybrid deep learning.
   Heliyon. 2023 Dec 8;10(1):e23252. doi: 10.1016/j.heliyon.2023.e23252. eCollection 2024 Jan 15.
9. Dynamic Japanese Sign Language Recognition Throw Hand Pose Estimation Using Effective Feature Extraction and Classification Approach.
   Sensors (Basel). 2024 Jan 26;24(3):826. doi: 10.3390/s24030826.
10. Light-Weight Deep Learning Techniques with Advanced Processing for Real-Time Hand Gesture Recognition.
    Sensors (Basel). 2022 Dec 20;23(1):2. doi: 10.3390/s23010002.

Cited By

1. Attention-based hybrid deep learning model with CSFOA optimization and G-TverskyUNet3+ for Arabic sign language recognition.
   Sci Rep. 2025 Jun 26;15(1):20313. doi: 10.1038/s41598-025-03560-0.

References

1. Signer-Independent Arabic Sign Language Recognition System Using Deep Learning Model.
   Sensors (Basel). 2023 Aug 14;23(16):7156. doi: 10.3390/s23167156.
2. Sign Language Recognition for Arabic Alphabets Using Transfer Learning Technique.
   Comput Intell Neurosci. 2022 Apr 22;2022:4567989. doi: 10.1155/2022/4567989. eCollection 2022.
3. American Sign Language Words Recognition of Skeletal Videos Using Processed Video Driven Multi-Stacked Deep LSTM.
   Sensors (Basel). 2022 Feb 11;22(4):1406. doi: 10.3390/s22041406.
4. ArASL: Arabic Alphabets Sign Language Dataset.
   Data Brief. 2019 Feb 23;23:103777. doi: 10.1016/j.dib.2019.103777. eCollection 2019 Apr.
5. American Sign Language Recognition Using Leap Motion Controller with Machine Learning Approach.
   Sensors (Basel). 2018 Oct 19;18(10):3554. doi: 10.3390/s18103554.