An Exploration into Human-Computer Interaction: Hand Gesture Recognition Management in a Challenging Environment

Authors

Chang Victor, Eniola Rahman Olamide, Golightly Lewis, Xu Qianwen Ariel

Affiliations

Aston University, Aston St, Birmingham B4 7ET, UK.

Teesside University, Campus Heart, Southfield Rd, Middlesbrough TS1 3BX, UK.

Publication

SN Comput Sci. 2023;4(5):441. doi: 10.1007/s42979-023-01751-y. Epub 2023 Jun 12.

DOI: 10.1007/s42979-023-01751-y
PMID: 37334142
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10258789/
Abstract

Scientists are developing hand gesture recognition systems to improve authentic, efficient, and effortless human-computer interactions without additional gadgets, particularly for the speech-impaired community, which relies on hand gestures as their only mode of communication. Unfortunately, the speech-impaired community has been underrepresented in the majority of human-computer interaction research, such as natural language processing and other automation fields, which makes it more difficult for them to interact with systems and people through these advanced systems. This system's algorithm is in two phases. The first step is Region of Interest Segmentation, based on the color space segmentation technique, with a pre-set color range that will separate pixels of the region of interest (the hand) from the background (pixels not in the desired area of interest). The system's second phase is inputting the segmented images into a Convolutional Neural Network (CNN) model for image categorization. For image training, we utilized the Python Keras package. The system proved the need for image segmentation in hand gesture recognition. The performance of the optimal model is 58 percent, which is about 10 percent higher than the accuracy obtained without image segmentation.
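The two-phase pipeline described in the abstract (colour-range segmentation, then CNN classification) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the channel bounds below are hypothetical placeholder values, and the actual paper uses a pre-set colour range of its own, with a Keras CNN for phase 2.

```python
import numpy as np

# Phase 1 (sketch): region-of-interest segmentation by a preset colour range.
# These per-channel bounds are illustrative assumptions, not the paper's values.
LOWER = np.array([0, 48, 80])     # hypothetical lower bound per channel
UPPER = np.array([20, 255, 255])  # hypothetical upper bound per channel

def segment_roi(image: np.ndarray) -> np.ndarray:
    """Zero out background pixels, keeping only the region of interest.

    image: H x W x 3 array. A pixel is kept (treated as 'hand') only if
    every channel falls inside [LOWER, UPPER]; all other pixels are
    treated as background and set to zero.
    """
    mask = np.all((image >= LOWER) & (image <= UPPER), axis=-1)
    return image * mask[..., None]

# Phase 2 would feed the segmented images into a small Keras CNN
# (e.g. Conv2D -> MaxPooling2D -> Flatten -> Dense softmax) trained
# to classify each segmented image into a gesture category.
```

Segmenting first means the classifier only ever sees hand pixels, which is the step the abstract credits with the roughly 10-point accuracy gain.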


[Figures 1–19: see the full text at https://pmc.ncbi.nlm.nih.gov/articles/PMC10258789/]

Similar Articles

1. An Exploration into Human-Computer Interaction: Hand Gesture Recognition Management in a Challenging Environment. SN Comput Sci. 2023;4(5):441. doi: 10.1007/s42979-023-01751-y. Epub 2023 Jun 12.
2. Real-Time Hand Gesture Recognition Using Fine-Tuned Convolutional Neural Network. Sensors (Basel). 2022 Jan 18;22(3):706. doi: 10.3390/s22030706.
3. A deep convolutional neural network model for hand gesture recognition in 2D near-infrared images. Biomed Phys Eng Express. 2021 Jul 9;7(5). doi: 10.1088/2057-1976/ac0d91.
4. A Deep Learning-Based End-to-End Composite System for Hand Detection and Gesture Recognition. Sensors (Basel). 2019 Nov 30;19(23):5282. doi: 10.3390/s19235282.
5. Hand Gesture Recognition based on Surface Electromyography using Convolutional Neural Network with Transfer Learning Method. IEEE J Biomed Health Inform. 2021 Apr;25(4):1292-1304. doi: 10.1109/JBHI.2020.3009383. Epub 2021 Apr 6.
6. Air-GR: An Over-the-Air Handwritten Character Recognition System Based on Coordinate Correction YOLOv5 Algorithm and LGR-CNN. Sensors (Basel). 2023 Jan 28;23(3):1464. doi: 10.3390/s23031464.
7. Enhancement of surgical hand gesture recognition using a capsule network for a contactless interface in the operating room. Comput Methods Programs Biomed. 2020 Jul;190:105385. doi: 10.1016/j.cmpb.2020.105385. Epub 2020 Feb 6.
8. From Signal to Image: Enabling Fine-Grained Gesture Recognition with Commercial Wi-Fi Devices. Sensors (Basel). 2018 Sep 18;18(9):3142. doi: 10.3390/s18093142.
9. CNN Deep Learning with Wavelet Image Fusion of CCD RGB-IR and Depth-Grayscale Sensor Data for Hand Gesture Intention Recognition. Sensors (Basel). 2022 Jan 21;22(3):803. doi: 10.3390/s22030803.
10. Gesture-Based Human Machine Interaction Using RCNNs in Limited Computation Power Devices. Sensors (Basel). 2021 Dec 8;21(24):8202. doi: 10.3390/s21248202.

Cited By

1. Local pattern aware 3D video swin transformer with masked autoencoding for realtime augmented reality gesture interaction. Sci Rep. 2025 Jul 1;15(1):21318. doi: 10.1038/s41598-025-05935-9.
2. Performance Measurement of Gesture-Based Human-Machine Interfaces Within eXtended Reality Head-Mounted Displays. Sensors (Basel). 2025 Apr 30;25(9):2831. doi: 10.3390/s25092831.
3. Experimental Setup for Evaluating Depth Sensors in Augmented Reality Technologies Used in Medical Devices. Sensors (Basel). 2024 Jun 17;24(12):3916. doi: 10.3390/s24123916.

References

1. Hand Gesture Recognition Based on Computer Vision: A Review of Techniques. J Imaging. 2020 Jul 23;6(8):73. doi: 10.3390/jimaging6080073.
2. The Bitonic Filter: Linear Filtering in an Edge-Preserving Morphological Framework. IEEE Trans Image Process. 2016 Nov;25(11):5199-211. doi: 10.1109/TIP.2016.2605302. Epub 2016 Sep 2.
3. A gesture-based tool for sterile browsing of radiology images. J Am Med Inform Assoc. 2008 May-Jun;15(3):321-3. doi: 10.1197/jamia.M241.
4. Skin segmentation using color pixel classification: analysis and comparison. IEEE Trans Pattern Anal Mach Intell. 2005 Jan;27(1):148-54. doi: 10.1109/TPAMI.2005.17.