
Discriminative exemplar coding for sign language recognition with Kinect.

Publication Info

IEEE Trans Cybern. 2013 Oct;43(5):1418-28. doi: 10.1109/TCYB.2013.2265337. Epub 2013 Jun 19.

DOI: 10.1109/TCYB.2013.2265337
PMID: 23797313
Abstract

Sign language recognition is a growing research area in the field of computer vision. A challenge within it is to model various signs, varying with time resolution, visual manual appearance, and so on. In this paper, we propose a discriminative exemplar coding (DEC) approach, as well as utilizing Kinect sensor, to model various signs. The proposed DEC method can be summarized as three steps. First, a quantity of class-specific candidate exemplars are learned from sign language videos in each sign category by considering their discrimination. Then, every video of all signs is described as a set of similarities between frames within it and the candidate exemplars. Instead of simply using a heuristic distance measure, the similarities are decided by a set of exemplar-based classifiers through the multiple instance learning, in which a positive (or negative) video is treated as a positive (or negative) bag and those frames similar to the given exemplar in Euclidean space as instances. Finally, we formulate the selection of the most discriminative exemplars into a framework and simultaneously produce a sign video classifier to recognize sign. To evaluate our method, we collect an American sign language dataset, which includes approximately 2000 phrases, while each phrase is captured by Kinect sensor with color, depth, and skeleton information. Experimental results on our dataset demonstrate the feasibility and effectiveness of the proposed approach for sign language recognition.
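The encoding step described in the abstract — representing a video as a set of similarities between its frames and the candidate exemplars — can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the Gaussian similarity kernel and the max-over-frames pooling are assumptions standing in for the exemplar-based classifiers that the paper learns via multiple instance learning.

```python
import numpy as np

def encode_video(frames, exemplars, gamma=0.5):
    """Describe a sign video as a vector of similarities to candidate exemplars.

    frames:    (n_frames, d) array of per-frame features
    exemplars: (n_exemplars, d) array of exemplar frame features
    Returns an (n_exemplars,) descriptor. In the bag-of-frames (MIL) view,
    each entry is the best similarity any frame in the video achieves with
    that exemplar -- a simple stand-in for a learned exemplar classifier.
    """
    # Pairwise squared Euclidean distances between every frame and exemplar
    d2 = ((frames[:, None, :] - exemplars[None, :, :]) ** 2).sum(axis=-1)
    sims = np.exp(-gamma * d2)   # Gaussian similarity (assumed kernel choice)
    return sims.max(axis=0)      # max over frames, one score per exemplar
```

A video that contains a frame close to a discriminative exemplar of its class gets a high score in the corresponding dimension, so the resulting descriptors can feed an ordinary classifier for sign recognition.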


Similar Articles

1. Discriminative exemplar coding for sign language recognition with Kinect. IEEE Trans Cybern. 2013 Oct;43(5):1418-28. doi: 10.1109/TCYB.2013.2265337. Epub 2013 Jun 19.
2. Enhanced computer vision with Microsoft Kinect sensor: a review. IEEE Trans Cybern. 2013 Oct;43(5):1318-34. doi: 10.1109/TCYB.2013.2265378. Epub 2013 Jun 25.
3. Real-time posture reconstruction for Microsoft Kinect. IEEE Trans Cybern. 2013 Oct;43(5):1357-69. doi: 10.1109/TCYB.2013.2275945. Epub 2013 Aug 22.
4. Multilevel depth and image fusion for human activity detection. IEEE Trans Cybern. 2013 Oct;43(5):1383-94. doi: 10.1109/TCYB.2013.2276433. Epub 2013 Aug 27.
5. Rank preserving sparse learning for Kinect based scene classification. IEEE Trans Cybern. 2013 Oct;43(5):1406-17. doi: 10.1109/TCYB.2013.2264285. Epub 2013 Jul 3.
6. Free-viewpoint video of human actors using multiple handheld Kinects. IEEE Trans Cybern. 2013 Oct;43(5):1370-82. doi: 10.1109/TCYB.2013.2272321. Epub 2013 Jul 22.
7. Real-time multiple human perception with color-depth cameras on a mobile robot. IEEE Trans Cybern. 2013 Oct;43(5):1429-41. doi: 10.1109/TCYB.2013.2275291. Epub 2013 Aug 21.
8. Depth-aware image seam carving. IEEE Trans Cybern. 2013 Oct;43(5):1453-61. doi: 10.1109/TCYB.2013.2273270. Epub 2013 Jul 22.
9. 3-D rigid body tracking using vision and depth sensors. IEEE Trans Cybern. 2013 Oct;43(5):1395-405. doi: 10.1109/TCYB.2013.2272735. Epub 2013 Aug 15.
10. Accurate estimation of human body orientation from RGB-D sensors. IEEE Trans Cybern. 2013 Oct;43(5):1442-52. doi: 10.1109/TCYB.2013.2272636. Epub 2013 Jul 23.

Cited By

1. Hypertuned Deep Convolutional Neural Network for Sign Language Recognition. Comput Intell Neurosci. 2022 Apr 30;2022:1450822. doi: 10.1155/2022/1450822. eCollection 2022.
2. Recognition of Non-Manual Content in Continuous Japanese Sign Language. Sensors (Basel). 2020 Oct 1;20(19):5621. doi: 10.3390/s20195621.
3. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework. Sensors (Basel). 2016 Apr 19;16(4):556. doi: 10.3390/s16040556.