

Poster Abstract: 3D Activity Localization With Multiple Sensors.

Authors

Li Xinyu, Zhang Yanyi, Zhang Jianyu, Chen Shuhong, Gu Yue, Farneth Richard A, Marsic Ivan, Burd Randall S

Affiliations

Rutgers University, Piscataway, New Jersey.

Children's National Medical Center, Washington, District of Columbia.

Publication

IPSN. 2017 Apr;2017:297-298. doi: 10.1145/3055031.3055057.

DOI: 10.1145/3055031.3055057
PMID: 30393785
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6214452/
Abstract

We present a deep learning framework for fast 3D activity localization and tracking in a dynamic and crowded real-world setting. Our training approach reverses the traditional activity localization approach, which first estimates the possible location of activities and then predicts their occurrence. Instead, we first trained a deep convolutional neural network for activity recognition using depth video and RFID data as input, and then used the activation maps of the network to locate the recognized activity in 3D space. Our system achieved around 20 cm average localization error (in a 4 × 5 m room), which is comparable to Kinect's body-skeleton tracking error (10-20 cm), but our system tracks activities rather than people's locations as Kinect does.
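The localize-from-activation idea described here resembles class activation mapping (CAM): the recognition network's last convolutional feature maps are combined with the classifier weights for the recognized class, and the resulting spatial activation peak gives the location estimate. The sketch below is a minimal illustration of that pattern in NumPy, not the authors' implementation; all shapes, weights, and the grid-to-room scaling are invented assumptions, and the true system works in 3D using depth data rather than the 2D grid shown.

```python
# Illustrative sketch (not the authors' code): locating a recognized
# activity from a CNN's activation map, in the spirit of class
# activation mapping (CAM). All shapes and weights are made up.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the last conv layer's output on one depth-video frame:
# C feature channels over an H x W spatial grid.
C, H, W = 64, 12, 16
feature_maps = rng.random((C, H, W))

# Stand-in for the final linear classifier: one weight vector per
# activity class over the C channels (after global average pooling).
num_classes = 10
class_weights = rng.random((num_classes, C))

def activation_map(features, weights, class_idx):
    """Weighted sum of feature channels -> spatial activation map."""
    # Contract the (C,) class weights against the (C, H, W) features.
    return np.tensordot(weights[class_idx], features, axes=([0], [0]))

def localize(features, weights, class_idx, room_size=(4.0, 5.0)):
    """Map the activation peak from grid cells to room coordinates.

    room_size is (depth_m, width_m); the abstract reports a 4 x 5 m room.
    """
    cam = activation_map(features, weights, class_idx)
    y, x = np.unravel_index(np.argmax(cam), cam.shape)
    # Scale grid-cell centers to meters (a crude assumption; the real
    # system recovers a 3D position using the depth channel).
    return ((y + 0.5) / cam.shape[0] * room_size[0],
            (x + 0.5) / cam.shape[1] * room_size[1])

recognized_class = 3  # index of the recognized activity (illustrative)
print(localize(feature_maps, class_weights, recognized_class))
```

The sketch collapses localization to a 2D floor-plan estimate for brevity; extending the same argmax-over-activation step with per-pixel depth values would yield the 3D estimate the abstract describes.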


Similar Articles

1. Poster Abstract: 3D Activity Localization With Multiple Sensors.
   IPSN. 2017 Apr;2017:297-298. doi: 10.1145/3055031.3055057.
2. Deep Learning for RFID-Based Activity Recognition.
   Proc Int Conf Embed Netw Sens Syst. 2016 Nov;2016:164-175. doi: 10.1145/2994551.2994569.
3. Online Process Phase Detection Using Multimodal Deep Learning.
   Ubiquitous Comput Electron Mob Commun Conf (UEMCON) IEEE Annu. 2016 Oct;2016. doi: 10.1109/UEMCON.2016.7777912. Epub 2016 Dec 12.
4. Accuracy of Kinect's skeleton tracking for upper body rehabilitation applications.
   Disabil Rehabil Assist Technol. 2014 Jul;9(4):344-52. doi: 10.3109/17483107.2013.805825. Epub 2013 Jun 20.
5. Analysis of Movement and Activities of Handball Players Using Deep Neural Networks.
   J Imaging. 2023 Apr 13;9(4):80. doi: 10.3390/jimaging9040080.
6. Detection, segmentation, and 3D pose estimation of surgical tools using convolutional neural networks and algebraic geometry.
   Med Image Anal. 2021 May;70:101994. doi: 10.1016/j.media.2021.101994. Epub 2021 Feb 7.
7. Transfer of Learning from Vision to Touch: A Hybrid Deep Convolutional Neural Network for Visuo-Tactile 3D Object Recognition.
   Sensors (Basel). 2020 Dec 27;21(1):113. doi: 10.3390/s21010113.
8. 3DLRA: An RFID 3D Indoor Localization Method Based on Deep Learning.
   Sensors (Basel). 2020 May 11;20(9):2731. doi: 10.3390/s20092731.
9. A Deep Sequence Learning Framework for Action Recognition in Small-Scale Depth Video Dataset.
   Sensors (Basel). 2022 Sep 9;22(18):6841. doi: 10.3390/s22186841.
10. A deep learning framework for automatic detection of arbitrarily shaped fiducial markers in intrafraction fluoroscopic images.
    Med Phys. 2019 May;46(5):2286-2297. doi: 10.1002/mp.13519. Epub 2019 Apr 15.
