


Robust Estimation and Optimized Transmission of 3D Feature Points for Computer Vision on Mobile Communication Network.

Affiliations

Department of Electronic Materials Engineering, Kwangwoon University, Seoul 01897, Korea.

Publication Information

Sensors (Basel). 2022 Nov 7;22(21):8563. doi: 10.3390/s22218563.

DOI: 10.3390/s22218563
PMID: 36366264
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9655592/
Abstract

Due to the amount of transmitted data and the security of personal or private information in wireless communication, there are cases where the information for a multimedia service should be directly transferred from the user's device to the cloud server without the captured original images. This paper proposes a new method to generate 3D (dimensional) keypoints based on a user's mobile device with a commercial RGB camera in a distributed computing environment such as a cloud server. The images are captured with a moving camera and 2D keypoints are extracted from them. After executing feature extraction between continuous frames, disparities are calculated between frames using the relationships between matched keypoints. The physical distance of the baseline is estimated by using the motion information of the camera, and the actual distance is calculated by using the calculated disparity and the estimated baseline. Finally, 3D keypoints are generated by adding the extracted 2D keypoints to the calculated distance. A keypoint-based scene change method is proposed as well. Due to the existing similarity between continuous frames captured from a camera, not all 3D keypoints are transferred and stored, only the new ones. Compared with the ground truth of the TUM dataset, the average error of the estimated 3D keypoints was measured as 5.98 mm, which shows that the proposed method has relatively good performance considering that it uses a commercial RGB camera on a mobile device. Furthermore, the transferred 3D keypoints were decreased to about 73.6%.
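The pipeline described above (match 2D keypoints between consecutive frames, compute disparity, estimate the physical baseline from camera motion, then recover distance) reduces to the standard stereo relation Z = f·B/d followed by pinhole back-projection. The following is a minimal sketch of that step only, assuming pinhole intrinsics and an externally estimated baseline; the function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def keypoints_to_3d(pts_a, pts_b, baseline_m, f_px, cx, cy):
    """Lift matched 2D keypoints to 3D via the stereo depth relation.

    pts_a, pts_b : (N, 2) arrays of matched pixel coordinates (u, v)
                   from two frames of a moving camera.
    baseline_m   : physical camera displacement between the two frames,
                   e.g. estimated from the device's motion information.
    f_px, cx, cy : pinhole intrinsics (focal length in pixels, principal point).
    """
    disparity = np.abs(pts_a[:, 0] - pts_b[:, 0])   # horizontal disparity in pixels
    disparity = np.maximum(disparity, 1e-6)         # guard against division by zero
    z = f_px * baseline_m / disparity               # depth: Z = f * B / d
    x = (pts_a[:, 0] - cx) * z / f_px               # back-project pixel to camera coords
    y = (pts_a[:, 1] - cy) * z / f_px
    return np.stack([x, y, z], axis=1)              # (N, 3) 3D keypoints
```

For example, with f = 500 px and a 0.1 m baseline, a 10-pixel disparity yields a depth of 5 m, consistent with Z = f·B/d.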

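The abstract also notes that, because consecutive frames are similar, only new 3D keypoints are transferred and stored. One way to sketch that novelty filtering is a nearest-descriptor test against the set already on the server; this is a hedged illustration — the paper's actual matching criterion and thresholds are not given here, and all names are hypothetical.

```python
import numpy as np

def select_new_keypoints(stored_desc, frame_desc, match_thresh=0.7, scene_thresh=0.3):
    """Keep only keypoints whose descriptors have no close match in the stored set.

    stored_desc : (M, D) descriptors already transferred to the server.
    frame_desc  : (N, D) descriptors from the current frame.
    Returns (new_indices, scene_changed): indices of descriptors to transfer,
    and a flag raised when most keypoints are unmatched (a scene change).
    """
    if len(stored_desc) == 0:
        return np.arange(len(frame_desc)), True
    # pairwise L2 distances between current-frame and stored descriptors
    d = np.linalg.norm(frame_desc[:, None, :] - stored_desc[None, :, :], axis=2)
    matched = d.min(axis=1) < match_thresh          # close enough to an existing keypoint
    new_indices = np.nonzero(~matched)[0]           # only these would be transferred
    scene_changed = matched.mean() < scene_thresh   # few matches => new scene
    return new_indices, scene_changed
```

Transferring only `new_indices` rather than every keypoint is what produces the bandwidth reduction the abstract reports.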

Figures (sensors-22-08563, g001–g017):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1cf6/9655592/1f7c59d4030c/sensors-22-08563-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1cf6/9655592/85085b2d4ec7/sensors-22-08563-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1cf6/9655592/0682651709cc/sensors-22-08563-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1cf6/9655592/219f86728860/sensors-22-08563-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1cf6/9655592/bf1f048aa7e0/sensors-22-08563-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1cf6/9655592/06e12d8a4849/sensors-22-08563-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1cf6/9655592/d3ab3bafac76/sensors-22-08563-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1cf6/9655592/c2bc7326b758/sensors-22-08563-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1cf6/9655592/ae36192a9051/sensors-22-08563-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1cf6/9655592/cc8f18d95854/sensors-22-08563-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1cf6/9655592/984e003652d2/sensors-22-08563-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1cf6/9655592/ee1defcac823/sensors-22-08563-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1cf6/9655592/64e22bd9603e/sensors-22-08563-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1cf6/9655592/d03d5bd5ef79/sensors-22-08563-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1cf6/9655592/35c2586886ab/sensors-22-08563-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1cf6/9655592/1dae5422a4de/sensors-22-08563-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1cf6/9655592/e8792785e862/sensors-22-08563-g017.jpg

Similar Articles

1. Robust Estimation and Optimized Transmission of 3D Feature Points for Computer Vision on Mobile Communication Network. Sensors (Basel). 2022 Nov 7;22(21):8563. doi: 10.3390/s22218563.
2. Head Pose Estimation through Keypoints Matching between Reconstructed 3D Face Model and 2D Image. Sensors (Basel). 2021 Mar 6;21(5):1841. doi: 10.3390/s21051841.
3. Unsupervised distribution-aware keypoints generation from 3D point clouds. Neural Netw. 2024 May;173:106158. doi: 10.1016/j.neunet.2024.106158. Epub 2024 Feb 7.
4. Noise-Robust 3D Pose Estimation Using Appearance Similarity Based on the Distributed Multiple Views. Sensors (Basel). 2024 Aug 30;24(17):5645. doi: 10.3390/s24175645.
5. Automatic landmark identification for surgical 3d-navigation - A proposed method for marker-free dental surgical navigation systems. Biomed Tech (Berl). 2022 Jul 4;67(5):411-417. doi: 10.1515/bmt-2021-0307. Print 2022 Oct 26.
6. Dynamic detection of three-dimensional crop phenotypes based on a consumer-grade RGB-D camera. Front Plant Sci. 2023 Jan 27;14:1097725. doi: 10.3389/fpls.2023.1097725. eCollection 2023.
7. Robust Video Stabilization Using Particle Keypoint Update and l₁-Optimized Camera Path. Sensors (Basel). 2017 Feb 10;17(2):337. doi: 10.3390/s17020337.
8. SARN: Shifted Attention Regression Network for 3D Hand Pose Estimation. Bioengineering (Basel). 2023 Jan 17;10(2):126. doi: 10.3390/bioengineering10020126.
9. Estimating Ground Reaction Forces from Two-Dimensional Pose Data: A Biomechanics-Based Comparison of AlphaPose, BlazePose, and OpenPose. Sensors (Basel). 2022 Dec 21;23(1):78. doi: 10.3390/s23010078.
10. Similarity Graph-Based Camera Tracking for Effective 3D Geometry Reconstruction with Mobile RGB-D Camera. Sensors (Basel). 2019 Nov 9;19(22):4897. doi: 10.3390/s19224897.

Cited By

1. FGCN: Image-Fused Point Cloud Semantic Segmentation with Fusion Graph Convolutional Network. Sensors (Basel). 2023 Oct 9;23(19):8338. doi: 10.3390/s23198338.
