

HFR-Video-Based Stereo Correspondence Using High Synchronous Short-Term Velocities.

Affiliation

Smart Robotics Laboratory, Graduate School of Advanced Science and Engineering, Hiroshima University, 1-4-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8527, Japan.

Publication

Sensors (Basel). 2023 Apr 26;23(9):4285. doi: 10.3390/s23094285.

DOI: 10.3390/s23094285
PMID: 37177489
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10181470/
Abstract

This study focuses on solving the correspondence problem of multiple moving objects with similar appearances in stereoscopic videos. Specifically, we address the multi-camera correspondence problem by taking into account the pixel-level and feature-level stereo correspondences, and object-level cross-camera multiple object correspondence. Most correspondence algorithms rely on texture and color information of the stereo images, making it challenging to distinguish between similar-looking objects, such as ballet dancers and corporate employees wearing similar dresses, or farm animals such as chickens, ducks, and cows. However, by leveraging the low latency and high synchronization of high-speed cameras, we can perceive the phase and frequency differences between the movements of similar-looking objects. In this study, we propose using short-term velocities (STVs) of objects as motion features to determine the correspondence of multiple objects by calculating the similarity of STVs. To validate our approach, we conducted stereo correspondence experiments using markers attached to a metronome and natural hand movements to simulate simple and complex motion scenes. The experimental results demonstrate that our method achieved good performance in stereo correspondence.

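The abstract's core idea is to match objects across synchronized high-speed cameras by comparing their short-term velocities (STVs) rather than appearance. A minimal sketch of that idea, assuming each object is already tracked as a sequence of 2D image positions; cosine similarity and greedy one-to-one assignment are illustrative choices here, not necessarily the paper's exact formulation:

```python
import numpy as np

def short_term_velocities(positions, window=5):
    """Per-frame finite-difference velocities over the trailing `window` frames.

    positions: (T, 2) array of tracked image coordinates for one object.
    Returns a (window, 2) array of recent velocity vectors (the STV).
    """
    v = np.diff(np.asarray(positions, dtype=float), axis=0)
    return v[-window:]

def stv_similarity(stv_a, stv_b):
    """Cosine similarity between two flattened STV sequences."""
    a, b = stv_a.ravel(), stv_b.ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def match_objects(stvs_left, stvs_right):
    """Greedy one-to-one cross-camera assignment by decreasing STV similarity."""
    sim = np.array([[stv_similarity(a, b) for b in stvs_right] for a in stvs_left])
    pairs = sorted(
        ((sim[i, j], i, j) for i in range(sim.shape[0]) for j in range(sim.shape[1])),
        reverse=True,
    )
    matches, used_l, used_r = {}, set(), set()
    for _, i, j in pairs:
        if i not in used_l and j not in used_r:
            matches[i] = j
            used_l.add(i)
            used_r.add(j)
    return matches
```

Because the cameras are tightly synchronized, two views of the same object share the same motion phase and frequency, so their STVs stay highly correlated even when the objects look identical in texture and color; that is what lets velocity features succeed where appearance features fail.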

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d48/10181470/9c4c18a092fb/sensors-23-04285-g021.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d48/10181470/8558bd04ae45/sensors-23-04285-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d48/10181470/8befc7c3852e/sensors-23-04285-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d48/10181470/26228db49bf9/sensors-23-04285-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d48/10181470/0f3c8e591aea/sensors-23-04285-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d48/10181470/4888ebc2aa59/sensors-23-04285-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d48/10181470/c2645690e646/sensors-23-04285-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d48/10181470/6d5aa2dc6fdc/sensors-23-04285-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d48/10181470/cf0467bcf106/sensors-23-04285-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d48/10181470/9915818d4761/sensors-23-04285-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d48/10181470/a9000daadcaf/sensors-23-04285-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d48/10181470/1fbe513e3411/sensors-23-04285-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d48/10181470/f5c42c406437/sensors-23-04285-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d48/10181470/b4763071d420/sensors-23-04285-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d48/10181470/d4a3fe600b20/sensors-23-04285-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d48/10181470/1eccfcb567b9/sensors-23-04285-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d48/10181470/8cf69a185502/sensors-23-04285-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d48/10181470/ed6af94e96fa/sensors-23-04285-g017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d48/10181470/d690d1e1e89c/sensors-23-04285-g018.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d48/10181470/07854a43b315/sensors-23-04285-g019.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3d48/10181470/871c264b7cfa/sensors-23-04285-g020.jpg

Similar Articles

1. HFR-Video-Based Stereo Correspondence Using High Synchronous Short-Term Velocities. Sensors (Basel). 2023 Apr 26;23(9):4285. doi: 10.3390/s23094285.
2. Stereo geometry from 3D ego-motion streams. IEEE Trans Syst Man Cybern B Cybern. 2003;33(2):308-23. doi: 10.1109/TSMCB.2002.805698.
3. Correspondence-free activity analysis and scene modeling in multiple camera views. IEEE Trans Pattern Anal Mach Intell. 2010 Jan;32(1):56-71. doi: 10.1109/TPAMI.2008.241.
4. Joint Stereo Video Deblurring, Scene Flow Estimation and Moving Object Segmentation. IEEE Trans Image Process. 2019 Oct 11. doi: 10.1109/TIP.2019.2945867.
5. Stereoscopic video deblurring transformer. Sci Rep. 2024 Jun 21;14(1):14342. doi: 10.1038/s41598-024-63860-9.
6. Efficient hybrid tree-based stereo matching with applications to postcapture image refocusing. IEEE Trans Image Process. 2014 Aug;23(8):3428-42. doi: 10.1109/TIP.2014.2329389. Epub 2014 Jun 5.
7. Monocular Stereo Measurement Using High-Speed Catadioptric Tracking. Sensors (Basel). 2017 Aug 9;17(8):1839. doi: 10.3390/s17081839.
8. Robust active stereo vision using Kullback-Leibler divergence. IEEE Trans Pattern Anal Mach Intell. 2012 Mar;34(3):548-63. doi: 10.1109/TPAMI.2011.162.
9. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems. Sci Rep. 2017 Jan 12;7:40703. doi: 10.1038/srep40703.
10. A wavelet-based multiresolution approach to solve the stereo correspondence problem using mutual information. IEEE Trans Syst Man Cybern B Cybern. 2007 Aug;37(4):1009-14. doi: 10.1109/tsmcb.2007.890584.

References Cited in This Article

1. A novel factor graph-based optimization technique for stereo correspondence estimation. Sci Rep. 2022 Sep 16;12(1):15613. doi: 10.1038/s41598-022-19336-9.
2. Motion Similarity Evaluation between Human and a Tri-Co Robot during Real-Time Imitation with a Trajectory Dynamic Time Warping Model. Sensors (Basel). 2022 Mar 2;22(5):1968. doi: 10.3390/s22051968.
3. Does stereoscopic imaging improve the memorization of medical imaging by neurosurgeons? Experience of a single institution. Neurosurg Rev. 2022 Apr;45(2):1371-1381. doi: 10.1007/s10143-021-01623-0. Epub 2021 Sep 22.
4. Multi-Target Multi-Camera Tracking of Vehicles Using Metadata-Aided Re-ID and Trajectory-Based Camera Link Model. IEEE Trans Image Process. 2021;30:5198-5210. doi: 10.1109/TIP.2021.3078124. Epub 2021 May 25.
5. A DWT-SVD based robust digital watermarking for medical image security. Forensic Sci Int. 2021 Mar;320:110691. doi: 10.1016/j.forsciint.2021.110691. Epub 2021 Jan 13.
6. Histogram of Oriented Gradient-Based Fusion of Features for Human Action Recognition in Action Video Sequences. Sensors (Basel). 2020 Dec 18;20(24):7299. doi: 10.3390/s20247299.
7. A Survey on Deep Learning Techniques for Stereo-Based Depth Estimation. IEEE Trans Pattern Anal Mach Intell. 2022 Apr;44(4):1738-1764. doi: 10.1109/TPAMI.2020.3032602. Epub 2022 Mar 4.
8. Going Deeper in Spiking Neural Networks: VGG and Residual Architectures. Front Neurosci. 2019 Mar 7;13:95. doi: 10.3389/fnins.2019.00095. eCollection 2019.
9. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Trans Pattern Anal Mach Intell. 2015 Sep;37(9):1904-16. doi: 10.1109/TPAMI.2015.2389824.