Detail Preserving Coarse-to-Fine Matching for Stereo Matching and Optical Flow.

Author Information

Deng Yong, Xiao Jimin, Zhou Steven Zhiying, Feng Jiashi

Publication Information

IEEE Trans Image Process. 2021;30:5835-5847. doi: 10.1109/TIP.2021.3088635. Epub 2021 Jun 24.

DOI: 10.1109/TIP.2021.3088635
PMID: 34138709
Abstract

The Coarse-To-Fine (CTF) matching scheme has been widely applied to reduce computational complexity and matching ambiguity in stereo matching and optical flow tasks by converting image pairs into multi-scale representations and performing matching from coarse to fine levels. Despite its efficiency, it suffers from several weaknesses, such as tending to blur the edges and miss small structures like thin bars and holes. We find that the pixels of small structures and edges are often assigned with wrong disparity/flow in the upsampling process of the CTF framework, introducing errors to the fine levels and leading to such weaknesses. We observe that these wrong disparity/flow values can be avoided if we select the best-matched value among their neighborhood, which inspires us to propose a novel differentiable Neighbor-Search Upsampling (NSU) module. The NSU module first estimates the matching scores and then selects the best-matched disparity/flow for each pixel from its neighbors. It effectively preserves finer structure details by exploiting the information from the finer level while upsampling the disparity/flow. The proposed module can be a drop-in replacement of the naive upsampling in the CTF matching framework and allows the neural networks to be trained end-to-end. By integrating the proposed NSU module into a baseline CTF matching network, we design our Detail Preserving Coarse-To-Fine (DPCTF) matching network. Comprehensive experiments demonstrate that our DPCTF can boost performances for both stereo matching and optical flow tasks. Notably, our DPCTF achieves new state-of-the-art performances for both tasks - it outperforms the competitive baseline (Bi3D) by 28.8% (from 0.73 to 0.52) on EPE of the FlyingThings3D stereo dataset, and ranks first in KITTI flow 2012 benchmark. The code is available at https://github.com/Deng-Y/DPCTF.

Similar Articles

1. Detail Preserving Coarse-to-Fine Matching for Stereo Matching and Optical Flow.
   IEEE Trans Image Process. 2021;30:5835-5847. doi: 10.1109/TIP.2021.3088635. Epub 2021 Jun 24.
2. An efficient and accurate multi-level cascaded recurrent network for stereo matching.
   Sci Rep. 2024 Apr 8;14(1):8148. doi: 10.1038/s41598-024-57321-6.
3. Deep Stereo Matching With Hysteresis Attention and Supervised Cost Volume Construction.
   IEEE Trans Image Process. 2022;31:812-822. doi: 10.1109/TIP.2021.3135485. Epub 2022 Jan 4.
4. A stereo matching algorithm based on the improved PSMNet.
   PLoS One. 2021 Aug 19;16(8):e0251657. doi: 10.1371/journal.pone.0251657. eCollection 2021.
5. Parallax attention stereo matching network based on the improved group-wise correlation stereo network.
   PLoS One. 2022 Feb 9;17(2):e0263735. doi: 10.1371/journal.pone.0263735. eCollection 2022.
6. Self-Supervised Multiscale Adversarial Regression Network for Stereo Disparity Estimation.
   IEEE Trans Cybern. 2021 Oct;51(10):4770-4783. doi: 10.1109/TCYB.2020.2999492. Epub 2021 Oct 12.
7. Segment-Based Disparity Refinement With Occlusion Handling for Stereo Matching.
   IEEE Trans Image Process. 2019 Aug;28(8):3885-3897. doi: 10.1109/TIP.2019.2903318. Epub 2019 Mar 6.
8. High-accuracy stereo matching based on adaptive ground control points.
   IEEE Trans Image Process. 2015 Apr;24(4):1412-23. doi: 10.1109/TIP.2015.2393054. Epub 2015 Jan 15.
9. Fundamental Principles on Learning New Features for Effective Dense Matching.
   IEEE Trans Image Process. 2018 Feb;27(2):822-836. doi: 10.1109/TIP.2017.2752370. Epub 2017 Sep 14.
10. EGOF-Net: epipolar guided optical flow network for unrectified stereo matching.
    Opt Express. 2021 Oct 11;29(21):33874-33889. doi: 10.1364/OE.440241.