

Depth from a Motion Algorithm and a Hardware Architecture for Smart Cameras

Affiliations

Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE), Tonantzintla 72840, Mexico.

Institut Pascal, Université Clermont Auvergne (UCA), 63178 Clermont-Ferrand, France.

Publication Information

Sensors (Basel). 2018 Dec 23;19(1):53. doi: 10.3390/s19010053.

DOI: 10.3390/s19010053
PMID: 30583606
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6338951/
Abstract

Applications such as autonomous navigation, robot vision, and autonomous flying require depth map information of a scene. Depth can be estimated by using a single moving camera (depth from motion). However, the traditional depth from motion algorithms have low processing speeds and high hardware requirements that limit the embedded capabilities. In this work, we propose a hardware architecture for depth from motion that consists of a flow/depth transformation and a new optical flow algorithm. Our optical flow formulation consists in an extension of the stereo matching problem. A pixel-parallel/window-parallel approach where a correlation function based on the sum of absolute difference (SAD) computes the optical flow is proposed. Further, in order to improve the SAD, the curl of the intensity gradient as a preprocessing step is proposed. Experimental results demonstrated that it is possible to reach higher accuracy (90% of accuracy) compared with previous Field Programmable Gate Array (FPGA)-based optical flow algorithms. For the depth estimation, our algorithm delivers dense maps with motion and depth information on all image pixels, with a processing speed up to 128 times faster than that of previous work, making it possible to achieve high performance in the context of embedded applications.
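The pixel-parallel/window-parallel SAD matching described in the abstract can be illustrated with a minimal software sketch. This is an assumption-laden illustration, not the authors' FPGA design: the window radius, search range, and function name are invented here, and the paper parallelizes this computation per pixel and per window in hardware rather than looping as below.

```python
import numpy as np

def sad_flow(prev, curr, y, x, win=3, search=4):
    """Estimate the optical-flow vector at pixel (y, x) by minimizing the
    sum of absolute differences (SAD) between a reference window in the
    previous frame and candidate windows in the current frame."""
    h, w = prev.shape
    # Reference window centered at (y, x) in the previous frame.
    ref = prev[y - win:y + win + 1, x - win:x + win + 1].astype(np.int32)
    best, flow = np.inf, (0, 0)
    # Exhaustive search over displacements within +/- search pixels.
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy - win < 0 or yy + win >= h or xx - win < 0 or xx + win >= w:
                continue  # candidate window would fall outside the image
            cand = curr[yy - win:yy + win + 1,
                        xx - win:xx + win + 1].astype(np.int32)
            sad = int(np.abs(ref - cand).sum())
            if sad < best:
                best, flow = sad, (dy, dx)
    return flow
```

For a frame pair where the scene shifts rigidly, the minimizing displacement recovers the shift; the curl-of-gradient preprocessing the abstract mentions would be applied to both frames before this matching step.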

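The flow/depth transformation mentioned in the abstract treats the moving camera like a stereo pair: under the simplifying assumption of pure lateral camera translation, the flow magnitude plays the role of stereo disparity, so depth follows the familiar triangulation relation Z = f·T/|u|. A minimal sketch under that assumption (function name and parameters are illustrative, not taken from the paper):

```python
import numpy as np

def depth_from_flow(flow_mag, focal_px, baseline_m, eps=1e-6):
    """Convert per-pixel optical-flow magnitude (pixels) to depth (meters)
    for a camera translating laterally by baseline_m between frames:
        Z = f * T / |u|
    i.e., the stereo-disparity relation with the translation as baseline.
    eps guards against division by zero in textureless/static regions."""
    flow_mag = np.asarray(flow_mag, dtype=np.float64)
    return focal_px * baseline_m / np.maximum(flow_mag, eps)
```

Because depth is inversely proportional to flow magnitude, fast-moving (near) pixels map to small depths and slow-moving (far) pixels to large ones, which is why a dense flow field yields the dense depth map the abstract describes.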

Figures 1–17 (sensors-19-00053-g001 through sensors-19-00053-g017): available with the PMC full text linked above.

Similar Articles

1. Depth from a Motion Algorithm and a Hardware Architecture for Smart Cameras. Sensors (Basel). 2018 Dec 23;19(1):53. doi: 10.3390/s19010053.
2. Motion-Based Object Location on a Smart Image Sensor Using On-Pixel Memory. Sensors (Basel). 2022 Aug 30;22(17):6538. doi: 10.3390/s22176538.
3. SAD-based stereo vision machine on a System-on-Programmable-Chip (SoPC). Sensors (Basel). 2013 Mar 4;13(3):3014-27. doi: 10.3390/s130303014.
4. Real-Time FPGA Accelerated Stereo Matching for Temporal Statistical Pattern Projector Systems. Sensors (Basel). 2021 Sep 26;21(19):6435. doi: 10.3390/s21196435.
5. Pix2Pix-Based Monocular Depth Estimation for Drones with Optical Flow on AirSim. Sensors (Basel). 2022 Mar 8;22(6):2097. doi: 10.3390/s22062097.
6. A Hardware-Friendly Optical Flow-Based Time-to-Collision Estimation Algorithm. Sensors (Basel). 2019 Feb 16;19(4):807. doi: 10.3390/s19040807.
7. A Selective Change Driven System for High-Speed Motion Analysis. Sensors (Basel). 2016 Nov 8;16(11):1875. doi: 10.3390/s16111875.
8. Parallel Optimisation and Implementation of a Real-Time Back Projection (BP) Algorithm for SAR Based on FPGA. Sensors (Basel). 2022 Mar 16;22(6):2292. doi: 10.3390/s22062292.
9. A New Parallel Intelligence Based Light Field Dataset for Depth Refinement and Scene Flow Estimation. Sensors (Basel). 2022 Dec 4;22(23):9483. doi: 10.3390/s22239483.
10. Depth Estimation Using a Sliding Camera. IEEE Trans Image Process. 2016 Feb;25(2):726-39. doi: 10.1109/TIP.2015.2507984. Epub 2015 Dec 11.

Cited By

1. Multi-Scale Spatio-Temporal Feature Extraction and Depth Estimation from Sequences by Ordinal Classification. Sensors (Basel). 2020 Apr 1;20(7):1979. doi: 10.3390/s20071979.
2. High Level 3D Structure Extraction from a Single Image Using a CNN-Based Approach. Sensors (Basel). 2019 Jan 29;19(3):563. doi: 10.3390/s19030563.

References

1. Deep Ordinal Regression Network for Monocular Depth Estimation. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2018 Jun;2018:2002-2011. doi: 10.1109/CVPR.2018.00214. Epub 2018 Dec 17.
2. Efficient smart CMOS camera based on FPGAs oriented to embedded image processing. Sensors (Basel). 2011;11(3):2282-303. doi: 10.3390/s110302282. Epub 2011 Feb 24.
3. Bio-inspired motion detection in an FPGA-based smart camera module. Bioinspir Biomim. 2009 Mar;4(1):015008. doi: 10.1088/1748-3182/4/1/015008. Epub 2009 Mar 4.