
SIFT flow: dense correspondence across scenes and its applications.

Affiliation

Microsoft Research New England, Microsoft Corp., One Memorial Drive, Cambridge, MA 02142, USA.

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2011 May;33(5):978-94. doi: 10.1109/TPAMI.2010.147.

DOI: 10.1109/TPAMI.2010.147
PMID: 20714019
Abstract

While image alignment has been studied in different areas of computer vision for decades, aligning images depicting different scenes remains a challenging problem. Analogous to optical flow, where an image is aligned to its temporally adjacent frame, we propose SIFT flow, a method to align an image to its nearest neighbors in a large image corpus containing a variety of scenes. The SIFT flow algorithm consists of matching densely sampled, pixelwise SIFT features between two images while preserving spatial discontinuities. The SIFT features allow robust matching across different scene/object appearances, whereas the discontinuity-preserving spatial model allows matching of objects located at different parts of the scene. Experiments show that the proposed approach robustly aligns complex scene pairs containing significant spatial differences. Based on SIFT flow, we propose an alignment-based large database framework for image analysis and synthesis, where image information is transferred from the nearest neighbors to a query image according to the dense scene correspondence. This framework is demonstrated through concrete applications such as motion field prediction from a single image, motion synthesis via object transfer, satellite image registration, and face recognition.
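The abstract's core idea — densely sample a per-pixel descriptor in both images, then find, for each pixel, the offset that best matches the other image — can be illustrated with a toy sketch. This is not the authors' implementation: it substitutes a crude gradient-orientation histogram for a true SIFT descriptor and evaluates only the brute-force data term, omitting the discontinuity-preserving smoothness term and the belief-propagation optimization that define SIFT flow in the paper. The function names (`dense_descriptors`, `sift_flow_naive`) and the search `radius` are illustrative choices, not the paper's.

```python
import numpy as np

def dense_descriptors(img, bins=8):
    """Per-pixel gradient-orientation histogram over a 3x3 neighborhood --
    a crude stand-in for the densely sampled SIFT descriptors in the paper."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    bin_idx = np.minimum((ang / (2 * np.pi) * bins).astype(int), bins - 1)
    H, W = img.shape
    desc = np.zeros((H, W, bins))
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ys = np.clip(np.arange(H) + dy, 0, H - 1)
            xs = np.clip(np.arange(W) + dx, 0, W - 1)
            sub_bin = bin_idx[np.ix_(ys, xs)]   # neighbor's orientation bin
            sub_mag = mag[np.ix_(ys, xs)]       # neighbor's gradient magnitude
            np.add.at(desc,
                      (np.arange(H)[:, None], np.arange(W)[None, :], sub_bin),
                      sub_mag)
    return desc

def sift_flow_naive(desc1, desc2, radius=2):
    """Data term only: for each pixel of image 1, pick the offset (dy, dx)
    within `radius` minimizing L1 descriptor distance to image 2. The actual
    paper couples this with a discontinuity-preserving smoothness term and
    solves the resulting MRF with belief propagation."""
    H, W, _ = desc1.shape
    flow = np.zeros((H, W, 2), dtype=int)
    best = np.full((H, W), np.inf)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ys = np.clip(np.arange(H) + dy, 0, H - 1)
            xs = np.clip(np.arange(W) + dx, 0, W - 1)
            cost = np.abs(desc1 - desc2[np.ix_(ys, xs)]).sum(axis=2)
            better = cost < best
            best = np.where(better, cost, best)
            flow[better] = (dy, dx)
    return flow
```

On a synthetic pair where the second image is the first shifted by one pixel, interior pixels recover the offset (0, 1) exactly; the point of the full algorithm is that descriptor matching (unlike raw intensities in optical flow) survives the much larger appearance changes between different scenes.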


Similar Articles

1
SIFT flow: dense correspondence across scenes and its applications.
IEEE Trans Pattern Anal Mach Intell. 2011 May;33(5):978-94. doi: 10.1109/TPAMI.2010.147.
2
Video registration using dynamic textures.
IEEE Trans Pattern Anal Mach Intell. 2011 Jan;33(1):158-71. doi: 10.1109/TPAMI.2010.61.
3
Observing human-object interactions: using spatial and functional compatibility for recognition.
IEEE Trans Pattern Anal Mach Intell. 2009 Oct;31(10):1775-89. doi: 10.1109/TPAMI.2009.83.
4
Detecting abandoned objects with a moving camera.
IEEE Trans Image Process. 2010 Aug;19(8):2201-10. doi: 10.1109/TIP.2010.2045714. Epub 2010 Apr 5.
5
Nonparametric Scene Parsing via Label Transfer.
IEEE Trans Pattern Anal Mach Intell. 2011 Dec;33(12):2368-82. doi: 10.1109/TPAMI.2011.131. Epub 2011 Jun 30.
6
Alignment of continuous video onto 3D point clouds.
IEEE Trans Pattern Anal Mach Intell. 2005 Aug;27(8):1305-18. doi: 10.1109/TPAMI.2005.152.
7
Robust object matching for persistent tracking with heterogeneous features.
IEEE Trans Pattern Anal Mach Intell. 2007 May;29(5):824-39. doi: 10.1109/TPAMI.2007.1052.
8
Registration of challenging image pairs: initialization, estimation, and decision.
IEEE Trans Pattern Anal Mach Intell. 2007 Nov;29(11):1973-89. doi: 10.1109/TPAMI.2007.1116.
9
Error analysis of robust optical flow estimation by least median of squares methods for the varying illumination model.
IEEE Trans Pattern Anal Mach Intell. 2006 Sep;28(9):1418-35. doi: 10.1109/TPAMI.2006.185.
10
Robust multiperson tracking from a mobile platform.
IEEE Trans Pattern Anal Mach Intell. 2009 Oct;31(10):1831-46. doi: 10.1109/TPAMI.2009.109.

Cited By

1
Efficient cell-wide mapping of mitochondria in electron microscopic volumes using webKnossos.
Cell Rep Methods. 2025 Feb 24;5(2):100989. doi: 10.1016/j.crmeth.2025.100989.
2
Tracing the Chromatin: From 3C to Live-Cell Imaging.
Chem Biomed Imaging. 2024 Jun 25;2(10):659-682. doi: 10.1021/cbmi.4c00033. eCollection 2024 Oct 28.
3
A Depth Awareness and Learnable Feature Fusion Network for Enhanced Geometric Perception in Semantic Correspondence.
Sensors (Basel). 2024 Oct 17;24(20):6680. doi: 10.3390/s24206680.
4
SC-AOF: A Sliding Camera and Asymmetric Optical-Flow-Based Blending Method for Image Stitching.
Sensors (Basel). 2024 Jun 21;24(13):4035. doi: 10.3390/s24134035.
5
Discriminative context-aware network for camouflaged object detection.
Front Artif Intell. 2024 Mar 27;7:1347898. doi: 10.3389/frai.2024.1347898. eCollection 2024.
6
Object-Oriented and Visual-Based Localization in Urban Environments.
Sensors (Basel). 2024 Mar 21;24(6):2014. doi: 10.3390/s24062014.
7
Determining dense velocity fields for fluid images based on affine motion.
PeerJ Comput Sci. 2024 Feb 16;10:e1810. doi: 10.7717/peerj-cs.1810. eCollection 2024.
8
Ref-MEF: Reference-Guided Flexible Gated Image Reconstruction Network for Multi-Exposure Image Fusion.
Entropy (Basel). 2024 Feb 3;26(2):139. doi: 10.3390/e26020139.
9
A revised conceptual framework for mouse vomeronasal pumping and stimulus sampling.
Curr Biol. 2024 Mar 25;34(6):1206-1221.e6. doi: 10.1016/j.cub.2024.01.036. Epub 2024 Feb 5.
10
LMFD: lightweight multi-feature descriptors for image stitching.
Sci Rep. 2023 Nov 30;13(1):21162. doi: 10.1038/s41598-023-48432-7.