

Presentation of 3D Scenes Through Video Example.

Publication Information

IEEE Trans Vis Comput Graph. 2017 Sep;23(9):2096-2107. doi: 10.1109/TVCG.2016.2608828. Epub 2016 Sep 13.

DOI: 10.1109/TVCG.2016.2608828
PMID: 28113668
Abstract

Using synthetic videos to present a 3D scene is a common requirement for architects, designers, engineers, or Cultural Heritage professionals; however, it is usually time-consuming and, in order to obtain high-quality results, requires the support of a film maker or computer-animation expert. We introduce an alternative approach that takes the 3D scene of interest and an example video as input, and automatically produces a video of the input scene that resembles the given video example. In other words, our algorithm allows the user to "replicate" an existing video on a different 3D scene. We build on the intuition that a video sequence of a static environment is strongly characterized by its optical flow, or, in other words, that two videos are similar if their optical flows are similar. We therefore recast the problem as producing a video of the input scene whose optical flow is similar to the optical flow of the input video. Our intuition is supported by a user study specifically designed to verify this statement. We have successfully tested our approach on several scenes and input videos, some of which are reported in the accompanying material of this paper.
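The abstract's central idea — two videos of static scenes are similar when their dense optical-flow fields are similar — can be illustrated with a toy comparison. The sketch below is not the paper's method: the function name and the mean endpoint-error metric are illustrative assumptions; the paper's exact similarity measure is defined in its full text, not in the abstract. In practice the flow fields would come from an optical-flow estimator run on each frame pair; here they are hand-built arrays.

```python
import numpy as np

def flow_similarity(flow_a: np.ndarray, flow_b: np.ndarray) -> float:
    """Mean endpoint error between two dense optical-flow fields.

    Each field has shape (H, W, 2): a per-pixel (dx, dy) motion vector.
    Lower values mean the two videos' apparent motion is more alike.
    Illustrative metric only; not the paper's actual formulation.
    """
    if flow_a.shape != flow_b.shape:
        raise ValueError("flow fields must have identical shapes")
    # Per-pixel Euclidean distance between motion vectors, averaged.
    return float(np.linalg.norm(flow_a - flow_b, axis=-1).mean())

# Toy flow fields on a 4x4 frame: a uniform rightward pan vs. a downward tilt.
pan = np.tile(np.array([1.0, 0.0]), (4, 4, 1))   # dx=1, dy=0 everywhere
tilt = np.tile(np.array([0.0, 1.0]), (4, 4, 1))  # dx=0, dy=1 everywhere

print(flow_similarity(pan, pan))   # 0.0 -- identical motion
print(flow_similarity(pan, tilt))  # sqrt(2) -- clearly different motion
```

Under this framing, "replicating" an example video on a new scene amounts to searching for a camera trajectory in the new scene that minimizes such a flow-distance to the example, frame by frame.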


Similar Articles

1
Presentation of 3D Scenes Through Video Example.
IEEE Trans Vis Comput Graph. 2017 Sep;23(9):2096-2107. doi: 10.1109/TVCG.2016.2608828. Epub 2016 Sep 13.
2
Scene Adaptive Online Surveillance Video Synopsis via Dynamic Tube Rearrangement Using Octree.
IEEE Trans Image Process. 2021;30:8318-8331. doi: 10.1109/TIP.2021.3114986. Epub 2021 Oct 5.
3
Detection and removal of fence occlusions in an image using a video of the static/dynamic scene.
J Opt Soc Am A Opt Image Sci Vis. 2016 Oct 1;33(10):1917-1930. doi: 10.1364/JOSAA.33.001917.
4
Video Salient Object Detection via Fully Convolutional Networks.
IEEE Trans Image Process. 2018;27(1):38-49. doi: 10.1109/TIP.2017.2754941.
5
High resolution animated scenes from stills.
IEEE Trans Vis Comput Graph. 2007 May-Jun;13(3):562-568. doi: 10.1109/TVCG.2007.1005.
6
A Data-Driven Approach for Furniture and Indoor Scene Colorization.
IEEE Trans Vis Comput Graph. 2018 Sep;24(9):2473-2486. doi: 10.1109/TVCG.2017.2753255. Epub 2017 Sep 18.
7
Contextualized videos: combining videos with environment models to support situational understanding.
IEEE Trans Vis Comput Graph. 2007 Nov-Dec;13(6):1568-75. doi: 10.1109/TVCG.2007.70544.
8
Video-based crowd synthesis.
IEEE Trans Vis Comput Graph. 2013 Nov;19(11):1935-47. doi: 10.1109/TVCG.2012.317.
9
Text2NeRF: Text-Driven 3D Scene Generation With Neural Radiance Fields.
IEEE Trans Vis Comput Graph. 2024 Dec;30(12):7749-7762. doi: 10.1109/TVCG.2024.3361502. Epub 2024 Oct 28.
10
A Diffusion and Clustering-Based Approach for Finding Coherent Motions and Understanding Crowd Scenes.
IEEE Trans Image Process. 2016 Apr;25(4):1674-87. doi: 10.1109/TIP.2016.2531281. Epub 2016 Feb 18.