
A Robust 3D-2D Interactive Tool for Scene Segmentation and Annotation.

Author Information

Nguyen Thanh Duc, Hua Binh-Son, Yu Lap-Fai, Yeung Sai Kit

Publication Information

IEEE Trans Vis Comput Graph. 2018 Dec;24(12):3005-3018. doi: 10.1109/TVCG.2017.2772238. Epub 2017 Nov 20.

Abstract

Recent advances in 3D acquisition devices have enabled large-scale acquisition of 3D scene data. Such data, if completely and well annotated, can serve as useful ingredients for a wide spectrum of computer vision and graphics tasks such as data-driven modeling, scene understanding, and object detection and recognition. However, annotating a vast amount of 3D scene data remains challenging due to the lack of an effective tool and/or the complexity of 3D scenes (e.g., clutter, varying illumination conditions). This paper aims to build a robust annotation tool that effectively and conveniently enables the segmentation and annotation of massive 3D data. Our tool works by coupling 2D and 3D information via an interactive framework, through which users can provide high-level semantic annotation for objects. In our experiments, a typical indoor scene could be well segmented and annotated with the tool in less than 30 minutes, as opposed to a few hours if done manually. Along with the tool, we created a dataset of over a hundred 3D scenes with complete annotations. Both the tool and dataset will be available at http://scenenn.net.
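The abstract does not spell out how the 2D and 3D views are coupled, but the core mechanism in tools of this kind is typically a camera projection that maps 3D points into each RGB-D frame, so a label painted on a 2D image region can be propagated to the 3D geometry it covers (and 3D segment labels can be rendered back into 2D). The sketch below illustrates that general idea only; it is not the authors' implementation, and the function names (project_points, propagate_2d_label) are hypothetical.

```python
# A minimal sketch of one common way to couple 2D and 3D annotations:
# project 3D points into a frame with known camera pose/intrinsics,
# then transfer a user-drawn 2D mask label to the points it covers.
# This is an illustrative assumption, not the paper's actual pipeline.
import numpy as np

def project_points(points_world, pose_w2c, K):
    """Project Nx3 world-space points into pixel coordinates.

    pose_w2c: 4x4 world-to-camera extrinsic matrix.
    K:        3x3 pinhole intrinsic matrix.
    Returns (Nx2 pixel coordinates, N boolean mask of points in front
    of the camera).
    """
    n = points_world.shape[0]
    homog = np.hstack([points_world, np.ones((n, 1))])   # Nx4 homogeneous
    cam = (pose_w2c @ homog.T).T[:, :3]                  # Nx3 camera space
    in_front = cam[:, 2] > 1e-6                          # keep z > 0 only
    pix = (K @ cam.T).T                                  # Nx3 projective
    pix = pix[:, :2] / np.clip(pix[:, 2:3], 1e-6, None)  # perspective divide
    return pix, in_front

def propagate_2d_label(points_world, labels_3d, pose_w2c, K, mask_2d, label_id):
    """Assign label_id to every 3D point whose projection falls inside a
    user-drawn 2D mask (H x W boolean array) for this frame."""
    h, w = mask_2d.shape
    pix, in_front = project_points(points_world, pose_w2c, K)
    u = np.round(pix[:, 0]).astype(int)                  # pixel column
    v = np.round(pix[:, 1]).astype(int)                  # pixel row
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    hit = np.zeros(len(points_world), dtype=bool)
    hit[valid] = mask_2d[v[valid], u[valid]]             # inside the mask?
    labels_3d[hit] = label_id
    return labels_3d
```

A production tool would additionally depth-test each projection against the frame's depth map so that occluded points are not mislabeled, and would aggregate label votes across many frames rather than trusting a single view.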

