

BoSCC: Bag of Spatial Context Correlations for Spatially Enhanced 3D Shape Representation.

Publication information

IEEE Trans Image Process. 2017 Aug;26(8):3707-3720. doi: 10.1109/TIP.2017.2704426. Epub 2017 May 16.

Abstract

Highly discriminative 3D shape representations can be formed by encoding the spatial relationship among virtual words into the Bag of Words (BoW) method. To achieve this challenging task, several unresolved issues in the encoding procedure must be overcome for 3D shapes, including: 1) arbitrary mesh resolution; 2) irregular vertex topology; 3) orientation ambiguity on the 3D surface; and 4) invariance to rigid and non-rigid shape transformations. In this paper, a novel spatially enhanced 3D shape representation called the bag of spatial context correlations (BoSCC) is proposed to address all these issues. Adopting a novel local perspective, BoSCC describes a 3D shape by an occurrence frequency histogram of spatial context correlation patterns, which makes BoSCC more compact and discriminative than previous global-perspective-based methods. Specifically, the spatial context correlation is proposed to simultaneously encode the geometric and spatial information of a 3D local region through the correlation among spatial contexts of vertices in that region, which effectively resolves the aforementioned issues. The spatial context of each vertex is modeled by Markov chains in a multi-scale manner, which thoroughly captures the spatial relationship through the intra-virtual-word and inter-virtual-word transition probabilities. The high discriminability and compactness of BoSCC are effective for classification and retrieval, especially in scenarios with limited samples and in partial shape retrieval. Experimental results show that BoSCC outperforms state-of-the-art spatially enhanced BoW methods in three common applications: global shape retrieval, shape classification, and partial shape retrieval.
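For readers who want a concrete picture of the pipeline sketched in the abstract, the following Python snippet illustrates the general idea under assumed inputs: per-vertex local descriptors and precomputed neighbor rings around each vertex. It quantizes descriptors into virtual words, approximates each vertex's spatial context as word-to-word transition probabilities across successive rings, and pools the contexts into an occurrence-frequency histogram. This is not the authors' implementation; every function name, parameter, and data layout below is a hypothetical simplification of the approach the abstract describes.

```python
# Hypothetical sketch of a BoSCC-style pipeline (not the paper's code).
import numpy as np
from sklearn.cluster import KMeans


def assign_virtual_words(descriptors, n_words=32, seed=0):
    """Quantize per-vertex descriptors (V x D) into virtual-word labels (V,)."""
    km = KMeans(n_clusters=n_words, n_init=10, random_state=seed)
    return km.fit_predict(descriptors)


def spatial_context(word_labels, neighbor_rings, n_words):
    """Crude stand-in for the paper's Markov-chain spatial context.

    word_labels: (V,) virtual-word index per vertex.
    neighbor_rings: list of vertex-index arrays, one per scale (ring) around
    a vertex. Returns a row-normalized transition matrix between the virtual
    words of consecutive rings.
    """
    T = np.zeros((n_words, n_words))
    for inner, outer in zip(neighbor_rings[:-1], neighbor_rings[1:]):
        for i in inner:
            for j in outer:
                T[word_labels[i], word_labels[j]] += 1.0
    row_sums = T.sum(axis=1, keepdims=True)
    return np.divide(T, row_sums, out=np.zeros_like(T), where=row_sums > 0)


def boscc_histogram(contexts, n_patterns=64, seed=0):
    """Quantize per-vertex context matrices into correlation patterns and
    return their normalized occurrence-frequency histogram (the shape code).

    In practice the pattern vocabulary would be learned once over a training
    set of shapes; fitting it per shape here keeps the sketch self-contained.
    """
    flat = np.stack([c.ravel() for c in contexts])
    km = KMeans(n_clusters=n_patterns, n_init=10, random_state=seed)
    pattern_ids = km.fit_predict(flat)
    hist = np.bincount(pattern_ids, minlength=n_patterns).astype(float)
    return hist / hist.sum()
```

In this sketch the multi-scale aspect is reduced to the successive rings passed to spatial_context; the paper's actual Markov-chain construction, scale handling, and pattern vocabulary are more elaborate.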


Similar articles

1. BoSCC: Bag of Spatial Context Correlations for Spatially Enhanced 3D Shape Representation.
   IEEE Trans Image Process. 2017 Aug;26(8):3707-3720. doi: 10.1109/TIP.2017.2704426. Epub 2017 May 16.
2. Deep Spatiality: Unsupervised Learning of Spatially-Enhanced Global and Local 3D Features by Deep Neural Network with Coupled Softmax.
   IEEE Trans Image Process. 2018 Jun;27(9):3049-3063. doi: 10.1109/TIP.2018.2816821. Epub 2018 Mar 16.
3. Unsupervised 3D Local Feature Learning by Circle Convolutional Restricted Boltzmann Machine.
   IEEE Trans Image Process. 2016 Nov;25(11):5331-5344. doi: 10.1109/TIP.2016.2605920. Epub 2016 Sep 2.
4. Unsupervised Learning of 3-D Local Features From Raw Voxels Based on a Novel Permutation Voxelization Strategy.
   IEEE Trans Cybern. 2019 Feb;49(2):481-494. doi: 10.1109/TCYB.2017.2778764. Epub 2017 Dec 28.
5. Mesh Convolutional Restricted Boltzmann Machines for Unsupervised Learning of Features With Structure Preservation on 3-D Meshes.
   IEEE Trans Neural Netw Learn Syst. 2017 Oct;28(10):2268-2281. doi: 10.1109/TNNLS.2016.2582532. Epub 2016 Jun 30.
6. Establishing point correspondence of 3D faces via sparse facial deformable model.
   IEEE Trans Image Process. 2013 Nov;22(11):4170-81. doi: 10.1109/TIP.2013.2271115. Epub 2013 Jun 26.
7. Multi-Scale Representation Learning on Hypergraph for 3D Shape Retrieval and Recognition.
   IEEE Trans Image Process. 2021;30:5327-5338. doi: 10.1109/TIP.2021.3082765. Epub 2021 Jun 2.
8. Pose-oblivious shape signature.
   IEEE Trans Vis Comput Graph. 2007 Mar-Apr;13(2):261-71. doi: 10.1109/TVCG.2007.45.
10. Mining compact bag-of-patterns for low bit rate mobile visual search.
    IEEE Trans Image Process. 2014 Jul;23(7):3099-113. doi: 10.1109/TIP.2014.2324291.

Cited by

1. A Transformer-Based Capsule Network for 3D Part-Whole Relationship Learning.
   Entropy (Basel). 2022 May 11;24(5):678. doi: 10.3390/e24050678.
