Bai Jing, Wang Mengjie, Kong Dexin
School of Computer Science and Engineering, North Minzu University, Yinchuan 750021, China.
Ningxia Province Key Laboratory of Intelligent Information and Data Processing, Yinchuan 750021, China.
Entropy (Basel). 2019 Apr 4;21(4):369. doi: 10.3390/e21040369.
Sketch-based 3D model retrieval has become an important research topic in many applications, such as computer graphics and computer-aided design. Although sketches and 3D models exhibit large interdomain visual perception discrepancies, and sketches of the same object show remarkable intradomain visual perception diversity, the 3D models and sketches of the same class share common semantic content. Motivated by these observations, we propose a novel approach to sketch-based 3D model retrieval that constructs a deep common semantic space embedding using a triplet network. First, a common data space is constructed by representing every 3D model as a group of views. Second, a common modality space is generated by translating views into sketches according to a cross-entropy evaluation. Third, a common semantic space embedding for the two domains is learned with a triplet network. Finally, based on the learned features of sketches and 3D models, four distance metrics between sketches and 3D models are designed and used to produce the retrieval results. Experimental results on the Shape Retrieval Contest (SHREC) 2013 and SHREC 2014 datasets demonstrate the superiority of the proposed method over state-of-the-art methods.
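The abstract does not spell out the loss formulation or the four distance metrics, so the following is a minimal illustrative sketch only: it uses the standard margin-based triplet loss, and a sketch-to-model distance that aggregates distances over a model's view embeddings (the `min`/`mean` aggregations and the `margin=0.2` value are assumptions, not the paper's specification).

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard margin-based triplet loss:
    max(||a - p||^2 - ||a - n||^2 + margin, 0)."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)

def sketch_to_model_distance(sketch_feat, view_feats, reduce="min"):
    """Distance from one sketch embedding to a 3D model represented
    as a set of view embeddings. 'min' and 'mean' are two plausible
    (hypothetical) aggregation choices."""
    dists = np.linalg.norm(view_feats - sketch_feat, axis=1)
    return dists.min() if reduce == "min" else dists.mean()

# Toy example: a sketch should be closer to views of its own class
# than to views of a different class.
sketch = np.array([0.0, 0.0])
views_same = np.array([[0.1, 0.0], [0.0, 0.2]])   # same-class views
views_other = np.array([[1.0, 1.0], [2.0, 0.0]])  # other-class views
print(sketch_to_model_distance(sketch, views_same) <
      sketch_to_model_distance(sketch, views_other))  # True
```

In a retrieval setting, models would be ranked by this aggregated distance to the query sketch, with the triplet loss having been used during training to make same-class sketch/view embeddings close and different-class embeddings far apart.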