Cross-Modal 3D Shape Retrieval via Heterogeneous Dynamic Graph Representation.

Authors

Dai Yue, Feng Yifan, Ma Nan, Zhao Xibin, Gao Yue

Publication

IEEE Trans Pattern Anal Mach Intell. 2025 Apr;47(4):2370-2387. doi: 10.1109/TPAMI.2024.3524440. Epub 2025 Mar 6.

Abstract

Cross-modal 3D shape retrieval is a crucial and widely applied task in 3D vision. Its goal is to construct retrieval representations capable of measuring the similarity between instances of different 3D modalities. Existing methods, however, are limited by the performance bottlenecks of single-modal representation extractors and by the modality gap across 3D modalities. To tackle these issues, we propose a Heterogeneous Dynamic Graph Representation (HDGR) network, which incorporates context-dependent dynamic relations within a heterogeneous framework. By capturing correlations among diverse 3D objects, HDGR overcomes the ambiguity of representations derived solely from individual instances. Within the context of each varying mini-batch, dynamic graphs are constructed to capture proximal intra-modal relations, and dynamic bipartite graphs represent implicit cross-modal relations, addressing the two challenges above. Message passing and aggregation are then performed by Dynamic Graph Convolution (DGConv) and Dynamic Bipartite Graph Convolution (DBConv), enhancing features through heterogeneous dynamic relation learning. Finally, intra-modal, cross-modal, and self-transformed features are redistributed and integrated into a heterogeneous dynamic representation for cross-modal 3D shape retrieval. HDGR thus establishes a stable, context-enhanced, structure-aware 3D shape representation by capturing heterogeneous inter-object relationships and adapting to varying contextual dynamics. Extensive experiments on the ModelNet10, ModelNet40, and real-world ABO datasets demonstrate state-of-the-art performance of HDGR in both cross-modal and intra-modal retrieval. Moreover, under the supervision of robust loss functions, HDGR achieves strong cross-modal retrieval under label noise on the 3D MNIST dataset. These results highlight the effectiveness and efficiency of HDGR for cross-modal 3D shape retrieval.
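To make the described pipeline concrete, below is a minimal PyTorch sketch of per-mini-batch dynamic graph construction, heterogeneous convolution, and feature fusion in the spirit of the abstract. This is an illustration under stated assumptions, not the authors' implementation: the cosine-similarity kNN construction, mean aggregation, and all class and parameter names (`GraphConv`, `HDGRBlock`, `k`, `dim`) are assumptions; only the operator names DGConv and DBConv come from the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def knn_graph(queries, keys, k):
    """Indices of the k most similar keys (cosine similarity) per query row."""
    sim = F.normalize(queries, dim=1) @ F.normalize(keys, dim=1).T  # (N_q, N_k)
    return sim.topk(k, dim=1).indices  # (N_q, k)


class GraphConv(nn.Module):
    """Mean-aggregation convolution over a (possibly bipartite) kNN graph.

    Stands in for both DGConv (keys == queries, intra-modal) and DBConv
    (keys from the other modality, cross-modal); the real HDGR operators
    are not specified by the abstract.
    """

    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, queries, keys, k):
        # Edges are rebuilt from the current features of each mini-batch,
        # which is what makes the graph "dynamic". Note that in the
        # intra-modal case a query is its own nearest neighbor; a real
        # implementation might exclude self-loops.
        idx = knn_graph(queries, keys, k)   # (N, k)
        neighbors = keys[idx].mean(dim=1)   # aggregate neighbor features
        return F.relu(self.proj(neighbors))


class HDGRBlock(nn.Module):
    """Sketch of one heterogeneous dynamic-graph block for two modalities."""

    def __init__(self, dim, k=8):
        super().__init__()
        self.k = k
        self.intra = GraphConv(dim)        # DGConv stand-in
        self.cross = GraphConv(dim)        # DBConv stand-in
        self.self_proj = nn.Linear(dim, dim)
        self.fuse = nn.Linear(3 * dim, dim)

    def forward(self, x_a, x_b):
        # x_a, x_b: (N, dim) single-modal features of one mini-batch,
        # e.g. from a point-cloud backbone and a multi-view backbone.
        intra_a = self.intra(x_a, x_a, self.k)   # proximal intra-modal relations
        cross_a = self.cross(x_a, x_b, self.k)   # implicit cross-modal relations
        self_a = F.relu(self.self_proj(x_a))     # self-transformed features
        # Redistribute and integrate the three feature streams.
        return self.fuse(torch.cat([intra_a, cross_a, self_a], dim=1))


if __name__ == "__main__":
    block = HDGRBlock(dim=256)
    feats_pc, feats_mv = torch.randn(32, 256), torch.randn(32, 256)
    print(block(feats_pc, feats_mv).shape)  # torch.Size([32, 256])
```

Because the neighbor indices are recomputed from the features of every mini-batch, the edges follow the evolving feature space rather than a fixed precomputed structure, matching the context-dependent dynamic relations the abstract emphasizes.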

