Key Laboratory of Intelligent Information Processing and Control, Chongqing Municipal Institutions of Higher Education, Chongqing Three Gorges University, Chongqing 40044, China.
Department of Electrical and Computer Engineering, University of the District of Columbia, Washington, DC 20008, USA.
Comput Intell Neurosci. 2021 Jul 2;2021:9968716. doi: 10.1155/2021/9968716. eCollection 2021.
Recently, benefiting from the storage and retrieval efficiency of hashing and the powerful discriminative feature extraction capability of deep neural networks, deep cross-modal hashing retrieval has attracted increasing attention. To preserve the semantic similarities of cross-modal instances during the hash mapping procedure, most existing deep cross-modal hashing methods learn deep hashing networks with a pairwise loss or a triplet loss. However, these losses may not fully explore the similarity relations across modalities. To address this problem, in this paper we introduce a quadruplet loss into deep cross-modal hashing and propose a quadruplet-based deep cross-modal hashing (termed QDCMH) method. Extensive experiments on two benchmark cross-modal retrieval datasets show that our proposed method achieves state-of-the-art performance and demonstrate the effectiveness of the quadruplet loss in cross-modal hashing.
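For readers unfamiliar with the quadruplet formulation, the sketch below shows a standard quadruplet loss (anchor, positive, and two negatives) applied to real-valued cross-modal features, assuming the anchor comes from the image modality and the positive/negatives from the text modality. The margins, distance measure, and sampling strategy are illustrative assumptions and may differ from the exact loss used in QDCMH.

```python
import torch
import torch.nn.functional as F

def quadruplet_loss(anchor, positive, negative1, negative2,
                    margin1=1.0, margin2=0.5):
    """Illustrative quadruplet loss on cross-modal features.

    anchor    : image-modality features of an instance
    positive  : text-modality features semantically similar to the anchor
    negative1 : text-modality features dissimilar to the anchor
    negative2 : features of a second instance dissimilar to both the
                anchor and negative1

    Note: this is a generic sketch, not the exact QDCMH objective.
    """
    d_ap = F.pairwise_distance(anchor, positive)      # similar cross-modal pair
    d_an = F.pairwise_distance(anchor, negative1)     # dissimilar cross-modal pair
    d_nn = F.pairwise_distance(negative1, negative2)  # two unrelated negatives

    # First term: the usual triplet constraint (positive closer than negative).
    # Second term: pushes the positive pair distance below the distance
    # between two unrelated negatives, tightening the similarity structure.
    loss = (F.relu(d_ap - d_an + margin1) +
            F.relu(d_ap - d_nn + margin2))
    return loss.mean()
```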