Xie De, Deng Cheng, Li Chao, Liu Xianglong, Tao Dacheng
IEEE Trans Image Process. 2020 Jan 9. doi: 10.1109/TIP.2020.2963957.
Owing to the advantages of low storage cost and high query efficiency, cross-modal hashing has received increasing attention recently. Because they fail to bridge the inherent gap between modalities, most existing cross-modal hashing methods have limited capability to explore the semantic consistency information between data of different modalities, leading to unsatisfactory search performance. To address this problem, we propose a novel deep hashing method named Multi-Task Consistency-Preserving Adversarial Hashing (CPAH) to fully explore the semantic consistency and correlation between different modalities for efficient cross-modal retrieval. First, we design a consistency refined module (CR) to divide the representation of each modality into two irrelevant parts, i.e., a modality-common and a modality-private representation. Then, a multi-task adversarial learning module (MA) is presented, which makes the modality-common representations of different modalities close to each other in both feature distribution and semantic consistency. Finally, compact and powerful hash codes are generated from the modality-common representation. Comprehensive evaluations conducted on three representative cross-modal benchmark datasets demonstrate that our method is superior to state-of-the-art cross-modal hashing methods.
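The pipeline described above (split each modality's feature into common and private parts, then hash only the common part) can be sketched in a toy, numpy-only form. This is not the authors' implementation: the linear projections `w_common`, `w_private`, and `w_hash` stand in for the learned CR and hashing layers, and `np.sign` replaces the learned binarization; the adversarial alignment (MA) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the CR module: two projections split a modality
# feature into a modality-common and a modality-private representation.
def consistency_refine(feat, w_common, w_private):
    common = feat @ w_common    # modality-common part (used for hashing)
    private = feat @ w_private  # modality-private part (discarded at retrieval)
    return common, private

def hash_codes(common, w_hash):
    # Binary codes are generated from the modality-common representation
    # only; sign() is a stand-in for the learned hashing layer.
    return np.sign(common @ w_hash)

d_feat, d_repr, n_bits = 128, 64, 32
img_feat = rng.standard_normal((4, d_feat))  # e.g. 4 image feature vectors
w_c = rng.standard_normal((d_feat, d_repr))
w_p = rng.standard_normal((d_feat, d_repr))
w_h = rng.standard_normal((d_repr, n_bits))

common, private = consistency_refine(img_feat, w_c, w_p)
codes = hash_codes(common, w_h)
print(codes.shape)  # 32-bit codes with entries in {-1, +1}
```

In the full method, a second branch with its own CR module would produce text-side common representations, and the MA module would train the two common branches adversarially so their distributions and semantics match before hashing.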