Wang Zheng, Xu Xing, Wei Jiwei, Xie Ning, Yang Yang, Shen Heng Tao
IEEE Trans Image Process. 2024;33:2226-2237. doi: 10.1109/TIP.2024.3374111. Epub 2024 Mar 25.
Cross-modal retrieval (e.g., querying with a given image to obtain a semantically similar sentence, and vice versa) is an important but challenging task, as a heterogeneity gap and inconsistent distributions exist between different modalities. The dominant approaches attempt to bridge this heterogeneity by capturing common representations of the heterogeneous data in a constructed subspace that reflects semantic closeness. However, these approaches give insufficient consideration to the fact that the learned latent representations remain heavily entangled with semantic-unrelated features, which further compounds the challenges of cross-modal retrieval. To alleviate this difficulty, this work assumes that the data are jointly characterized by two independent factors: semantic-shared and semantic-unrelated representations. The former captures the consistent semantics shared across modalities, while the latter reflects modality-specific characteristics unrelated to semantics, such as background, illumination, and other low-level information. This paper therefore aims to disentangle the shared semantics from the entangled features, so that a purer semantic representation can promote the closeness of paired data. Specifically, this paper designs a novel Semantics Disentangling approach for Cross-Modal Retrieval (termed SDCMR) that explicitly decouples the two kinds of features based on a variational auto-encoder. Reconstruction is then performed by exchanging the shared semantics across modalities to enforce the learning of semantic consistency. Moreover, a dual adversarial mechanism is designed to disentangle the two independent factors via a pushing-and-pulling strategy. Comprehensive experiments on four widely used datasets demonstrate the effectiveness and superiority of the proposed SDCMR method, which sets a new performance bar compared against 15 state-of-the-art methods.
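To make the pipeline described in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' released implementation) of the core idea: each modality is encoded by a VAE into a semantic-shared part and a semantic-unrelated part, and reconstruction is performed after swapping the shared parts between paired samples. All class names, feature dimensions, and the toy data are placeholder assumptions; the dual adversarial mechanism and KL regularization terms are omitted for brevity.

```python
# Hypothetical sketch of the semantics-disentangling idea from the abstract.
import torch
import torch.nn as nn

class ModalityVAE(nn.Module):
    """Encodes one modality into shared (s) and semantic-unrelated (u) latents."""
    def __init__(self, in_dim, shared_dim=64, unrelated_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        # Separate heads for the two latent distributions (mean and log-variance).
        self.mu_s, self.logvar_s = nn.Linear(256, shared_dim), nn.Linear(256, shared_dim)
        self.mu_u, self.logvar_u = nn.Linear(256, unrelated_dim), nn.Linear(256, unrelated_dim)
        self.dec = nn.Sequential(nn.Linear(shared_dim + unrelated_dim, 256),
                                 nn.ReLU(), nn.Linear(256, in_dim))

    @staticmethod
    def reparameterize(mu, logvar):
        # Standard VAE reparameterization trick.
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def encode(self, x):
        h = self.enc(x)
        s = self.reparameterize(self.mu_s(h), self.logvar_s(h))
        u = self.reparameterize(self.mu_u(h), self.logvar_u(h))
        return s, u

    def decode(self, s, u):
        return self.dec(torch.cat([s, u], dim=-1))

# Toy paired batch: image and text features (dimensions are placeholders).
img_vae, txt_vae = ModalityVAE(in_dim=512), ModalityVAE(in_dim=300)
img, txt = torch.randn(8, 512), torch.randn(8, 300)

s_img, u_img = img_vae.encode(img)
s_txt, u_txt = txt_vae.encode(txt)

# Cross-reconstruction: rebuild each modality from the OTHER modality's shared
# semantics plus its own unrelated factor, so only genuinely shared semantics
# can drive a successful reconstruction.
recon_img = img_vae.decode(s_txt, u_img)
recon_txt = txt_vae.decode(s_img, u_txt)
recon_loss = (nn.functional.mse_loss(recon_img, img)
              + nn.functional.mse_loss(recon_txt, txt))
print(recon_loss.item())
```

In a full training loop, this cross-reconstruction loss would be combined with the VAE's KL terms and the adversarial pushing-and-pulling objectives mentioned in the abstract, but those components are not specified here.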