Bhatt Nikita, Ganatra Amit
U & P U. Patel Department of Computer Engineering, Chandubhai S. Patel Institute of Technology, Charotar University of Science and Technology (CHARUSAT), Changa, India.
Devang Patel Institute of Advance Technology and Research, Charotar University of Science and Technology (CHARUSAT), Changa, India.
PeerJ Comput Sci. 2021 Apr 27;7:e491. doi: 10.7717/peerj-cs.491. eCollection 2021.
Cross-modal retrieval (CMR) has attracted much attention in the research community because it enables flexible and comprehensive retrieval across modalities. The core challenge in CMR is the heterogeneity gap, which arises from the different statistical properties of multi-modal data. The most common way to bridge the heterogeneity gap is representation learning, which maps the modalities into a common sub-space. In this work, we propose a framework called "Improvement of Deep Cross-Modal Retrieval (IDCMR)", which generates real-valued representations. IDCMR preserves both intra-modal and inter-modal similarity. The intra-modal similarity is preserved by selecting an appropriate training model for the text and image modalities. The inter-modal similarity is preserved by reducing a modality-invariance loss. Mean average precision (mAP) is used as the performance measure for the CMR system. Extensive experiments show that IDCMR outperforms state-of-the-art methods by margins of 4% and 2% in mAP on the text-to-image and image-to-text retrieval tasks on the MSCOCO and Xmedia datasets, respectively.
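As a point of reference, the sketch below illustrates how mAP is commonly computed for cross-modal retrieval: queries from one modality are ranked against a gallery from the other modality inside the learned common sub-space, and average precision is accumulated over queries. This is a generic, minimal illustration, not the authors' evaluation code; the function names, the use of cosine similarity, and class-label relevance are assumptions.

```python
import numpy as np

def average_precision(ranked_relevance):
    """Average precision for one query, given a binary relevance vector
    ordered by decreasing retrieval score."""
    relevant = np.flatnonzero(ranked_relevance)          # ranks (0-based) of relevant items
    if relevant.size == 0:
        return 0.0
    # precision at the rank of each relevant item: k-th relevant item at rank r -> k / (r + 1)
    precisions = np.arange(1, relevant.size + 1) / (relevant + 1)
    return precisions.mean()

def mean_average_precision(query_feats, gallery_feats, query_labels, gallery_labels):
    """mAP for cross-modal retrieval: query features come from one modality,
    gallery features from the other, both already projected into the common sub-space."""
    # cosine similarity between every query and every gallery item
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = q @ g.T
    ap_scores = []
    for i in range(sims.shape[0]):
        order = np.argsort(-sims[i])                                   # rank gallery by similarity
        relevance = (gallery_labels[order] == query_labels[i]).astype(int)
        ap_scores.append(average_precision(relevance))
    return float(np.mean(ap_scores))
```

Under this assumed setup, text-to-image retrieval would pass text embeddings as `query_feats` and image embeddings as `gallery_feats`, and image-to-text retrieval would swap the two.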