

Improvement of deep cross-modal retrieval by generating real-valued representation.

Authors

Bhatt Nikita, Ganatra Amit

Affiliations

U & P U. Patel Department of Computer Engineering, Chandubhai S. Patel Institute of Technology, Charotar University of Science and Technology (CHARUSAT), Changa, India.

Devang Patel Institute of Advance Technology and Research, Charotar University of Science and Technology (CHARUSAT), Changa, India.

Publication

PeerJ Comput Sci. 2021 Apr 27;7:e491. doi: 10.7717/peerj-cs.491. eCollection 2021.

DOI: 10.7717/peerj-cs.491
PMID: 33987458
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8093956/
Abstract

Cross-modal retrieval (CMR) has attracted much attention in the research community because it enables flexible and comprehensive retrieval. The core challenge in CMR is the heterogeneity gap, which arises from the different statistical properties of multi-modal data. The most common solution for bridging the heterogeneity gap is representation learning, which generates a common subspace. In this work, we propose a framework called "Improvement of Deep Cross-Modal Retrieval" (IDCMR), which generates real-valued representations. IDCMR preserves both intra-modal and inter-modal similarity: intra-modal similarity is preserved by selecting an appropriate training model for the text and image modalities, and inter-modal similarity is preserved by reducing a modality-invariance loss. Mean average precision (mAP) is used as the performance measure of the CMR system. Extensive experiments show that IDCMR outperforms state-of-the-art methods by margins of 4% and 2% in mAP on the text-to-image and image-to-text retrieval tasks on the MSCOCO and XMedia datasets, respectively.
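The abstract uses mean average precision (mAP) as the performance measure. For readers unfamiliar with the metric, here is a minimal, self-contained sketch (not the paper's code) of how mAP is computed for a retrieval task, given each query's ranked list of binary relevance labels:

```python
# Sketch of mean average precision (mAP) for retrieval evaluation.
# For each query, AP averages the precision@k values at the ranks
# where a relevant item appears; mAP is the mean of AP over queries.

def average_precision(relevances):
    """AP for one query; `relevances` is the ranked 0/1 relevance list."""
    hits = 0
    precision_sum = 0.0
    for rank, rel in enumerate(relevances, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank  # precision at this rank
    return precision_sum / hits if hits else 0.0

def mean_average_precision(all_relevances):
    """mAP: mean of per-query AP values."""
    return sum(average_precision(r) for r in all_relevances) / len(all_relevances)

# Toy example with two queries:
queries = [
    [1, 0, 1, 0],  # relevant items at ranks 1 and 3 -> AP = (1/1 + 2/3) / 2
    [0, 1, 1, 0],  # relevant items at ranks 2 and 3 -> AP = (1/2 + 2/3) / 2
]
print(round(mean_average_precision(queries), 4))  # -> 0.7083
```

In a CMR system the ranked lists come from sorting the gallery of the other modality by similarity to the query's embedding; the AP formula itself is independent of how the ranking was produced.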

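To make the common-subspace idea concrete, the following toy sketch (hypothetical, not the IDCMR implementation; the encoder weights and dimensions are invented for illustration) maps image and text features into a shared space with linear projections and scores a paired sample with a squared-distance modality-invariance loss:

```python
# Hypothetical illustration of a common subspace for two modalities:
# each modality gets its own projection into the shared space, and a
# modality-invariance loss penalizes distance between paired embeddings.

def project(features, weights):
    """Linear map into the common subspace (matrix-vector product)."""
    return [sum(w * x for w, x in zip(row, features)) for row in weights]

def invariance_loss(img_emb, txt_emb):
    """Squared Euclidean distance between a paired image/text embedding."""
    return sum((a - b) ** 2 for a, b in zip(img_emb, txt_emb))

# Toy example: 3-d image features and 2-d text features -> 2-d common space
W_img = [[0.5, 0.0, 0.5], [0.0, 1.0, 0.0]]
W_txt = [[1.0, 0.0], [0.0, 1.0]]
img = project([1.0, 2.0, 1.0], W_img)  # image embedding in the shared space
txt = project([1.0, 2.0], W_txt)       # text embedding in the shared space
print(invariance_loss(img, txt))       # 0.0 for a perfectly aligned pair
```

Training would adjust the projection weights to drive this loss down for true pairs while preserving each modality's internal similarity structure, which is the intra-modal/inter-modal trade-off the abstract describes.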

Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f317/8093956/78a3d7034a4f/peerj-cs-07-491-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f317/8093956/228d71d2fc92/peerj-cs-07-491-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f317/8093956/6745c8dff616/peerj-cs-07-491-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f317/8093956/7913e510ce0d/peerj-cs-07-491-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f317/8093956/fdaf78090863/peerj-cs-07-491-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f317/8093956/f5435b21d8e9/peerj-cs-07-491-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f317/8093956/796d5f4d3daf/peerj-cs-07-491-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f317/8093956/c167bf8d8882/peerj-cs-07-491-g008.jpg

Similar articles

1. Improvement of deep cross-modal retrieval by generating real-valued representation. PeerJ Comput Sci. 2021 Apr 27;7:e491. doi: 10.7717/peerj-cs.491. eCollection 2021.
2. Bridging multimedia heterogeneity gap via Graph Representation Learning for cross-modal retrieval. Neural Netw. 2021 Feb;134:143-162. doi: 10.1016/j.neunet.2020.11.011. Epub 2020 Nov 28.
3. Fine-Grained Cross-Modal Semantic Consistency in Natural Conservation Image Data from a Multi-Task Perspective. Sensors (Basel). 2024 May 14;24(10):3130. doi: 10.3390/s24103130.
4. Deep Relation Embedding for Cross-Modal Retrieval. IEEE Trans Image Process. 2021;30:617-627. doi: 10.1109/TIP.2020.3038354. Epub 2020 Dec 1.
5. Integrating Multi-Label Contrastive Learning With Dual Adversarial Graph Neural Networks for Cross-Modal Retrieval. IEEE Trans Pattern Anal Mach Intell. 2023 Apr;45(4):4794-4811. doi: 10.1109/TPAMI.2022.3188547. Epub 2023 Mar 7.
6. Disambiguity and Alignment: An Effective Multi-Modal Alignment Method for Cross-Modal Recipe Retrieval. Foods. 2024 May 23;13(11):1628. doi: 10.3390/foods13111628.
7. Modality-specific Cross-modal Similarity Measurement with Recurrent Attention Network. IEEE Trans Image Process. 2018 Jul 2. doi: 10.1109/TIP.2018.2852503.
8. Hybrid DAER Based Cross-Modal Retrieval Exploiting Deep Representation Learning. Entropy (Basel). 2023 Aug 16;25(8):1216. doi: 10.3390/e25081216.
9. [Cross-modal retrieval method for thyroid ultrasound image and text based on generative adversarial network]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi. 2020 Aug 25;37(4):641-651. doi: 10.7507/1001-5515.201812042.
10. Deep Semantic-Preserving Ordinal Hashing for Cross-Modal Similarity Search. IEEE Trans Neural Netw Learn Syst. 2019 May;30(5):1429-1440. doi: 10.1109/TNNLS.2018.2869601. Epub 2018 Oct 1.
