

SMAN: Stacked Multimodal Attention Network for Cross-Modal Image-Text Retrieval.

Publication

IEEE Trans Cybern. 2022 Feb;52(2):1086-1097. doi: 10.1109/TCYB.2020.2985716. Epub 2022 Feb 16.

DOI: 10.1109/TCYB.2020.2985716
PMID: 32386178
Abstract

This article focuses on tackling the task of the cross-modal image-text retrieval which has been an interdisciplinary topic in both computer vision and natural language processing communities. Existing global representation alignment-based methods fail to pinpoint the semantically meaningful portion of images and texts, while the local representation alignment schemes suffer from the huge computational burden for aggregating the similarity of visual fragments and textual words exhaustively. In this article, we propose a stacked multimodal attention network (SMAN) that makes use of the stacked multimodal attention mechanism to exploit the fine-grained interdependencies between image and text, thereby mapping the aggregation of attentive fragments into a common space for measuring cross-modal similarity. Specifically, we sequentially employ intramodal information and multimodal information as guidance to perform multiple-step attention reasoning so that the fine-grained correlation between image and text can be modeled. As a consequence, we are capable of discovering the semantically meaningful visual regions or words in a sentence which contributes to measuring the cross-modal similarity in a more precise manner. Moreover, we present a novel bidirectional ranking loss that enforces the distance among pairwise multimodal instances to be closer. Doing so allows us to make full use of pairwise supervised information to preserve the manifold structure of heterogeneous pairwise data. Extensive experiments on two benchmark datasets demonstrate that our SMAN consistently yields competitive performance compared to state-of-the-art methods.
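The bidirectional ranking loss described in the abstract pulls matched image-text pairs together while pushing mismatched pairs apart in the common embedding space. As an illustration only (not the authors' exact formulation), a standard hinge-based bidirectional ranking loss over cosine similarities can be sketched as follows; the `margin` value, the cosine-similarity choice, and the sum-over-all-negatives reduction are assumptions:

```python
import numpy as np

def bidirectional_ranking_loss(img_emb, txt_emb, margin=0.2):
    """Hinge-based bidirectional ranking loss over a batch of image-text
    pairs; row i of img_emb is matched with row i of txt_emb."""
    # Cosine similarity matrix: S[i, j] = sim(image_i, text_j)
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    S = img @ txt.T
    pos = np.diag(S)                        # matched-pair similarities
    # Image-to-text direction: every non-matching text is a negative
    cost_i2t = np.maximum(0.0, margin - pos[:, None] + S)
    # Text-to-image direction: every non-matching image is a negative
    cost_t2i = np.maximum(0.0, margin - pos[None, :] + S)
    n = S.shape[0]
    mask = 1.0 - np.eye(n)                  # zero out the positive pairs
    return float(((cost_i2t + cost_t2i) * mask).sum()) / n
```

With perfectly separated embeddings (e.g. orthogonal matched pairs) every hinge term is zero; when embeddings collapse to a single point, every negative violates the margin and the loss is positive. In practice such a loss is minimized by gradient-based training of the two embedding networks, and hard-negative variants keep only the largest off-diagonal cost per row instead of summing.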


Similar Articles

1. SMAN: Stacked Multimodal Attention Network for Cross-Modal Image-Text Retrieval.
IEEE Trans Cybern. 2022 Feb;52(2):1086-1097. doi: 10.1109/TCYB.2020.2985716. Epub 2022 Feb 16.
2. Cross-Modal Attention With Semantic Consistence for Image-Text Matching.
IEEE Trans Neural Netw Learn Syst. 2020 Dec;31(12):5412-5425. doi: 10.1109/TNNLS.2020.2967597. Epub 2020 Nov 30.
3. Hybrid Attention Network for Language-Based Person Search.
Sensors (Basel). 2020 Sep 15;20(18):5279. doi: 10.3390/s20185279.
4. Learning Relationship-Enhanced Semantic Graph for Fine-Grained Image-Text Matching.
IEEE Trans Cybern. 2024 Feb;54(2):948-961. doi: 10.1109/TCYB.2022.3179020. Epub 2024 Jan 17.
5. MAVA: Multi-level Adaptive Visual-textual Alignment by Cross-media Bi-attention Mechanism.
IEEE Trans Image Process. 2019 Nov 22. doi: 10.1109/TIP.2019.2952085.
6. Latent Space Semantic Supervision Based on Knowledge Distillation for Cross-Modal Retrieval.
IEEE Trans Image Process. 2022;31:7154-7164. doi: 10.1109/TIP.2022.3220051. Epub 2022 Nov 16.
7. Efficient Token-Guided Image-Text Retrieval With Consistent Multimodal Contrastive Training.
IEEE Trans Image Process. 2023;32:3622-3633. doi: 10.1109/TIP.2023.3286710. Epub 2023 Jul 3.
8. Relation-Aggregated Cross-Graph Correlation Learning for Fine-Grained Image-Text Retrieval.
IEEE Trans Neural Netw Learn Syst. 2024 Feb;35(2):2194-2207. doi: 10.1109/TNNLS.2022.3188569. Epub 2024 Feb 5.
9. Unsupervised Visual-Textual Correlation Learning With Fine-Grained Semantic Alignment.
IEEE Trans Cybern. 2022 May;52(5):3669-3683. doi: 10.1109/TCYB.2020.3015084. Epub 2022 May 19.
10. Fine-Grained Cross-Modal Semantic Consistency in Natural Conservation Image Data from a Multi-Task Perspective.
Sensors (Basel). 2024 May 14;24(10):3130. doi: 10.3390/s24103130.

Cited By

1. uRP: An integrated research platform for one-stop analysis of medical images.
Front Radiol. 2023 Apr 18;3:1153784. doi: 10.3389/fradi.2023.1153784. eCollection 2023.
2. Auditory Attention Detection via Cross-Modal Attention.
Front Neurosci. 2021 Jul 21;15:652058. doi: 10.3389/fnins.2021.652058. eCollection 2021.