Cross-Modal Attention With Semantic Consistence for Image-Text Matching.

Author Information

Xu Xing, Wang Tan, Yang Yang, Zuo Lin, Shen Fumin, Shen Heng Tao

Publication Information

IEEE Trans Neural Netw Learn Syst. 2020 Dec;31(12):5412-5425. doi: 10.1109/TNNLS.2020.2967597. Epub 2020 Nov 30.

DOI: 10.1109/TNNLS.2020.2967597
PMID: 32071004
Abstract

The task of image-text matching refers to measuring the visual-semantic similarity between an image and a sentence. Recently, fine-grained matching methods that explore the local alignment between image regions and sentence words have shown advances in inferring the image-text correspondence by aggregating pairwise region-word similarities. However, local alignment is hard to achieve, as some important image regions may be inaccurately detected or even missing. Meanwhile, some words with high-level semantics cannot strictly correspond to a single image region. To tackle these problems, we highlight the importance of exploiting the global semantic consistency between image regions and sentence words as a complement to the local alignment. In this article, we propose a novel hybrid matching approach named Cross-modal Attention with Semantic Consistency (CASC) for image-text matching. The proposed CASC is a joint framework that performs cross-modal attention for local alignment and multilabel prediction for global semantic consistency. It directly extracts semantic labels from the available sentence corpus without additional labor cost, which further provides a global similarity constraint on the aggregated region-word similarity obtained by the local alignment. Extensive experiments on the Flickr30k and Microsoft COCO (MSCOCO) data sets demonstrate the effectiveness of the proposed CASC in preserving global semantic consistency alongside the local alignment, and further show its superior image-text matching performance compared with more than 15 state-of-the-art methods.
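The hybrid scoring idea described in the abstract can be sketched as a toy NumPy example. This is not the paper's actual network: the cross-modal attention is reduced here to a hard max over region-word cosine similarities, the multilabel consistency term is approximated by Jaccard overlap between label sets, and `alpha` and all function names are illustrative assumptions.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def local_alignment_score(regions, words):
    # For each word, attend over the image regions and keep the best-matching
    # one, then average over words -- a hard-attention simplification of
    # aggregating pairwise region-word similarities.
    sims = np.array([[cosine(r, w) for r in regions] for w in words])
    return sims.max(axis=1).mean()

def global_consistency_score(img_labels, txt_labels):
    # Agreement between multilabels predicted from the image and semantic
    # labels extracted from the sentence corpus (Jaccard overlap stand-in).
    inter = len(img_labels & txt_labels)
    union = len(img_labels | txt_labels)
    return inter / union if union else 0.0

def casc_score(regions, words, img_labels, txt_labels, alpha=0.5):
    # Hybrid matching score: local alignment constrained by the global
    # semantic-consistency term; alpha is an illustrative mixing weight.
    return ((1 - alpha) * local_alignment_score(regions, words)
            + alpha * global_consistency_score(img_labels, txt_labels))
```

The point of the sketch is the structure of the objective: a well-aligned region-word pair can still be down-weighted if the label sets of the two modalities disagree, which is how the global constraint complements the local alignment.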


Similar Articles

1
Cross-Modal Attention With Semantic Consistence for Image-Text Matching.
IEEE Trans Neural Netw Learn Syst. 2020 Dec;31(12):5412-5425. doi: 10.1109/TNNLS.2020.2967597. Epub 2020 Nov 30.
2
Learning Relationship-Enhanced Semantic Graph for Fine-Grained Image-Text Matching.
IEEE Trans Cybern. 2024 Feb;54(2):948-961. doi: 10.1109/TCYB.2022.3179020. Epub 2024 Jan 17.
3
Decoupled Cross-Modal Phrase-Attention Network for Image-Sentence Matching.
IEEE Trans Image Process. 2024;33:1326-1337. doi: 10.1109/TIP.2022.3197972. Epub 2024 Feb 13.
4
Latent Space Semantic Supervision Based on Knowledge Distillation for Cross-Modal Retrieval.
IEEE Trans Image Process. 2022;31:7154-7164. doi: 10.1109/TIP.2022.3220051. Epub 2022 Nov 16.
5
Image-Text Embedding Learning via Visual and Textual Semantic Reasoning.
IEEE Trans Pattern Anal Mach Intell. 2023 Jan;45(1):641-656. doi: 10.1109/TPAMI.2022.3148470. Epub 2022 Dec 5.
6
SMAN: Stacked Multimodal Attention Network for Cross-Modal Image-Text Retrieval.
IEEE Trans Cybern. 2022 Feb;52(2):1086-1097. doi: 10.1109/TCYB.2020.2985716. Epub 2022 Feb 16.
7
Relation-Aggregated Cross-Graph Correlation Learning for Fine-Grained Image-Text Retrieval.
IEEE Trans Neural Netw Learn Syst. 2024 Feb;35(2):2194-2207. doi: 10.1109/TNNLS.2022.3188569. Epub 2024 Feb 5.
8
Learning Aligned Image-Text Representations Using Graph Attentive Relational Network.
IEEE Trans Image Process. 2021;30:1840-1852. doi: 10.1109/TIP.2020.3048627. Epub 2021 Jan 18.
9
Unsupervised Visual-Textual Correlation Learning With Fine-Grained Semantic Alignment.
IEEE Trans Cybern. 2022 May;52(5):3669-3683. doi: 10.1109/TCYB.2020.3015084. Epub 2022 May 19.
10
Plug-and-Play Regulators for Image-Text Matching.
IEEE Trans Image Process. 2023;32:2322-2334. doi: 10.1109/TIP.2023.3266887. Epub 2023 Apr 21.

Cited By

1
Novel cross-dimensional coarse-fine-grained complementary network for image-text matching.
PeerJ Comput Sci. 2025 Mar 3;11:e2725. doi: 10.7717/peerj-cs.2725. eCollection 2025.
2
Fine-Grained Cross-Modal Semantic Consistency in Natural Conservation Image Data from a Multi-Task Perspective.
Sensors (Basel). 2024 May 14;24(10):3130. doi: 10.3390/s24103130.
3
Convolutional Neural Network-Based Cross-Media Semantic Matching and User Adaptive Satisfaction Analysis Model.
Comput Intell Neurosci. 2022 Apr 30;2022:4244675. doi: 10.1155/2022/4244675. eCollection 2022.
4
TransConver: transformer and convolution parallel network for developing automatic brain tumor segmentation in MRI images.
Quant Imaging Med Surg. 2022 Apr;12(4):2397-2415. doi: 10.21037/qims-21-919.
5
Auditory Attention Detection via Cross-Modal Attention.
Front Neurosci. 2021 Jul 21;15:652058. doi: 10.3389/fnins.2021.652058. eCollection 2021.