Image Captioning with End-to-end Attribute Detection and Subsequent Attributes Prediction.

Authors

Huang Yiqing, Chen Jiansheng, Ouyang Wanli, Wan Weitao, Xue Youze

Publication Info

IEEE Trans Image Process. 2020 Jan 30. doi: 10.1109/TIP.2020.2969330.

Abstract

Semantic attention has been shown to be effective in improving the performance of image captioning. The core of semantic attention based methods is to drive the model to attend to semantically important words, or attributes. In previous works, the attribute detector and the captioning network are usually independent, leading to insufficient use of the semantic information. Moreover, all detected attributes are attended to throughout the caption generation process, whether or not they suit the linguistic context at the current step; this can mislead the captioning model into attending to incorrect visual concepts. To solve these problems, we introduce two end-to-end trainable modules that closely couple attribute detection with image captioning and promote the effective use of attributes by predicting appropriate attributes at each time step. The multimodal attribute detector (MAD) module improves attribute detection accuracy by using not only the image features but also the word embeddings of attributes, which already exist in most captioning models. MAD models the similarity between the semantics of attributes and the image object features to facilitate accurate detection. The subsequent attribute predictor (SAP) module dynamically predicts a concise attribute subset at each time step, mitigating the diversity of image attributes. Compared to previous attribute based methods, our approach is more explainable in how the attributes affect the generated words, and it achieves a state-of-the-art single-model performance of 128.8 CIDEr-D on the MSCOCO dataset. Extensive experiments on the MSCOCO dataset show that our proposal improves performance in both image captioning and attribute detection simultaneously. The code is available at: https://github.com/RubickH/Image-Captioning-with-MAD-and-SAP.
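Since the abstract only outlines the two modules, the following minimal PyTorch sketch illustrates the two ideas: MAD scoring attributes by embedding-feature similarity, and SAP gating those scores per decoding step. All class and parameter names here (MultimodalAttributeDetector, SubsequentAttributePredictor, feat_proj, gate, and so on) are illustrative assumptions, not the authors' implementation; the actual architecture is in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalAttributeDetector(nn.Module):
    """Hypothetical sketch of MAD: scores each candidate attribute by the
    similarity between its word embedding and the image object features."""
    def __init__(self, embed_dim, feat_dim, num_attributes):
        super().__init__()
        # Reuse a word embedding table for the attribute vocabulary,
        # as the abstract notes such embeddings exist in most captioning models.
        self.attr_embedding = nn.Embedding(num_attributes, embed_dim)
        # Project object features into the attribute embedding space.
        self.feat_proj = nn.Linear(feat_dim, embed_dim)

    def forward(self, obj_feats):
        # obj_feats: (batch, num_objects, feat_dim) from an object detector.
        proj = self.feat_proj(obj_feats)                                  # (B, N, E)
        attrs = self.attr_embedding.weight                                # (A, E)
        # Cosine similarity between every object feature and every attribute.
        sim = F.normalize(proj, dim=-1) @ F.normalize(attrs, dim=-1).t() # (B, N, A)
        # Max-pool over objects: an attribute is present if any object supports it.
        scores = sim.max(dim=1).values                                    # (B, A)
        return torch.sigmoid(scores)  # multi-label attribute probabilities


class SubsequentAttributePredictor(nn.Module):
    """Hypothetical sketch of SAP: at each decoding step, re-weights the
    detected attributes using the decoder hidden state, yielding a concise
    per-step attribute subset."""
    def __init__(self, hidden_dim, num_attributes):
        super().__init__()
        self.gate = nn.Linear(hidden_dim, num_attributes)

    def forward(self, hidden_state, attr_probs):
        # hidden_state: (batch, hidden_dim) decoder state at the current step.
        # attr_probs:   (batch, num_attributes) from the MAD module.
        step_weights = torch.sigmoid(self.gate(hidden_state))             # (B, A)
        return attr_probs * step_weights  # attributes relevant to this step
```

In this reading, the per-step subset from SAP would feed the semantic attention in place of the full detected attribute set, so that only context-appropriate attributes influence the next generated word.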

