Cross Modality Bias in Visual Question Answering: A Causal View with Possible Worlds VQA.

Authors

Vosoughi Ali, Deng Shijian, Zhang Songyang, Tian Yapeng, Xu Chenliang, Luo Jiebo

Affiliations

Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY 14620.

Department of Computer Science, University of Texas at Dallas, Richardson, TX 75080.

Publication

IEEE Trans Multimedia. 2024;26:8609-8624. doi: 10.1109/TMM.2024.3380259. Epub 2024 Mar 21.

Abstract

To increase the generalization capability of VQA systems, many recent studies have tried to de-bias spurious language or vision associations that shortcut from the question or image directly to the answer. Despite these efforts, the literature fails to address the confounding effect of vision and language simultaneously: when a method reduces the bias learned from one modality, it usually increases the bias from the other. In this paper, we first model a confounding effect that causes language and vision bias simultaneously, then propose a counterfactual inference to remove the influence of this effect. A model trained with this strategy can concurrently and efficiently reduce both vision and language bias. To the best of our knowledge, this is the first work to reduce biases resulting from the confounding effects of vision and language in VQA by leveraging causal explain-away relations. We accompany our method with an explain-away strategy that improves accuracy on questions with numerical answers, which has remained an open problem for existing methods. The proposed method outperforms the state-of-the-art methods on the VQA-CP v2 dataset. Our code is available at https://github.com/ali-vosoughi/PW-VQA.
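The abstract stops short of the inference rule itself. As a rough illustration of how counterfactual debiasing of this general kind is often realized (training single-modality shortcut branches alongside the fused branch, then subtracting their counterfactual logits at test time), consider the PyTorch sketch below. Every name in it (CounterfactualVQAHead, fusion_head, q_head, v_head, alpha, beta) and the plain subtraction rule are assumptions made for illustration, not the authors' PW-VQA implementation; the linked repository contains the real one.

```python
import torch
import torch.nn as nn


class CounterfactualVQAHead(nn.Module):
    """Minimal sketch of counterfactual debiasing for VQA (illustrative only).

    Three answer classifiers are trained jointly: a fused vision+language
    branch and two single-modality branches that soak up the language-only
    and vision-only shortcuts. At inference, the single-modality
    (counterfactual) logits are subtracted from the fused logits, so only
    the part of the prediction that genuinely needs both modalities
    survives. The names and the simple subtraction rule are assumed for
    illustration; they are not the PW-VQA paper's exact formulation.
    """

    def __init__(self, fused_dim, q_dim, v_dim, num_answers, alpha=1.0, beta=1.0):
        super().__init__()
        self.fusion_head = nn.Linear(fused_dim, num_answers)  # factual branch
        self.q_head = nn.Linear(q_dim, num_answers)  # language-only shortcut
        self.v_head = nn.Linear(v_dim, num_answers)  # vision-only shortcut
        self.alpha, self.beta = alpha, beta  # hypothetical debiasing weights

    def forward(self, fused_feat, q_feat, v_feat):
        # Factual logits from the joint representation, plus counterfactual
        # logits describing what each modality alone would have answered.
        return (self.fusion_head(fused_feat),
                self.q_head(q_feat),
                self.v_head(v_feat))

    def debiased_logits(self, fused_feat, q_feat, v_feat):
        # Subtract both single-modality effects at once, rather than only
        # the language prior as earlier debiasing methods tend to do.
        z_vq, z_q, z_v = self.forward(fused_feat, q_feat, v_feat)
        return z_vq - self.alpha * z_q - self.beta * z_v


if __name__ == "__main__":
    head = CounterfactualVQAHead(fused_dim=512, q_dim=256, v_dim=256, num_answers=3000)
    fused = torch.randn(8, 512)  # batch of fused vision+language features
    q = torch.randn(8, 256)      # question-only features
    v = torch.randn(8, 256)      # image-only features
    print(head.debiased_logits(fused, q, v).shape)  # torch.Size([8, 3000])
```

In a setup like this, all three heads would typically receive the answer-classification loss during training so the shortcut branches absorb the single-modality signal, and only debiased_logits would be used at test time. Again, this is a sketch of the general counterfactual-inference pattern the abstract alludes to, under the stated assumptions.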

Similar articles

Rich Visual Knowledge-Based Augmentation Network for Visual Question Answering.
IEEE Trans Neural Netw Learn Syst. 2021 Oct;32(10):4362-4373. doi: 10.1109/TNNLS.2020.3017530. Epub 2021 Oct 5.

Advancing surgical VQA with scene graph knowledge.
Int J Comput Assist Radiol Surg. 2024 Jul;19(7):1409-1417. doi: 10.1007/s11548-024-03141-y. Epub 2024 May 23.

Reducing Vision-Answer Biases for Multiple-Choice VQA.
IEEE Trans Image Process. 2023;32:4621-4634. doi: 10.1109/TIP.2023.3302162. Epub 2023 Aug 16.

Robust visual question answering via polarity enhancement and contrast.
Neural Netw. 2024 Nov;179:106560. doi: 10.1016/j.neunet.2024.106560. Epub 2024 Jul 20.

