Vosoughi Ali, Deng Shijian, Zhang Songyang, Tian Yapeng, Xu Chenliang, Luo Jiebo
Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY 14620.
Department of Computer Science, The University of Texas at Dallas, Dallas, TX.
IEEE Trans Multimedia. 2024;26:8609-8624. doi: 10.1109/tmm.2024.3380259. Epub 2024 Mar 21.
To increase the generalization capability of VQA systems, many recent studies have tried to de-bias spurious language or vision associations that shortcut from the question or image to the answer. Despite these efforts, the literature fails to address the confounding effects of vision and language simultaneously: when such methods reduce the bias learned from one modality, they usually increase the bias from the other. In this paper, we first model a confounding effect that causes language and vision bias simultaneously, and then propose a counterfactual inference to remove the influence of this effect. A model trained with this strategy can concurrently and efficiently reduce both vision and language bias. To the best of our knowledge, this is the first work to reduce biases resulting from the confounding effects of vision and language in VQA by leveraging causal explain-away relations. We accompany our method with an explain-away strategy that improves accuracy on questions with numerical answers, which has remained an open problem for existing methods. The proposed method outperforms state-of-the-art methods on the VQA-CP v2 dataset. Our code is available at https://github.com/ali-vosoughi/PW-VQA.
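The counterfactual-debiasing idea described above can be illustrated with a minimal sketch: alongside the fused vision-language prediction, single-modality branches capture the question-only and image-only shortcuts, and their effects are subtracted at inference time. The function names, the simple subtraction rule, and the weights `alpha`/`beta` below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def debiased_logits(fused, q_only, v_only, alpha=1.0, beta=1.0):
    """Counterfactual-style debiasing sketch (hypothetical form):
    subtract the single-modality (shortcut) logits from the fused
    vision-language logits. alpha/beta are illustrative weights."""
    return fused - alpha * q_only - beta * v_only

# Toy example: the fused model prefers answer 0 mainly because the
# question-only branch does, i.e. a language shortcut.
fused = np.array([3.0, 1.0, 0.5])    # fused vision+language logits
q_only = np.array([2.5, 0.2, 0.1])   # question-only branch logits
v_only = np.array([0.1, 0.1, 0.1])   # image-only branch logits

probs = softmax(debiased_logits(fused, q_only, v_only))
# After removing the shortcut effect, the predicted answer changes
# from index 0 (shortcut-driven) to index 1.
```

In this toy case the debiased logits are [0.4, 0.7, 0.3], so the prediction flips away from the answer favored by the language-only shortcut.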