
Learning to Answer Visual Questions From Web Videos.

Author information

Yang Antoine, Miech Antoine, Sivic Josef, Laptev Ivan, Schmid Cordelia

Publication information

IEEE Trans Pattern Anal Mach Intell. 2025 May;47(5):3202-3218. doi: 10.1109/TPAMI.2022.3173208. Epub 2025 Apr 8.

Abstract

Recent methods for visual question answering rely on large-scale annotated datasets. Manual annotation of questions and answers for videos, however, is tedious, expensive and prevents scalability. In this work, we propose to avoid manual annotation and generate a large-scale training dataset for video question answering making use of automatic cross-modal supervision. We leverage a question generation transformer trained on text data and use it to generate question-answer pairs from transcribed video narrations. Given narrated videos, we then automatically generate the HowToVQA69M dataset with 69M video-question-answer triplets. To handle the open vocabulary of diverse answers in this dataset, we propose a training procedure based on a contrastive loss between a video-question multi-modal transformer and an answer transformer. We introduce the zero-shot VideoQA task and the VideoQA feature probe evaluation setting and show excellent results, in particular for rare answers. Furthermore, our method achieves competitive results on MSRVTT-QA, ActivityNet-QA, MSVD-QA and How2QA datasets. We also show that our VideoQA dataset generation approach generalizes to another source of web video and text data. We use our method to generate the WebVidVQA3M dataset from the WebVid dataset, i.e., videos with alt-text annotations, and show its benefits for training VideoQA models. Finally, for a detailed evaluation we introduce iVQA, a new VideoQA dataset with reduced language bias and high-quality manual annotations. Code, datasets and trained models are available on our project webpage (https://antoyang.github.io/just-ask.html).
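As a rough illustration of the training objective described above, the sketch below shows a contrastive loss between embeddings from a video-question multi-modal encoder and an answer encoder, with other answers in the batch acting as negatives. This is not the authors' implementation; the function names, dimensions, and temperature value are assumptions for illustration only.

```python
# Minimal sketch (assumed, not the authors' code) of a contrastive objective
# between video-question embeddings and answer embeddings.
import torch
import torch.nn.functional as F

def contrastive_vqa_loss(vq_emb: torch.Tensor, ans_emb: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
    """vq_emb: (B, D) outputs of a video-question multi-modal transformer.
    ans_emb: (B, D) outputs of an answer transformer; row i is the answer
    paired with video-question i, other rows serve as in-batch negatives."""
    vq = F.normalize(vq_emb, dim=-1)
    ans = F.normalize(ans_emb, dim=-1)
    logits = vq @ ans.t() / temperature              # (B, B) similarity matrix
    targets = torch.arange(vq.size(0), device=vq.device)
    return F.cross_entropy(logits, targets)          # pull matched pairs together

# Hypothetical usage with a batch of generated (video, question, answer) triplets:
# loss = contrastive_vqa_loss(video_question_encoder(batch), answer_encoder(batch))
```

Training against in-batch negatives in this way avoids fixing a closed answer vocabulary, which is what allows the open set of answers generated from narrations to be handled.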

