IEEE Trans Image Process. 2023;32:3136-3149. doi: 10.1109/TIP.2023.3278474. Epub 2023 Jun 2.
Benefiting from the intuitiveness and naturalness of sketch interaction, sketch-based video retrieval (SBVR) has received considerable attention in the video retrieval research area. However, most existing SBVR research still lacks the capability of accurately retrieving videos with fine-grained scene content. To address this problem, in this paper we investigate a new task that focuses on retrieving the target video using a fine-grained storyboard sketch depicting the scene layout and the visual characteristics (e.g., appearance, size, and pose) of the major foreground instances; we call this task "fine-grained scene-level SBVR". The most challenging issue in this task is how to perform scene-level cross-modal alignment between sketch and video. Our solution consists of two parts. First, we construct a scene-level sketch-video dataset called SketchVideo, in which each sketch-video pair contains a clip-level storyboard sketch and several keyframe sketches (each corresponding to a video frame). Second, we propose a novel deep learning architecture called Sketch Query Graph Convolutional Network (SQ-GCN). In SQ-GCN, we first adaptively sample video frames to improve video encoding efficiency, and then construct appearance and category graphs to jointly model the visual and semantic alignment between sketch and video. Experiments show that our fine-grained scene-level SBVR framework with the SQ-GCN architecture outperforms state-of-the-art fine-grained retrieval methods. The SketchVideo dataset and SQ-GCN code are available on the project webpage: https://iscas-mmsketch.github.io/FG-SL-SBVR/.