Miao Jiaxu, Wei Yunchao, Wang Xiaohan, Yang Yi
IEEE Trans Pattern Anal Mach Intell. 2023 Sep;45(9):11297-11308. doi: 10.1109/TPAMI.2023.3266023. Epub 2023 Aug 7.
Scene understanding through pixel-level semantic parsing is one of the main problems in computer vision. To date, image-based scene parsing methods and datasets have been well explored. However, the real world is inherently dynamic rather than static, so learning to perform video scene parsing is more practical for real-world applications. Considering that few datasets cover an extensive range of scenes and object categories with temporal pixel-level annotations, in this work we present a large-scale video scene parsing dataset, namely VSPW (Video Scene Parsing in the Wild). Specifically, VSPW contains 251,633 frames from 3,536 videos with dense pixel-wise annotations, covering 231 diverse scenes and 124 object categories. Moreover, VSPW is densely annotated at a high frame rate of 15 fps, and over 96% of its videos have high spatial resolutions ranging from 720P to 4K. To the best of our knowledge, VSPW is the first attempt to address the challenging video scene parsing task in the wild by considering diverse scenes. Based on VSPW, we further propose Temporal Attention Blending (TAB) Networks, which harness temporal context information for better pixel-level semantic understanding of videos. Extensive experiments on VSPW demonstrate the superiority of the proposed TAB over other baseline approaches. We hope the newly proposed dataset and the explorations in this work can help advance this challenging yet practical video scene parsing task in the future. Both the dataset and the code are available at www.vspwdataset.com.
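The abstract does not detail how TAB aggregates temporal context, so the following is only a minimal, hypothetical sketch of the general idea it names: blending a current frame's feature map with attention-weighted features from neighboring frames before per-pixel classification. It is written in PyTorch; all module and variable names here are illustrative assumptions, not the authors' actual TAB architecture.

```python
# Hypothetical sketch of temporal attention blending over per-frame features.
# NOT the paper's TAB implementation; see www.vspwdataset.com for the real code.
import torch
import torch.nn as nn


class TemporalAttentionBlend(nn.Module):
    def __init__(self, channels: int, key_dim: int = 64):
        super().__init__()
        # 1x1 convolutions project per-frame features into query/key/value spaces.
        self.to_q = nn.Conv2d(channels, key_dim, 1)
        self.to_k = nn.Conv2d(channels, key_dim, 1)
        self.to_v = nn.Conv2d(channels, channels, 1)

    def forward(self, cur: torch.Tensor, refs: torch.Tensor) -> torch.Tensor:
        # cur:  (B, C, H, W)    features of the frame being parsed
        # refs: (B, T, C, H, W) features of T reference (e.g., past) frames
        b, t, c, h, w = refs.shape
        q = self.to_q(cur).flatten(2)                             # (B, D, HW)
        k = self.to_k(refs.flatten(0, 1)).view(b, t, -1, h * w)   # (B, T, D, HW)
        v = self.to_v(refs.flatten(0, 1)).view(b, t, c, h * w)    # (B, T, C, HW)
        k = k.permute(0, 2, 1, 3).flatten(2)                      # (B, D, T*HW)
        v = v.permute(0, 2, 1, 3).flatten(2)                      # (B, C, T*HW)
        # Attention from each current-frame location to all reference locations.
        attn = torch.softmax(
            q.transpose(1, 2) @ k / k.shape[1] ** 0.5, dim=-1)    # (B, HW, T*HW)
        ctx = (v @ attn.transpose(1, 2)).view(b, c, h, w)         # temporal context
        return cur + ctx                                          # residual blend


# Usage: blend the current frame's features with two reference frames.
tab = TemporalAttentionBlend(channels=256)
cur = torch.randn(1, 256, 30, 40)
refs = torch.randn(1, 2, 256, 30, 40)
blended = tab(cur, refs)  # (1, 256, 30, 40), fed to a segmentation head
```

The residual blend keeps the current frame's appearance features intact while letting attention pull in consistent semantics from earlier frames, which is one common way video parsing models exploit temporal redundancy.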