Tian Yuan, Lu Guo, Yan Yichao, Zhai Guangtao, Chen Li, Gao Zhiyong
IEEE Trans Pattern Anal Mach Intell. 2024 Aug;46(8):5852-5872. doi: 10.1109/TPAMI.2024.3367879. Epub 2024 Jul 2.
Video compression is indispensable to most video analysis systems. While it saves transmission bandwidth, it also degrades downstream video understanding tasks, especially at low-bitrate settings. To systematically investigate this problem, we first thoroughly review previous methods, revealing that three principles, i.e., task decoupling, label-free learning, and a data-emergent semantic prior, are critical to a machine-friendly coding framework but have not yet been fully satisfied. In this paper, we propose a traditional-neural mixed coding framework that simultaneously fulfills all of these principles by taking advantage of both traditional codecs and neural networks (NNs). On one hand, traditional codecs can efficiently encode the pixel signal of videos but may distort the semantic information. On the other hand, highly non-linear NNs are proficient at condensing video semantics into a compact representation. The framework is optimized to preserve, through the coding procedure, a transmission-efficient semantic representation of the video that is learned spontaneously from unlabeled data in a self-supervised manner. The videos collaboratively decoded from the two streams (codec and NN) are semantically rich as well as visually photo-realistic, empirically boosting performance on several mainstream downstream video analysis tasks without any post-adaptation procedure. Furthermore, by introducing an attention mechanism and an adaptive modeling scheme, the video semantic modeling ability of our approach is further enhanced. Finally, we build a low-bitrate video understanding benchmark with three downstream tasks on eight datasets, demonstrating the notable superiority of our approach. All code, data, and models will be open-sourced to facilitate future research.
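The two-stream idea described in the abstract can be sketched minimally in code. This is a hypothetical illustration, not the authors' implementation: the traditional codec branch is mocked by uniform pixel quantization, the NN semantic branch by simple block averaging, and all function names (`codec_encode`, `semantic_encode`, `collaborative_decode`) are assumptions made for this sketch.

```python
import random

def codec_encode(pixels, step=32):
    # Stand-in for a traditional codec: coarse uniform quantization
    # of the pixel signal (this is where low-bitrate distortion arises).
    return [p // step for p in pixels]

def codec_decode(q, step=32):
    # Reconstruct pixels at the quantization-bin centers.
    return [v * step + step // 2 for v in q]

def semantic_encode(pixels, dim=4):
    # Stand-in for the self-supervised NN branch: condense the frame
    # into a compact "semantic" vector (here, block averages).
    n = len(pixels) // dim
    return [sum(pixels[i * n:(i + 1) * n]) / n for i in range(dim)]

def collaborative_decode(q, sem, step=32):
    # Fuse both streams: a pixel reconstruction from the codec stream,
    # with the compact semantic vector carried alongside so downstream
    # analysis models can consume it without post-adaptation.
    return codec_decode(q, step), sem

random.seed(0)
frame = [random.randrange(256) for _ in range(16)]  # toy 16-pixel "frame"
q = codec_encode(frame)
sem = semantic_encode(frame)
pixels, sem_out = collaborative_decode(q, sem)
assert len(pixels) == len(frame) and len(sem_out) == 4
assert all(abs(p - r) <= 16 for p, r in zip(frame, pixels))  # bounded pixel error
```

The design point the sketch mirrors is that the two bitstreams are complementary: the codec stream carries a cheap, lossy pixel signal, while the compact semantic stream preserves task-relevant information that quantization would otherwise distort.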