Network-based H.264/AVC whole frame loss visibility model and frame dropping methods.

Publication Information

IEEE Trans Image Process. 2012 Aug;21(8):3353-63. doi: 10.1109/TIP.2012.2191567. Epub 2012 Mar 21.

Abstract

We examine the visual effect of whole frame loss under different decoders. Whole frame losses are introduced into H.264/AVC compressed videos, which are then decoded by two decoders that use different common concealment methods: frame copy and frame interpolation. The videos are shown to human observers, who respond to each glitch they spot. We found that about 39% of whole B frame losses are not observed by any of the subjects, and over 58% of the B frame losses are observed by 20% or fewer of the subjects. Using simple predictive features that can be calculated inside a network node, with no access to the original video and no pixel-level reconstruction of the frame, we developed models that can predict the visibility of whole B frame losses. These models are then used in a router to predict the visual impact of a frame loss and to perform intelligent frame dropping to relieve network congestion. Dropping frames based on their visual scores proves superior to random dropping of B frames.
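The abstract does not give the model's features or scoring function, so the following Python sketch is purely illustrative: it assumes a hypothetical per-frame visibility_score already produced by some network-side model, and shows the prioritized-dropping idea of shedding the least-visible B frames first under congestion instead of dropping B frames at random.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class BFramePacket:
    """A queued B frame annotated with a predicted loss-visibility score.
    The abstract does not specify the model's features or output scale;
    visibility_score is a hypothetical value in [0, 1] for illustration."""
    frame_id: int
    size_bytes: int
    visibility_score: float  # higher = the loss is more likely to be noticed


def drop_least_visible(queue: List[BFramePacket],
                       bytes_to_shed: int) -> List[BFramePacket]:
    """Illustrative congestion relief: drop the B frames whose loss is
    predicted to be least visible until enough bytes have been shed."""
    shed = 0
    kept: List[BFramePacket] = []
    # Walk the queue from the least visible loss to the most visible one.
    for pkt in sorted(queue, key=lambda p: p.visibility_score):
        if shed < bytes_to_shed:
            shed += pkt.size_bytes   # drop this frame
        else:
            kept.append(pkt)         # keep this frame
    # Restore the original transmission order of the surviving frames.
    kept.sort(key=lambda p: p.frame_id)
    return kept


if __name__ == "__main__":
    queue = [
        BFramePacket(1, 1500, 0.05),
        BFramePacket(2, 1600, 0.70),
        BFramePacket(3, 1400, 0.10),
        BFramePacket(4, 1550, 0.90),
    ]
    survivors = drop_least_visible(queue, bytes_to_shed=2500)
    print([p.frame_id for p in survivors])  # -> [2, 4]
```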
