

Enhancing 360 Video Streaming through Salient Content in Head-Mounted Displays

Affiliations

Department of Information Sciences and Technology, School of Computing, George Mason University, Fairfax, VA 22030, USA.

Publication Information

Sensors (Basel). 2023 Apr 15;23(8):4016. doi: 10.3390/s23084016.

DOI: 10.3390/s23084016
PMID: 37112356
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10143939/
Abstract

Predicting where users will look inside head-mounted displays (HMDs) and fetching only the relevant content is an effective approach for streaming bulky 360 videos over bandwidth-constrained networks. Despite previous efforts, anticipating users' fast and sudden head movements is still difficult because there is a lack of clear understanding of the unique visual attention in 360 videos that dictates the users' head movement in HMDs. This in turn reduces the effectiveness of streaming systems and degrades the users' Quality of Experience. To address this issue, we propose to extract salient cues unique in the 360 video content to capture the attentive behavior of HMD users. Empowered by the newly discovered saliency features, we devise a head-movement prediction algorithm to accurately predict users' head orientations in the near future. A 360 video streaming framework that takes full advantage of the head movement predictor is proposed to enhance the quality of delivered 360 videos. Practical trace-driven results show that the proposed saliency-based 360 video streaming system reduces the stall duration by 65% and the stall count by 46%, while saving 31% more bandwidth than state-of-the-art approaches.
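The pipeline the abstract describes — predict the user's near-future head orientation from past motion plus content saliency, then fetch only the tiles covering the predicted viewport — can be illustrated with a minimal Python sketch. This is a hedged illustration under simplifying assumptions (yaw-only motion, a single dominant salient region, a fixed blending weight); the function names, parameters, and blending rule are illustrative assumptions, not the paper's actual algorithm.

```python
def predict_head_yaw(past_yaws, saliency_yaw, alpha=0.7, horizon=1.0, dt=0.1):
    """Predict the user's yaw (degrees) `horizon` seconds ahead.

    Blends a linear extrapolation of the recent head trajectory with a
    pull toward the yaw of the most salient region, weighted by `alpha`
    (1.0 = trust head motion only, 0.0 = trust saliency only).
    """
    velocity = (past_yaws[-1] - past_yaws[0]) / (dt * (len(past_yaws) - 1))
    extrapolated = past_yaws[-1] + velocity * horizon
    # Wrap the angular difference into [-180, 180) before blending.
    diff = (saliency_yaw - extrapolated + 180.0) % 360.0 - 180.0
    return (extrapolated + (1.0 - alpha) * diff) % 360.0


def tiles_to_fetch(predicted_yaw, fov=100.0, n_tiles=12):
    """Select equirectangular yaw tiles overlapping the predicted viewport."""
    tile_width = 360.0 / n_tiles
    selected = []
    for i in range(n_tiles):
        tile_center = (i + 0.5) * tile_width
        offset = abs((tile_center - predicted_yaw + 180.0) % 360.0 - 180.0)
        if offset <= fov / 2.0 + tile_width / 2.0:
            selected.append(i)
    return selected
```

For example, a user panning right at 20°/s with a salient object at yaw 30° yields a prediction just short of the object; the streamer then requests only the few tiles returned by `tiles_to_fetch` at high quality, which is where the reported bandwidth savings come from.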


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5c16/10143939/f6a374804062/sensors-23-04016-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5c16/10143939/fe92616839fd/sensors-23-04016-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5c16/10143939/c0aa57a69114/sensors-23-04016-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5c16/10143939/5e1e57d73bd7/sensors-23-04016-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5c16/10143939/8d3552fe4f90/sensors-23-04016-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5c16/10143939/70beeefc3cbb/sensors-23-04016-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5c16/10143939/8df4dabd10c1/sensors-23-04016-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5c16/10143939/67c27519acf7/sensors-23-04016-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5c16/10143939/0726afe78a92/sensors-23-04016-g009.jpg

Similar Articles

1. Enhancing 360 Video Streaming through Salient Content in Head-Mounted Displays. Sensors (Basel). 2023 Apr 15;23(8):4016. doi: 10.3390/s23084016.
2. Gaze-Aware Streaming Solutions for the Next Generation of Mobile VR Experiences. IEEE Trans Vis Comput Graph. 2018 Apr;24(4):1535-1544. doi: 10.1109/TVCG.2018.2794119.
3. Graph Learning Based Head Movement Prediction for Interactive 360 Video Streaming. IEEE Trans Image Process. 2021;30:4622-4636. doi: 10.1109/TIP.2021.3073283. Epub 2021 May 3.
4. A Log-Rectilinear Transformation for Foveated 360-degree Video Streaming. IEEE Trans Vis Comput Graph. 2021 May;27(5):2638-2647. doi: 10.1109/TVCG.2021.3067762. Epub 2021 Apr 19.
5. Monitoring with head-mounted displays: performance and safety in a full-scale simulator and part-task trainer. Anesth Analg. 2009 Oct;109(4):1135-46. doi: 10.1213/ANE.0b013e3181b5a200.
6. Predicting Popularity of Video Streaming Services with Representation Learning: A Survey and a Real-World Case Study. Sensors (Basel). 2021 Nov 3;21(21):7328. doi: 10.3390/s21217328.
7. BCI to Potentially Enhance Streaming Images to a VR Headset by Predicting Head Rotation. Front Hum Neurosci. 2018 Oct 16;12:420. doi: 10.3389/fnhum.2018.00420. eCollection 2018.
8. Find who to look at: Turning from action to saliency. IEEE Trans Image Process. 2018 Sep;27(9):4529-4544. doi: 10.1109/TIP.2018.2837106. Epub 2018 May 16.
9. LiveObj: Object Semantics-based Viewport Prediction for Live Mobile Virtual Reality Streaming. IEEE Trans Vis Comput Graph. 2021 May;27(5):2736-2745. doi: 10.1109/TVCG.2021.3067686. Epub 2021 Apr 15.
10. Modeling the time-varying subjective quality of HTTP video streams with rate adaptations. IEEE Trans Image Process. 2014 May;23(5):2206-21. doi: 10.1109/TIP.2014.2312613.

References Cited in This Article

1. The (In)effectiveness of Attention Guidance Methods for Enhancing Brand Memory in 360° Video. Sensors (Basel). 2022 Nov 15;22(22):8809. doi: 10.3390/s22228809.
2. Dissecting Latency in 360° Video Camera Sensing Systems. Sensors (Basel). 2022 Aug 11;22(16):6001. doi: 10.3390/s22166001.
3. DisCaaS: Micro Behavior Analysis on Discussion by Camera as a Sensor. Sensors (Basel). 2021 Aug 25;21(17):5719. doi: 10.3390/s21175719.
4. Prediction of Head Movement in 360-Degree Videos Using Attention Model. Sensors (Basel). 2021 May 25;21(11):3678. doi: 10.3390/s21113678.
5. Graph Learning Based Head Movement Prediction for Interactive 360 Video Streaming. IEEE Trans Image Process. 2021;30:4622-4636. doi: 10.1109/TIP.2021.3073283. Epub 2021 May 3.
6. Virtual Reality with 360-Video Storytelling in Cultural Heritage: Study of Presence, Engagement, and Immersion. Sensors (Basel). 2020 Oct 16;20(20):5851. doi: 10.3390/s20205851.
7. Automatic 360° Mono-Stereo Panorama Generation Using a Cost-Effective Multi-Camera System. Sensors (Basel). 2020 May 30;20(11):3097. doi: 10.3390/s20113097.
8. 3DoF+ 360 Video Location-Based Asymmetric Down-Sampling for View Synthesis to Immersive VR Video Streaming. Sensors (Basel). 2018 Sep 18;18(9):3148. doi: 10.3390/s18093148.
9. Predicting Head Movement in Panoramic Video: A Deep Reinforcement Learning Approach. IEEE Trans Pattern Anal Mach Intell. 2019 Nov;41(11):2693-2708. doi: 10.1109/TPAMI.2018.2858783. Epub 2018 Jul 24.
10. Saliency in VR: How Do People Explore Virtual Environments? IEEE Trans Vis Comput Graph. 2018 Apr;24(4):1633-1642. doi: 10.1109/TVCG.2018.2793599.