

Continuous Prediction of Web User Visual Attention on Short Span Windows Based on Gaze Data Analytics.

Affiliations

Department of Industrial Engineering, University of Chile, Santiago 8370456, Chile.

Engineering Complex Systems Institute, Santiago 8370398, Chile.

Publication

Sensors (Basel). 2023 Feb 18;23(4):2294. doi: 10.3390/s23042294.

DOI: 10.3390/s23042294
PMID: 36850892
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9960063/
Abstract

Understanding users' visual attention on websites is paramount to enhance the browsing experience, such as providing emergent information or dynamically adapting Web interfaces. Existing approaches to accomplish these challenges are generally based on the computation of salience maps of static Web interfaces, while websites increasingly become more dynamic and interactive. This paper proposes a method and provides a proof-of-concept to predict user's visual attention on specific regions of a website with dynamic components. This method predicts the regions of a user's visual attention without requiring a constant recording of the current layout of the website, but rather by knowing the structure it presented in a past period. To address this challenge, the concept of visit intention is introduced in this paper, defined as the probability that a user, while browsing, will fixate their gaze on a specific region of the website in the next period. Our approach uses the gaze patterns of a population that browsed a specific website, captured via an eye-tracker device, to aid personalized prediction models built with individual visual kinetics features. We show experimentally that it is possible to conduct such a prediction through multilabel classification models using a small number of users, obtaining an average area under curve of 84.3%, and an average accuracy of 79%. Furthermore, the user's visual kinetics features are consistently selected in every set of a cross-validation evaluation.
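The multilabel setup the abstract describes — per-window gaze features mapped to the probability that the user fixates each page region in the next period — can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the feature names, region count, synthetic data, and random-forest base model are all assumptions for demonstration.

```python
# Illustrative sketch of visit-intention prediction as multilabel
# classification (NOT the paper's actual features, data, or model).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)
n_windows, n_feats, n_regions = 600, 4, 5  # short-span windows, page regions

# Hypothetical visual-kinetics features per window, e.g. fixation count,
# mean fixation duration, saccade velocity, gaze dispersion.
X = rng.normal(size=(n_windows, n_feats))

# Multilabel target: does the user fixate each region in the next window?
# Synthetic labels loosely coupled to the features so the model has signal.
logits = X @ rng.normal(size=(n_feats, n_regions))
Y = (logits + rng.normal(scale=0.5, size=logits.shape) > 0).astype(int)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)
clf = MultiOutputClassifier(
    RandomForestClassifier(n_estimators=100, random_state=0)
)
clf.fit(X_tr, Y_tr)

# One probability column per region; macro-average AUC across regions,
# and exact-match (subset) accuracy over all region labels at once.
proba = np.column_stack([p[:, 1] for p in clf.predict_proba(X_te)])
auc = roc_auc_score(Y_te, proba, average="macro")
acc = accuracy_score(Y_te, clf.predict(X_te))
print(f"macro AUC: {auc:.3f}, subset accuracy: {acc:.3f}")
```

In this framing each page region is an independent binary label, which is what lets the paper report a per-region area under the curve averaged over regions.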

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/36d8/9960063/f83e48831202/sensors-23-02294-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/36d8/9960063/d97afa017131/sensors-23-02294-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/36d8/9960063/7325d152fbfe/sensors-23-02294-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/36d8/9960063/291cd4a8f20a/sensors-23-02294-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/36d8/9960063/5b52c14f8ce0/sensors-23-02294-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/36d8/9960063/12c7164f0d82/sensors-23-02294-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/36d8/9960063/373e45b31530/sensors-23-02294-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/36d8/9960063/b3546618aa00/sensors-23-02294-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/36d8/9960063/e0c142dccc5a/sensors-23-02294-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/36d8/9960063/10e1faea4c66/sensors-23-02294-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/36d8/9960063/964e7d533c86/sensors-23-02294-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/36d8/9960063/d355e8eb040a/sensors-23-02294-g012.jpg

Similar Articles

1. Continuous Prediction of Web User Visual Attention on Short Span Windows Based on Gaze Data Analytics. Sensors (Basel). 2023 Feb 18;23(4):2294. doi: 10.3390/s23042294.
2. Automatic Classification of Users' Health Information Need Context: Logistic Regression Analysis of Mouse-Click and Eye-Tracker Data. J Med Internet Res. 2017 Dec 21;19(12):e424. doi: 10.2196/jmir.8354.
3. Design and application of real-time visual attention model for the exploration of 3D virtual environments. IEEE Trans Vis Comput Graph. 2012 Mar;18(3):356-68. doi: 10.1109/TVCG.2011.154.
4. Attention-Aware Visualization: Tracking and Responding to User Perception Over Time. IEEE Trans Vis Comput Graph. 2025 Jan;31(1):1017-1027. doi: 10.1109/TVCG.2024.3456300. Epub 2024 Nov 25.
5. Auto-Suggesting Browsing Actions for Personalized Web Screen Reading. UMAP Proc Conf User Model Adapt Personal. 2019 Jun;2019:252-260. doi: 10.1145/3320435.3320460. Epub 2019 Jun 7.
6. Gaze-Assisted User Intention Prediction for Initial Delay Reduction in Web Video Access. Sensors (Basel). 2015 Jun 19;15(6):14679-700. doi: 10.3390/s150614679.
7. Methodological Guidelines for Systematic Assessments of Health Care Websites Using Web Analytics: Tutorial. J Med Internet Res. 2022 Apr 15;24(4):e28291. doi: 10.2196/28291.
8. DGaze: CNN-Based Gaze Prediction in Dynamic Scenes. IEEE Trans Vis Comput Graph. 2020 May;26(5):1902-1911. doi: 10.1109/TVCG.2020.2973473. Epub 2020 Feb 13.
9. A Newly Developed Web-Based Resource on Genetic Eye Disorders for Users With Visual Impairment (Gene.Vision): Usability Study. J Med Internet Res. 2021 Jan 20;23(1):e19151. doi: 10.2196/19151.
10. The effect of color coding and layout coding on users' visual search on mobile map navigation icons. Front Psychol. 2022 Dec 13;13:1040533. doi: 10.3389/fpsyg.2022.1040533. eCollection 2022.
