


Classification of Internal and External Distractions in an Educational VR Environment Using Multimodal Features.

Publication Info

IEEE Trans Vis Comput Graph. 2024 Nov;30(11):7332-7342. doi: 10.1109/TVCG.2024.3456207. Epub 2024 Oct 11.

DOI: 10.1109/TVCG.2024.3456207
PMID: 39255100
Abstract

Virtual reality (VR) can potentially enhance student engagement and memory retention in the classroom. However, distraction among participants in a VR-based classroom is a significant concern. Several factors, including mind wandering, external noise, and stress, can cause students to become internally and/or externally distracted while learning. To detect distractions, single or multi-modal features can be used. A single modality has been found insufficient to detect both internal and external distractions, mainly because of individual variability. In this work, we investigated multi-modal features, eye tracking and EEG data, to classify internal and external distractions in an educational VR environment. We set up our educational VR environment and equipped it for multi-modal data collection. We implemented different machine learning (ML) methods, including k-nearest neighbors (kNN), Random Forest (RF), a one-dimensional convolutional neural network with long short-term memory (1D-CNN-LSTM), and a two-dimensional convolutional neural network (2D-CNN), to classify participants' internal and external distraction states using the multi-modal features. We performed cross-subject, cross-session, and gender-based grouping tests to evaluate our models. We found that the RF classifier achieves the highest accuracy among the models: over 83% in the cross-subject test, around 68% to 78% in the cross-session test, and around 90% in the gender-based grouping test. SHAP analysis of the extracted features showed greater contributions from the occipital and prefrontal regions of the brain, as well as from the gaze angle, gaze origin, and head rotation features of the eye tracking data.
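The pipeline the abstract describes — concatenating eye-tracking and EEG feature vectors, training a Random Forest, and evaluating it cross-subject on held-out participants — can be sketched as below. This is an illustrative reconstruction, not the authors' code: the feature counts, trial counts, and synthetic data are hypothetical, and built-in Random Forest feature importances stand in for the paper's SHAP analysis as a simpler proxy.

```python
# Illustrative sketch (assumed, not the paper's code): classifying internal vs.
# external distraction from concatenated eye-tracking + EEG features with a
# Random Forest, evaluated with a cross-subject (held-out participants) split.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

n_subjects, trials_per_subject = 10, 40   # hypothetical study size
n_eye, n_eeg = 8, 32                      # hypothetical feature counts
                                          # (gaze angle/origin, head rotation; EEG band powers)

# Synthetic multimodal features: one row per trial, columns = [eye | EEG]
X = rng.normal(size=(n_subjects * trials_per_subject, n_eye + n_eeg))
y = rng.integers(0, 2, size=n_subjects * trials_per_subject)  # 0 = internal, 1 = external
subject = np.repeat(np.arange(n_subjects), trials_per_subject)

# Cross-subject split: train on subjects 0-7, test on unseen subjects 8-9,
# so the model is never evaluated on a participant it was trained on.
train, test = subject < 8, subject >= 8
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[train], y[train])
acc = clf.score(X[test], y[test])
print(f"cross-subject accuracy on synthetic data: {acc:.2f}")

# Crude per-modality contribution, as a stand-in for SHAP attribution
# (the actual study used the separate `shap` library).
eye_imp = clf.feature_importances_[:n_eye].sum()
eeg_imp = clf.feature_importances_[n_eye:].sum()
print(f"eye-tracking importance: {eye_imp:.2f}, EEG importance: {eeg_imp:.2f}")
```

On real data one would replace the synthetic arrays with the extracted per-trial features and iterate the held-out group over all subjects (e.g. scikit-learn's `GroupKFold`) to get the averaged cross-subject accuracy the abstract reports.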


Similar Articles

1. Classification of Internal and External Distractions in an Educational VR Environment Using Multimodal Features.
IEEE Trans Vis Comput Graph. 2024 Nov;30(11):7332-7342. doi: 10.1109/TVCG.2024.3456207. Epub 2024 Oct 11.

2. Classification of EEG evoked in 2D and 3D virtual reality: traditional machine learning versus deep learning.
Biomed Phys Eng Express. 2024 Nov 5;11(1). doi: 10.1088/2057-1976/ad89c5.

3. Evaluation of Machine Learning Algorithms for Classification of Visual Stimulation-Induced EEG Signals in 2D and 3D VR Videos.
Brain Sci. 2025 Jan 16;15(1):75. doi: 10.3390/brainsci15010075.

4. Virtual reality-assisted prediction of adult ADHD based on eye tracking, EEG, actigraphy and behavioral indices: a machine learning analysis of independent training and test samples.
Transl Psychiatry. 2024 Dec 31;14(1):508. doi: 10.1038/s41398-024-03217-y.

5. Machine learning based classification of presence utilizing psychophysiological signals in immersive virtual environments.
Sci Rep. 2024 Sep 17;14(1):21667. doi: 10.1038/s41598-024-72376-1.

6. Enhanced electroencephalogram signal classification: A hybrid convolutional neural network with attention-based feature selection.
Brain Res. 2025 Mar 15;1851:149484. doi: 10.1016/j.brainres.2025.149484. Epub 2025 Feb 2.

7. Explainable feature selection and deep learning based emotion recognition in virtual reality using eye tracker and physiological data.
Front Med (Lausanne). 2024 Sep 12;11:1438720. doi: 10.3389/fmed.2024.1438720. eCollection 2024.

8. Influence of Distraction Factors on Performance in Laparoscopic Surgery in Immersive Virtual Reality: Study Protocol of a Cross-Over Trial in Medical Students and Residents-DisLapVR.
JMIR Res Protoc. 2024 Nov 5;13:e59014. doi: 10.2196/59014.

9. Convolutional long short-term memory neural network integrated with classifier in classifying type of asynchrony breathing in mechanically ventilated patients.
Comput Methods Programs Biomed. 2025 May;263:108680. doi: 10.1016/j.cmpb.2025.108680. Epub 2025 Feb 19.

10. Neurological Evidence of Diverse Self-Help Breathing Training With Virtual Reality and Biofeedback Assistance: Extensive Exploration Study of Electroencephalography Markers.
JMIR Form Res. 2024 Dec 6;8:e55478. doi: 10.2196/55478.