Altuwairqi Khawlah, Jarraya Salma Kammoun, Allinjawi Arwa, Hammami Mohamed
Department of Computer Science, King Abdulaziz University, Jeddah, Saudi Arabia.
MIRACL-Laboratory, Sfax, Tunisia.
Signal Image Video Process. 2021;15(7):1387-1395. doi: 10.1007/s11760-021-01869-7. Epub 2021 May 14.
After the COVID-19 pandemic, few would dispute the importance of smart online learning systems in the educational process. Measuring student engagement is a crucial step towards such systems: a smart online learning system can automatically adapt to learners' emotions and provide feedback about their motivations. Over the last few decades, online learning environments have generated tremendous interest among researchers in computer-based education. The challenge researchers face is how to measure student engagement based on students' emotions. There has been increasing interest in computer vision and camera-based solutions as technology that overcomes the limits of both human observation and the expensive equipment otherwise used to measure student engagement. Several solutions have been proposed to measure student engagement, but few are behavior-based approaches. In response to these issues, in this paper we propose a new automatic multimodal approach to measure student engagement levels in real time. To offer robust and accurate engagement measures, we combine and analyze three modalities representing students' behaviors: emotions from facial expressions, keyboard keystrokes, and mouse movements. The solution operates in real time, provides the exact level of engagement, and uses the least expensive equipment possible. We validate the proposed multimodal approach through three main experiments, covering single-modal, dual-modal, and multimodal configurations, on novel engagement datasets that we built specifically to be new and realistic. The multimodal approach records the highest accuracy (95.23%) and the lowest mean squared error (MSE) of 0.04.
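The abstract describes fusing three behavioral modalities (facial-expression emotions, keystrokes, and mouse movements) into a single engagement measure, but does not specify the fusion scheme. The sketch below is a hypothetical weighted late-fusion illustration only; the modality weights, score ranges, and level thresholds are assumptions, not the paper's method.

```python
# Hypothetical late-fusion sketch. The paper combines three behavioral
# modalities; the weights and thresholds below are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class ModalityScores:
    face: float      # engagement score from facial-expression emotions, in [0, 1]
    keyboard: float  # engagement score from keystroke activity, in [0, 1]
    mouse: float     # engagement score from mouse movements, in [0, 1]


def fuse_engagement(s: ModalityScores,
                    w_face: float = 0.5,
                    w_keyboard: float = 0.25,
                    w_mouse: float = 0.25) -> float:
    """Weighted late fusion of the three per-modality scores (normalized)."""
    total = w_face + w_keyboard + w_mouse
    return (w_face * s.face + w_keyboard * s.keyboard + w_mouse * s.mouse) / total


def engagement_level(score: float) -> str:
    """Map a fused score to a coarse engagement level (illustrative thresholds)."""
    if score >= 0.66:
        return "high"
    if score >= 0.33:
        return "medium"
    return "low"
```

For example, a student showing strong facial engagement but moderate keyboard and mouse activity would receive a fused score between the individual modality scores, weighted toward the facial channel.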