Institute of Computing (IComp), Federal University of Amazonas (UFAM), Avenida Rodrigo Otávio 6200, Manaus 69067-005, Brazil.
Department of Informatics, University of California Irvine (UCI), Irvine, CA 92697, USA.
Sensors (Basel). 2021 May 17;21(10):3480. doi: 10.3390/s21103480.
The success of a software application depends on users' willingness to keep using it. As such, evaluating User eXperience (UX) has become an important part of the software development process. Researchers have carried out studies employing various methods to evaluate the UX of software products. Some of these studies reported varied and even contradictory results when applying different UX evaluation methods, making it difficult for practitioners to identify which results to rely upon. However, these works did not evaluate developers' perspectives or their impact on the decision process. Moreover, such studies focused on one-shot evaluations, which cannot assess whether the methods provide the same big picture of the experience (i.e., deteriorating, improving, or stable). This paper presents a longitudinal study in which 68 students evaluated the UX of an online judge system by employing the AttrakDiff, UEQ, and Sentence Completion methods at three points during a semester. This study reveals contrasting results between the methods, which affected developers' decisions and interpretations. With this work, we intend to draw the HCI community's attention to the contrast between different UX evaluation methods and the impact of their outcomes on the software development process.