Hahnel Carolin, Eichmann Beate, Goldhammer Frank
DIPF | Leibniz Institute for Research and Information in Education, Frankfurt, Germany.
Centre for International Student Assessment (ZIB), Frankfurt, Germany.
Front Psychol. 2020 Dec 16;11:562128. doi: 10.3389/fpsyg.2020.562128. eCollection 2020.
As Internet sources provide information of varying quality, the ability to evaluate the relevance and credibility of online information is an indispensable prerequisite skill. Based on the assumption that competent individuals can use different properties of information to assess its relevance and credibility, we developed the EVON (evaluation of online information), an interactive computer-based test for university students. The developed instrument consists of eight items that assess the skill to evaluate online information in six languages. Within a simulated search engine environment, students are requested to select the most relevant and credible link for a respective task. To evaluate the developed instrument, we conducted two studies: (1) a pre-study for quality assurance and observing the response process (cognitive interviews of n = 8 students) and (2) a main study aimed at investigating the psychometric properties of the EVON and its relation to other variables (n = 152 students). The results of the pre-study provided first evidence for a theoretically sound test construction with regard to students' item processing behavior. The results of the main study showed acceptable psychometric outcomes for a standardized screening instrument with a small number of items. The item design criteria affected item difficulty as intended, and students' choice to visit a website had an impact on their task success. Furthermore, the probability of task success was positively predicted by general cognitive performance and reading skill. Although the results uncovered a few weaknesses (e.g., a lack of difficult items), and efforts to validate the interpretation of EVON outcomes must continue, the overall results speak in favor of a successful test construction and provide a first indication that the EVON assesses students' skill in evaluating online information in search engine environments.