Urbina Jesica, Monks Stormy M.
Texas Tech University Health Sciences Center El Paso
Texas Tech University Health Sciences Center
Healthcare simulation is a growing field that combines innovative technologies with adult learning theory to train medical professionals in clinical skills and practices in a reproducible way. A wide range of tools is available to assess learners on taught skills and knowledge, and stakeholders have a strong interest in validating these tools. Reliable quantitative assessment is critical for high-stakes certification, such as licensure and board examinations. Evaluation in healthcare simulation spans many activities, from educating new learners and training current professionals to systematically reviewing programs to improve outcomes.

Validation of assessment tools is essential to ensure that they are both valid and reliable. Validity refers to whether a measuring instrument measures what it is intended to measure. Reliability, a component of the overall validity assessment, refers to the consistency and reproducibility of an assessment tool's results: the tool should yield the same results for the same type of learner every time it is used. In practice, healthcare delivery requires technical, analytical, and interpersonal skills, so assessment systems must be comprehensive, valid, and reliable enough to assess these elements alongside critical knowledge and skills. Validating assessment tools for healthcare simulation education ensures that learners can demonstrate the integration of knowledge and skills in a realistic setting. The assessment process itself also shapes curriculum development, feedback, and learning. Recent developments in psychometric theory and standard setting have proven effective for assessing professionalism, communication, and procedural and clinical skills. Ideally, simulation developers should reflect on the purpose of the simulation to determine whether the focus will be on teaching or on learning.
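To make the reliability concept above concrete, one common quantitative check is Cronbach's alpha, an internal-consistency coefficient: if the items of a checklist consistently measure the same underlying skill, alpha approaches 1. The sketch below is a minimal illustration only; the 5-item OSCE checklist and the learner scores are hypothetical, not drawn from any study in this article.

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for internal-consistency reliability.

    scores: one row per examinee, one column per checklist item.
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(totals))
    """
    k = len(scores[0])                       # number of items
    items = list(zip(*scores))               # transpose: one tuple per item
    item_var_sum = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical 5-item OSCE checklist (1-5 scale) scored for 6 learners
ratings = [
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 3, 2],
    [4, 4, 5, 4, 4],
    [3, 4, 3, 3, 3],
]
print(round(cronbach_alpha(ratings), 2))  # prints 0.95
```

A value this high suggests the items behave consistently for these learners; values are conventionally interpreted against thresholds (e.g., 0.7 or higher for acceptable reliability), though the appropriate cutoff depends on the stakes of the assessment.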
If the focus is on teaching, then assessments should center on performance criteria with exercises for a set of skill-based experiences; this assesses the teaching method's effectiveness in task training. Alternatively, if the focus of the simulation is higher-order learning, then the assessment should be designed to measure multiple integrated abilities such as factual understanding, problem-solving, analysis, and synthesis. In general, multiple assessment methods are necessary to capture all relevant aspects of clinical competency. For higher-order cognitive assessment (knowledge, application, and synthesis of knowledge), context-based multiple-choice questions (MCQs), extended matching items, and short answer questions are appropriate. For demonstrating skills mastery, a multi-station objective structured clinical examination (OSCE) is viable. Performance-based assessments such as the Mini-Clinical Evaluation Exercise (mini-CEX) and Direct Observation of Procedural Skills (DOPS) are appropriate and can positively influence learner comprehension. For the advanced professional continuing learner, clinical work sampling and a portfolio or logbook may be used.

In an assessment, developers select an assessment instrument with known characteristics; a wide range of tools is currently available for assessing knowledge, application, and performance. The assessment materials are then built around the learning objectives, and the developers directly control all aspects of delivery and assessment. The content should relate to the learning objectives, and the test should be comprehensive enough to produce reliable scores. This ensures that performance is wholly attributable to the learner and not an artifact of curriculum planning or execution. Additionally, different versions of the assessment that are comparable in difficulty will permit comparisons among examinees and against standards.
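One simple way to check that two versions of an assessment are comparable in difficulty is the classical item difficulty index P, the proportion of examinees who answer an item correctly. The sketch below assumes dichotomously scored MCQ responses (1 = correct, 0 = incorrect); the two exam forms and the response data are hypothetical examples, not taken from this article.

```python
def item_difficulty(item_responses):
    """Classical item difficulty index P: the proportion of
    examinees who answered the item correctly."""
    return sum(item_responses) / len(item_responses)

# Hypothetical responses to the same blueprint item on two exam forms
form_a = [1, 1, 0, 1, 1, 0, 1, 1]
form_b = [1, 0, 1, 1, 0, 1, 1, 1]

# Similar P values across forms support comparing examinee scores
print(item_difficulty(form_a), item_difficulty(form_b))  # prints 0.75 0.75
```

In practice, developers would compare P values item by item (and often discrimination indices as well) across forms before treating scores from different versions as interchangeable.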
Learner assessment is a wide-ranging decision-making process with implications beyond student achievement alone. It is also related to program evaluation and provides important information to determine program effectiveness. Valid and reliable assessments satisfy accreditation needs and contribute to student learning.