

Similar Articles

1. Validating Assessment Tools in Simulation
2. The effectiveness of internet-based e-learning on clinician behavior and patient outcomes: a systematic review protocol.
JBI Database System Rev Implement Rep. 2015 Jan;13(1):52-64. doi: 10.11124/jbisrir-2015-1919.
3. Student and educator experiences of maternal-child simulation-based learning: a systematic review of qualitative evidence protocol.
JBI Database System Rev Implement Rep. 2015 Jan;13(1):14-26. doi: 10.11124/jbisrir-2015-1694.
4. The future of Cochrane Neonatal.
Early Hum Dev. 2020 Nov;150:105191. doi: 10.1016/j.earlhumdev.2020.105191. Epub 2020 Sep 12.
5. Health professionals' experience of teamwork education in acute hospital settings: a systematic review of qualitative literature.
JBI Database System Rev Implement Rep. 2016 Apr;14(4):96-137. doi: 10.11124/JBISRIR-2016-1843.
6. Exploring conceptual and theoretical frameworks for nurse practitioner education: a scoping review protocol.
JBI Database System Rev Implement Rep. 2015 Oct;13(10):146-55. doi: 10.11124/jbisrir-2015-2150.
7. Pilot Medical Certification
9. Folic acid supplementation and malaria susceptibility and severity among people taking antifolate antimalarial drugs in endemic areas.
Cochrane Database Syst Rev. 2022 Feb 1;2(2022):CD014217. doi: 10.1002/14651858.CD014217.
10. The effectiveness of using non-traditional teaching methods to prepare student health care professionals for the delivery of mental state examination: a systematic review.
JBI Database System Rev Implement Rep. 2015 Aug 14;13(7):177-212. doi: 10.11124/jbisrir-2015-2263.


Validating Assessment Tools in Simulation

Authors

Urbina Jesica, Monks Stormy M.

Affiliations

Texas Tech University Health Sciences Center El Paso

Texas Tech University Health Sciences Center

PMID: 32809366

Abstract

Health care simulation is a growing field that combines innovative technologies and adult learning theory to reproducibly train medical professionals in clinical skills and practices. A wide range of assessment tools is available to assess learners on taught skills and knowledge, and there is stakeholder interest in validating these tools. Reliable quantitative assessment is critical for high-stakes certification, such as licensing and board examinations. Evaluation in healthcare simulation spans many purposes, ranging from educating new learners and training current professionals to systematic review of programs to improve outcomes. These assessment tools must be validated to establish that they are both valid and reliable. Validity refers to whether a measuring instrument measures what it is intended to measure. Reliability, a component of the validity assessment, refers to the consistency or reproducibility of an assessment tool's results: the tool should yield the same results for the same type of learner every time it is used. In practice, healthcare delivery requires technical, analytical, and interpersonal skills, so assessment systems must be comprehensive, valid, and reliable enough to assess these elements alongside critical knowledge and skills. Validating assessment tools for healthcare simulation education ensures that learners can demonstrate the integration of knowledge and skills in a realistic setting. The assessment process itself influences curriculum development, as well as feedback and learning. Recent developments in psychometric theory and standard setting have proven effective for assessing professionalism, communication, and procedural and clinical skills. Ideally, simulation developers should reflect on the purpose of the simulation to determine whether the focus will be on teaching or learning.

If the focus is on teaching, then assessments should focus on performance criteria, with exercises built around a set of skill-based experiences; this assesses the teaching method's effectiveness in task training. Alternatively, if the focus of the simulation is to determine higher-order learning, then the assessment should be designed to measure multiple integrated abilities such as factual understanding, problem-solving, analysis, and synthesis. In general, multiple assessment methods are necessary to capture all relevant aspects of clinical competency. For higher-order cognitive assessment (knowledge, application, and synthesis of knowledge), context-based multiple-choice questions (MCQ), extended matching items, and short answer questions are appropriate. For the demonstration of skills mastery, a multi-station objective structured clinical examination (OSCE) is suitable. Performance-based assessments such as the Mini-Clinical Evaluation Exercise (mini-CEX) and Direct Observation of Procedural Skills (DOPS) can positively affect learner comprehension. Alternatively, for the advanced professional continuing learner, clinical work sampling and a portfolio or logbook may be used. In an assessment, the developers select an assessment instrument with known characteristics; a wide range of tools is currently available for assessing knowledge, application, and performance. The assessment materials are then created around learning objectives, and the developers directly control all aspects of delivery and assessment. The content should relate to the learning objectives, and the test should be comprehensive enough to produce reliable scores. This ensures that the performance is wholly attributable to the learner, not an artifact of curriculum planning or execution. Additionally, different versions of the assessment that are comparable in difficulty will permit comparisons among examinees and against standards.

Learner assessment is a wide-ranging decision-making process with implications beyond student achievement alone. It is also related to program evaluation and provides important information for determining program effectiveness. Valid and reliable assessments satisfy accreditation needs and contribute to student learning.
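The reliability described above (consistent, reproducible scores across uses of a tool) is commonly quantified with an internal-consistency statistic such as Cronbach's alpha. The abstract names no particular statistic or dataset, so the following is only a minimal sketch: the OSCE station scores below are invented, and a simple examinees-by-items score matrix is assumed.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical checklist ratings: 6 examinees x 4 OSCE stations (1-5 scale)
ratings = np.array([
    [4, 5, 4, 5],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 4, 4],
    [3, 3, 3, 4],
])
print(cronbach_alpha(ratings))  # ~0.96 for this sample data
```

Values near 1 indicate that the stations rank examinees consistently; in practice, validation studies of such tools typically also report inter-rater agreement statistics rather than internal consistency alone.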
