

Validity Evidence and Scoring Guidelines for Standardized Patient Encounters and Patient Notes From a Multisite Study of Clinical Performance Examinations in Seven Medical Schools.

Author Information

Park Yoon Soo, Hyderi Abbas, Heine Nancy, May Win, Nevins Andrew, Lee Ming, Bordage Georges, Yudkowsky Rachel

Affiliations

Y.S. Park is associate professor, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0001-8583-4335. A. Hyderi is associate dean for curriculum and associate professor, Department of Family Medicine, University of Illinois at Chicago College of Medicine, Chicago, Illinois. N. Heine is assistant professor, Department of Medical Education and Department of Medicine, and director, Clinical Skills Education Center, Loma Linda University School of Medicine, Loma Linda, California; ORCID: http://orcid.org/0000-0001-6812-9079. W. May is professor, Department of Medical Education, and director, Clinical Skills Education and Evaluation Center, Keck School of Medicine of the University of Southern California, Los Angeles, California. A. Nevins is clinical associate professor, Department of Medicine, Stanford University School of Medicine, Palo Alto, California. M. Lee is professor of medical education, University of California, Los Angeles David Geffen School of Medicine, Los Angeles, California. G. Bordage is professor, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois. R. Yudkowsky is director, Graham Clinical Performance Center, and professor, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0002-2145-7582.

Publication Information

Acad Med. 2017 Nov;92(11S Association of American Medical Colleges Learn Serve Lead: Proceedings of the 56th Annual Research in Medical Education Sessions):S12-S20. doi: 10.1097/ACM.0000000000001918.

DOI: 10.1097/ACM.0000000000001918
PMID: 29065018
Abstract

PURPOSE

To examine validity evidence of local graduation competency examination scores from seven medical schools using shared cases and to provide rater training protocols and guidelines for scoring patient notes (PNs).

METHOD

Between May and August 2016, clinical cases were developed, shared, and administered across seven medical schools (990 students participated). Raters were calibrated using training protocols, and guidelines were developed collaboratively across sites to standardize scoring. Data included scores from standardized patient encounters for history taking, physical examination, and PNs. Descriptive statistics were used to examine scores from the different assessment components. Generalizability studies (G-studies) using variance components were conducted to estimate reliability for composite scores.
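As an aside, the generalizability (G) coefficient that such G-studies estimate can be sketched in a few lines. The variance components and case counts below are hypothetical illustrations, not values from this study, and the function name is ours:

```python
# Hypothetical illustration of a relative G-coefficient for a simple
# person-by-case (p x c) design. The variance components are invented
# for illustration; they are NOT taken from the study.
def g_coefficient(var_person, var_interaction_error, n_cases):
    """Relative G-coefficient: E(rho^2) = s2_p / (s2_p + s2_pc,e / n_c),
    where s2_p is true person variance and s2_pc,e is the confounded
    person-by-case interaction/error variance."""
    return var_person / (var_person + var_interaction_error / n_cases)

# Averaging over more cases shrinks the error term, so reliability rises.
print(round(g_coefficient(0.04, 0.16, 4), 3))   # → 0.5
print(round(g_coefficient(0.04, 0.16, 8), 3))   # → 0.667
```

The design choice mirrors the abstract's point: person (student) variance is the "signal," while case- and task-related variance acts as noise that more cases average away.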

RESULTS

Validity evidence was collected for response process (rater perception), internal structure (variance components, reliability), relations to other variables (interassessment correlations), and consequences (composite score). Student performance varied by case and task. In the PNs, justification of differential diagnosis was the most discriminating task. G-studies showed that schools accounted for less than 1% of total variance; however, for the PNs, there were differences in scores for varying cases and tasks across schools, indicating a school effect. Composite score reliability was maximized when the PN was weighted between 30% and 40%. Raters preferred using case-specific scoring guidelines with clear point-scoring systems.
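The finding that composite reliability peaks at an intermediate PN weight can be illustrated with a minimal sketch. The component reliabilities and correlation below are invented for illustration (they are not the study's estimates), and the optimum they produce will not match the paper's 30%–40% figure:

```python
# Hypothetical sketch: reliability of a weighted composite of two
# standardized components (patient note vs. SP encounter). Component
# reliabilities and their observed correlation are invented values.
def composite_reliability(w_pn, rel_pn, rel_sp, corr_obs):
    """Composite reliability = true-score variance / observed variance,
    assuming unit-variance components with uncorrelated errors (so the
    cross-component covariance is entirely true-score covariance)."""
    w_sp = 1.0 - w_pn
    true_var = w_pn**2 * rel_pn + w_sp**2 * rel_sp + 2 * w_pn * w_sp * corr_obs
    obs_var = w_pn**2 + w_sp**2 + 2 * w_pn * w_sp * corr_obs
    return true_var / obs_var

# Grid-search the PN weight; under these assumptions the maximum falls
# strictly between the two single-component extremes.
best = max((w / 100 for w in range(101)),
           key=lambda w: composite_reliability(w, 0.55, 0.75, 0.30))
print(best)
```

The interior maximum arises because a less reliable component can still add unique true-score variance, so down-weighting it (rather than dropping it) maximizes the composite's reliability.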

CONCLUSIONS

This multisite study presents validity evidence for PN scores based on scoring rubric and case-specific scoring guidelines that offer rigor and feedback for learners. Variability in PN scores across participating sites may signal different approaches to teaching clinical reasoning among medical schools.


Similar Articles

1. Validity Evidence and Scoring Guidelines for Standardized Patient Encounters and Patient Notes From a Multisite Study of Clinical Performance Examinations in Seven Medical Schools.
Acad Med. 2017 Nov;92(11S Association of American Medical Colleges Learn Serve Lead: Proceedings of the 56th Annual Research in Medical Education Sessions):S12-S20. doi: 10.1097/ACM.0000000000001918.
2. Inter-rater reliability and generalizability of patient note scores using a scoring rubric based on the USMLE Step-2 CS format.
Adv Health Sci Educ Theory Pract. 2016 Oct;21(4):761-73. doi: 10.1007/s10459-015-9664-3. Epub 2016 Jan 12.
3. Differential Weighting for Subcomponent Measures of Integrated Clinical Encounter Scores Based on the USMLE Step 2 CS Examination: Effects on Composite Score Reliability and Pass-Fail Decisions.
Acad Med. 2016 Nov;91(11 Association of American Medical Colleges Learn Serve Lead: Proceedings of the 55th Annual Research in Medical Education Sessions):S24-S30. doi: 10.1097/ACM.0000000000001359.
4. Validity evidence for a patient note scoring rubric based on the new patient note format of the United States Medical Licensing Examination.
Acad Med. 2013 Oct;88(10):1552-7. doi: 10.1097/ACM.0b013e3182a34b1e.
5. Can Nonclinician Raters Be Trained to Assess Clinical Reasoning in Postencounter Patient Notes?
Acad Med. 2019 Nov;94(11S Association of American Medical Colleges Learn Serve Lead: Proceedings of the 58th Annual Research in Medical Education Sessions):S21-S27. doi: 10.1097/ACM.0000000000002904.
6. The IDEA Assessment Tool: Assessing the Reporting, Diagnostic Reasoning, and Decision-Making Skills Demonstrated in Medical Students' Hospital Admission Notes.
Teach Learn Med. 2015;27(2):163-73. doi: 10.1080/10401334.2015.1011654.
7. Characteristics and Implications of Diagnostic Justification Scores Based on the New Patient Note Format of the USMLE Step 2 CS Exam.
Acad Med. 2015 Nov;90(11 Suppl):S56-62. doi: 10.1097/ACM.0000000000000900.
8. Comparing Students' Clinical Grades to Scores on a Standardized Patient Note-Writing Task.
J Gen Intern Med. 2020 Nov;35(11):3243-3247. doi: 10.1007/s11606-020-06019-2. Epub 2020 Jul 13.
9. A comparison of two standard-setting approaches in high-stakes clinical performance assessment using generalizability theory.
Acad Med. 2012 Aug;87(8):1077-82. doi: 10.1097/ACM.0b013e31825cea4b.
10. Clinically discriminating checklists versus thoroughness checklists: improving the validity of performance test scores.
Acad Med. 2014 Jul;89(7):1057-62. doi: 10.1097/ACM.0000000000000235.

Cited By

1. A Scoping Review of Assessments in Undergraduate Medical Education: Implications for Residency Programs and Medical Schools.
Acad Psychiatry. 2025 Apr 1. doi: 10.1007/s40596-025-02136-4.
2. Developing institution-specific admission competency criteria for prospective health sciences students.
BMC Med Educ. 2024 Dec 18;24(1):1474. doi: 10.1186/s12909-024-06495-8.
3. Core and cluster or head to toe?: a comparison of two types of curricula for teaching physical examination skills to preclinical medical students.
BMC Med Educ. 2024 Mar 26;24(1):337. doi: 10.1186/s12909-024-05191-x.
4. Accuracy of Entrustment-Based Assessment: Implications for Programs and Patients.
J Grad Med Educ. 2024 Feb;16(1):30-36. doi: 10.4300/JGME-D-23-00275.1. Epub 2024 Feb 17.
5. Applying a validated scoring rubric to pre-clerkship medical students' standardized patient notes: a pilot study.
BMC Med Educ. 2023 Jul 13;23(1):504. doi: 10.1186/s12909-023-04424-9.
6. Student standardized patients versus occupational standardized patients for improving clinical competency among TCM medical students: a 3-year prospective randomized study.
BMC Med Educ. 2023 Apr 5;23(1):216. doi: 10.1186/s12909-023-04198-0.
7. Evaluator Agreement in Medical Student Assessment Across a Multi-Campus Medical School During a Standardized Patient Encounter.
Med Sci Educ. 2020 Feb 5;30(1):381-386. doi: 10.1007/s40670-020-00916-1. eCollection 2020 Mar.
8. Machine Scoring of Medical Students' Written Clinical Reasoning: Initial Validity Evidence.
Acad Med. 2021 Jul 1;96(7):1026-1035. doi: 10.1097/ACM.0000000000004010.
9. Comparing Students' Clinical Grades to Scores on a Standardized Patient Note-Writing Task.
J Gen Intern Med. 2020 Nov;35(11):3243-3247. doi: 10.1007/s11606-020-06019-2. Epub 2020 Jul 13.