
SHARP (SHort Answer, Rationale Provision): A New Item Format to Assess Clinical Reasoning.

Publication Information

Acad Med. 2024 Sep 1;99(9):976-980. doi: 10.1097/ACM.0000000000005769. Epub 2024 May 15.

DOI: 10.1097/ACM.0000000000005769
PMID: 38753971
Abstract

PROBLEM

Many non-workplace-based assessments do not provide good evidence of a learner's problem representation or ability to provide a rationale for a clinical decision they have made. Exceptions include assessment formats that require resource-intensive administration and scoring. This article reports on research efforts toward building a scalable non-workplace-based assessment format that was specifically developed to capture evidence of a learner's ability to justify a clinical decision.

APPROACH

The authors developed a 2-step item format called SHARP (SHort Answer, Rationale Provision), referring to the 2 tasks that comprise the item. In collaboration with physician-educators, the authors integrated short-answer questions into a patient medical record-based item starting in October 2021 and arrived at an innovative item format in December 2021. In this format, a test-taker interprets patient medical record data to make a clinical decision, types in their response, and pinpoints medical record details that justify their answers. In January 2022, a total of 177 fourth-year medical students, representing 20 U.S. medical schools, completed 35 SHARP items in a proof-of-concept study.

OUTCOMES

Primary outcomes were item timing, difficulty, reliability, and ease of scoring. There was substantial variability in item difficulty: the average item was answered correctly by 44% of students (range, 4%-76%). The estimated reliability (Cronbach α) of the set of SHARP items was 0.76 (95% confidence interval, 0.70-0.80). Item scoring is fully automated, minimizing resource requirements.
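The reliability figure reported above is a standard internal-consistency statistic. As a minimal illustrative sketch (not the authors' code, and the score matrix below is hypothetical), Cronbach's α can be computed directly from an examinees × items score matrix:

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of per-examinee item-score rows.

    scores: n_examinees x n_items matrix, e.g. 0/1 for incorrect/correct.
    """
    k = len(scores[0])                                   # number of items
    item_vars = [variance(col) for col in zip(*scores)]  # sample variance of each item
    total_var = variance(sum(row) for row in scores)     # variance of examinees' total scores
    return (k / (k - 1)) * (1.0 - sum(item_vars) / total_var)

# Hypothetical 4 examinees x 3 dichotomously scored items:
demo = [
    [1, 1, 1],
    [0, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
]
print(round(cronbach_alpha(demo), 3))
```

Higher agreement among items drives α toward 1; in the study above, 35 automatically scored SHARP items yielded α = 0.76.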

NEXT STEPS

A larger study is planned to gather additional validity evidence about the item format. This study will allow comparisons between performance on SHARP items and other examinations, examination of group differences in performance, and possible use cases for formative assessment. Cognitive interviews are also planned to better understand the thought processes of medical students as they work through the SHARP items.


Similar Articles

1. SHARP (SHort Answer, Rationale Provision): A New Item Format to Assess Clinical Reasoning.
Acad Med. 2024 Sep 1;99(9):976-980. doi: 10.1097/ACM.0000000000005769. Epub 2024 May 15.
2. Development and validation of immediate self-feedback very short answer questions for medical students: practical implementation of generalizability theory to estimate reliability in formative examination designs.
BMC Med Educ. 2024 May 24;24(1):572. doi: 10.1186/s12909-024-05569-x.
3. Promoting Longitudinal and Developmental Computer-Based Assessments of Clinical Reasoning: Validity Evidence for a Clinical Reasoning Mapping Exercise.
Acad Med. 2024 Jun 1;99(6):628-634. doi: 10.1097/ACM.0000000000005632. Epub 2024 Jan 24.
4. Folic acid supplementation and malaria susceptibility and severity among people taking antifolate antimalarial drugs in endemic areas.
Cochrane Database Syst Rev. 2022 Feb 1;2(2022):CD014217. doi: 10.1002/14651858.CD014217.
5. Gathering Validity Evidence on an Internal Medicine Clerkship Multistep Exam to Assess Medical Student Analytic Ability.
Teach Learn Med. 2021 Jan-Mar;33(1):28-35. doi: 10.1080/10401334.2020.1749635. Epub 2020 Apr 11.
6. Should essays and other "open-ended"-type questions retain a place in written summative assessment in clinical medicine?
BMC Med Educ. 2014 Nov 28;14:249. doi: 10.1186/s12909-014-0249-2.
7. A report on the piloting of a novel computer-based medical case simulation for teaching and formative assessment of diagnostic laboratory testing.
Med Educ Online. 2011 Jan 14;16. doi: 10.3402/meo.v16i0.5646.
8. Comparing single-best-answer and very-short-answer questions for the assessment of applied medical knowledge in 20 UK medical schools: Cross-sectional study.
BMJ Open. 2019 Sep 26;9(9):e032550. doi: 10.1136/bmjopen-2019-032550.
9. Validity of very short answer versus single best answer questions for undergraduate assessment.
BMC Med Educ. 2016 Oct 13;16(1):266. doi: 10.1186/s12909-016-0793-z.
10. The IDEA Assessment Tool: Assessing the Reporting, Diagnostic Reasoning, and Decision-Making Skills Demonstrated in Medical Students' Hospital Admission Notes.
Teach Learn Med. 2015;27(2):163-73. doi: 10.1080/10401334.2015.1011654.