
Development of a Clinical Reasoning Documentation Assessment Tool for Resident and Fellow Admission Notes: a Shared Mental Model for Feedback.

Affiliations

NYU Grossman School of Medicine, New York, NY, USA.

NYC Health + Hospitals/Bellevue, New York, NY, USA.

Publication Information

J Gen Intern Med. 2022 Feb;37(3):507-512. doi: 10.1007/s11606-021-06805-6. Epub 2021 May 4.

DOI: 10.1007/s11606-021-06805-6
PMID: 33945113
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8858363/
Abstract

BACKGROUND

Residents and fellows receive little feedback on their clinical reasoning documentation. Barriers include the lack of a shared mental model and variability in the reliability and validity of existing assessment tools. Among existing tools, the IDEA assessment tool offers a robust assessment of clinical reasoning documentation focused on four elements (interpretive summary, differential diagnosis, explanation of reasoning for the lead diagnosis, and explanation of reasoning for alternative diagnoses), but it lacks descriptive anchors, threatening its reliability.

OBJECTIVE

Our goal was to develop a valid and reliable assessment tool for clinical reasoning documentation, building on the IDEA assessment tool.

DESIGN, PARTICIPANTS, AND MAIN MEASURES

The Revised-IDEA assessment tool was developed by four clinician educators through iterative review of admission notes written by medicine residents and fellows and subsequently piloted with additional faculty to ensure response process validity. A random sample of 252 notes from July 2014 to June 2017, written by 30 trainees across several chief complaints, was rated. Three raters rated 20% of the notes to demonstrate internal structure validity. A quality cut-off score was determined using Hofstee standard setting.
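The abstract names Hofstee standard setting but not its mechanics. In the Hofstee method, judges supply an acceptable range of cut-off scores and an acceptable range of failure rates, and the cut-off is taken where the cumulative percentage-failing curve crosses the line from (minimum cut-off, maximum fail rate) to (maximum cut-off, minimum fail rate). A minimal sketch of that intersection search; the score distribution and judge-supplied bounds below are hypothetical, not from the study:

```python
import numpy as np

def hofstee_cutoff(scores, c_min, c_max, f_min, f_max):
    """Hofstee compromise cut-off (illustrative sketch).

    scores: array of note scores; c_min/c_max: judges' acceptable cut-off
    range; f_min/f_max: judges' acceptable failure-rate range (percent).
    """
    scores = np.asarray(scores, dtype=float)
    # Candidate cut-offs across the judges' acceptable range
    cs = np.linspace(c_min, c_max, 201)
    # Cumulative percentage failing at each candidate cut-off
    fail = np.array([(scores < c).mean() * 100 for c in cs])
    # Hofstee line from (c_min, f_max) down to (c_max, f_min)
    line = f_max + (cs - c_min) * (f_min - f_max) / (c_max - c_min)
    # Cut-off where the failure curve crosses the line
    idx = int(np.argmin(np.abs(fail - line)))
    return float(cs[idx])
```

For a uniform score distribution on 0-10 with bounds c_min=4, c_max=8, f_min=10, f_max=50, the curve and line cross near a cut-off of 4.5.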

KEY RESULTS

The Revised-IDEA assessment tool includes the same four domains as the IDEA assessment tool with more detailed descriptive prompts, new Likert scale anchors, and a score range of 0-10. Intraclass correlation was high for the notes rated by three raters, 0.84 (95% CI 0.74-0.90). Scores ≥6 were determined to demonstrate high-quality clinical reasoning documentation. Only 53% of notes (134/252) were high-quality.
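The reported reliability and quality threshold can be reproduced on any set of ratings. A minimal sketch, assuming a two-way random-effects, absolute-agreement, single-rater ICC(2,1) (the abstract does not state which ICC variant was used) and hypothetical rating data:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_notes, k_raters) array of scores on the 0-10 scale.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    # Two-way ANOVA sums of squares: notes (rows), raters (columns), error
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def high_quality_fraction(scores, cutoff=6):
    """Fraction of notes at or above the quality cut-off (>=6 in the study)."""
    s = np.asarray(scores, dtype=float)
    return float((s >= cutoff).mean())
```

On 252 scores of which 134 are at least 6, high_quality_fraction returns 134/252 ≈ 0.53, matching the reported 53%.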

CONCLUSIONS

The Revised-IDEA assessment tool is reliable and easy to use for feedback on clinical reasoning documentation in resident and fellow admission notes, and its descriptive anchors facilitate a shared mental model for feedback.


Similar Articles

1
Development of a Clinical Reasoning Documentation Assessment Tool for Resident and Fellow Admission Notes: a Shared Mental Model for Feedback.
J Gen Intern Med. 2022 Feb;37(3):507-512. doi: 10.1007/s11606-021-06805-6. Epub 2021 May 4.
2
The IDEA Assessment Tool: Assessing the Reporting, Diagnostic Reasoning, and Decision-Making Skills Demonstrated in Medical Students' Hospital Admission Notes.
Teach Learn Med. 2015;27(2):163-73. doi: 10.1080/10401334.2015.1011654.
3
Development and Validation of a Machine Learning Model for Automated Assessment of Resident Clinical Reasoning Documentation.
J Gen Intern Med. 2022 Jul;37(9):2230-2238. doi: 10.1007/s11606-022-07526-0. Epub 2022 Jun 16.
4
Documentation of Clinical Reasoning in Admission Notes of Hospitalists: Validation of the CRANAPL Assessment Rubric.
J Hosp Med. 2019 Dec 1;14(12):746-753. doi: 10.12788/jhm.3233. Epub 2019 Jun 19.
5
Development and Establishment of Initial Validity Evidence for a Novel Tool for Assessing Trainee Admission Notes.
J Gen Intern Med. 2020 Apr;35(4):1078-1083. doi: 10.1007/s11606-020-05669-6. Epub 2020 Jan 28.
6
Development and Validation of a Formative Assessment Tool for Nephrology Fellows' Clinical Reasoning.
Clin J Am Soc Nephrol. 2024 Jan 1;19(1):26-34. doi: 10.2215/CJN.0000000000000315. Epub 2023 Oct 18.
7
Using the Assessment of Reasoning Tool to facilitate feedback about diagnostic reasoning.
Diagnosis (Berl). 2022 Sep 8;9(4):476-484. doi: 10.1515/dx-2022-0020. eCollection 2022 Nov 1.
8
Can Nonclinician Raters Be Trained to Assess Clinical Reasoning in Postencounter Patient Notes?
Acad Med. 2019 Nov;94(11S):S21-S27. doi: 10.1097/ACM.0000000000002904.
9
Promoting Responsible Electronic Documentation: Validity Evidence for a Checklist to Assess Progress Notes in the Electronic Health Record.
Teach Learn Med. 2017 Oct-Dec;29(4):420-432. doi: 10.1080/10401334.2017.1303385. Epub 2017 May 12.
10
REACT: Rapid Evaluation Assessment of Clinical Reasoning Tool.
J Gen Intern Med. 2022 Jul;37(9):2224-2229. doi: 10.1007/s11606-022-07513-5. Epub 2022 Jun 16.

Cited By

1
Artificial intelligence based assessment of clinical reasoning documentation: an observational study of the impact of the clinical learning environment on resident documentation quality.
BMC Med Educ. 2025 Apr 22;25(1):591. doi: 10.1186/s12909-025-07191-x.
2
Current and future state of evaluation of large language models for medical summarization tasks.
Npj Health Syst. 2025;2. doi: 10.1038/s44401-024-00011-2. Epub 2025 Feb 3.
3
Large Language Model-Based Assessment of Clinical Reasoning Documentation in the Electronic Health Record Across Two Institutions: Development and Validation Study.
J Med Internet Res. 2025 Mar 21;27:e67967. doi: 10.2196/67967.
4
Evaluating large language model performance to support the diagnosis and management of patients with primary immune disorders.
J Allergy Clin Immunol. 2025 Feb 14. doi: 10.1016/j.jaci.2025.02.004.
5
Transformation and articulation of clinical data to understand students' clinical reasoning: a scoping review.
BMC Med Educ. 2025 Jan 12;25(1):52. doi: 10.1186/s12909-025-06644-7.
6
Clinical Reasoning and Knowledge Assessment of Rheumatology Residents Compared to AI Models: A Pilot Study.
J Clin Med. 2024 Dec 5;13(23):7405. doi: 10.3390/jcm13237405.
7
Developing and Evaluating Large Language Model-Generated Emergency Medicine Handoff Notes.
JAMA Netw Open. 2024 Dec 2;7(12):e2448723. doi: 10.1001/jamanetworkopen.2024.48723.
8
Large Language Model Influence on Diagnostic Reasoning: A Randomized Clinical Trial.
JAMA Netw Open. 2024 Oct 1;7(10):e2440969. doi: 10.1001/jamanetworkopen.2024.40969.
9
A Pilot Longitudinal Clinical Reasoning Curriculum for Pediatric Residents.
MedEdPORTAL. 2024 Sep 25;20:11447. doi: 10.15766/mep_2374-8265.11447. eCollection 2024.
10
Influence of a Large Language Model on Diagnostic Reasoning: A Randomized Clinical Vignette Study.
medRxiv. 2024 Mar 14:2024.03.12.24303785. doi: 10.1101/2024.03.12.24303785.

References

1
Approaches to Clinical Reasoning Assessment.
Acad Med. 2020 Aug;95(8):1285. doi: 10.1097/ACM.0000000000003154.
2
Theory-guided teaching: Implementation of a clinical reasoning curriculum in residents.
Med Teach. 2019 Oct;41(10):1192-1199. doi: 10.1080/0142159X.2019.1626977. Epub 2019 Jul 9.
3
Competencies for improving diagnosis: an interprofessional framework for education and training in health care.
Diagnosis (Berl). 2019 Nov 26;6(4):335-341. doi: 10.1515/dx-2018-0107.
4
Documentation of Clinical Reasoning in Admission Notes of Hospitalists: Validation of the CRANAPL Assessment Rubric.
J Hosp Med. 2019 Dec 1;14(12):746-753. doi: 10.12788/jhm.3233. Epub 2019 Jun 19.
5
A workshop to train medicine faculty to teach clinical reasoning.
Diagnosis (Berl). 2019 Jun 26;6(2):109-113. doi: 10.1515/dx-2018-0059.
6
Clinical Reasoning Assessment Methods: A Scoping Review and Practical Guidance.
Acad Med. 2019 Jun;94(6):902-912. doi: 10.1097/ACM.0000000000002618.
7
Clinicians' reasoning as reflected in electronic clinical note-entry and reading/retrieval: a systematic review and qualitative synthesis.
J Am Med Inform Assoc. 2019 Feb 1;26(2):172-184. doi: 10.1093/jamia/ocy155.
8
Electronic Health Records as an Educational Tool: Viewpoint.
JMIR Med Educ. 2018 Nov 12;4(2):e10306. doi: 10.2196/10306.
9
The Assessment of Reasoning Tool (ART): structuring the conversation between teachers and learners.
Diagnosis (Berl). 2018 Nov 27;5(4):197-203. doi: 10.1515/dx-2018-0052.
10
The role of electronic health records in clinical reasoning.
Ann N Y Acad Sci. 2018 Dec;1434(1):109-114. doi: 10.1111/nyas.13849. Epub 2018 May 16.