NYU Grossman School of Medicine, New York, NY, USA.
NYC Health + Hospitals/Bellevue, New York, NY, USA.
J Gen Intern Med. 2022 Feb;37(3):507-512. doi: 10.1007/s11606-021-06805-6. Epub 2021 May 4.
Residents and fellows receive little feedback on their clinical reasoning documentation. Barriers include the lack of a shared mental model and variability in the reliability and validity of existing assessment tools. Among existing tools, the IDEA assessment tool offers a robust assessment of clinical reasoning documentation focusing on four elements (interpretive summary, differential diagnosis, explanation of reasoning for the lead diagnosis, and alternative diagnoses) but lacks descriptive anchors, threatening its reliability.
Our goal was to develop a valid and reliable assessment tool for clinical reasoning documentation, building on the IDEA assessment tool.
DESIGN, PARTICIPANTS, AND MAIN MEASURES: The Revised-IDEA assessment tool was developed by four clinician educators through iterative review of admission notes written by medicine residents and fellows and was subsequently piloted with additional faculty to ensure response process validity. A random sample of 252 notes written by 30 trainees between July 2014 and June 2017, spanning several chief complaints, was rated. Three raters rated 20% of the notes to demonstrate internal structure validity. A quality cut-off score was determined using Hofstee standard setting.
The Revised-IDEA assessment tool includes the same four domains as the IDEA assessment tool with more detailed descriptive prompts, new Likert scale anchors, and a score range of 0-10. Intraclass correlation was high for the notes rated by three raters, 0.84 (95% CI 0.74-0.90). Scores ≥6 were determined to demonstrate high-quality clinical reasoning documentation. Only 53% of notes (134/252) were high-quality.
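The cut-off and the reported proportion can be sanity-checked with a short sketch; the `is_high_quality` helper below is illustrative, not part of the published tool, and the counts are the figures reported above:

```python
# Sketch: applying the Revised-IDEA quality cut-off (total score >= 6 on the
# 0-10 scale) to note scores. The helper function is hypothetical.
def is_high_quality(score: float, cutoff: float = 6.0) -> bool:
    """A note demonstrates high-quality clinical reasoning documentation
    when its Revised-IDEA total meets the Hofstee-derived cut-off."""
    return score >= cutoff

# Reported result: 134 of 252 notes met the cut-off.
high, total = 134, 252
print(f"{high / total:.0%}")  # → 53%
```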
The Revised-IDEA assessment tool is reliable and easy to use for feedback on clinical reasoning documentation in resident and fellow admission notes; its descriptive anchors facilitate a shared mental model for feedback.