Hadler Rachel A, Dexter Franklin, Hindman Bradley J
Anesthesia, University of Iowa, Iowa City, USA.
Cureus. 2022 Mar 26;14(3):e23500. doi: 10.7759/cureus.23500. eCollection 2022 Mar.
Introduction: In this study, we tested whether raters' (residents' and fellows') decisions whether or not to evaluate critical care anesthesiologists were significantly associated with the clinical interactions documented in electronic health record progress notes, and whether this influenced the reliability of supervision scores. We used the de Oliveira Filho clinical supervision scale for the evaluation of faculty anesthesiologists. For operating room cases, email requests were sent to raters who had worked one hour or longer with the anesthesiologist in an operating room on the preceding day. In contrast, potential raters were asked to evaluate all critical care anesthesiologists scheduled in intensive care units during the preceding week.

Methods: Over 7.6 years, raters (N = 172) received a total of 7764 requests to evaluate 21 critical care anesthesiologists. Each rater received a median (and mode) of three evaluation requests, one per anesthesiologist on service that week. In this retrospective cohort study, we related the responses (2970 selections of "insufficient interaction" to evaluate the faculty member and 3127 completed supervision scores) to the progress notes (N = 25,469) electronically co-signed by the rater-anesthesiologist combination during that week.

Results: Raters with few jointly signed notes were more likely to select "insufficient interaction" to evaluate (P < 0.0001): 62% when there were no joint notes during the week versus 1% when there were at least 20. Nevertheless, rater-anesthesiologist combinations with no co-signed notes accounted not only for most (78%) of the evaluation requests but also for most (56%) of the completed evaluations (both P < 0.0001). Among the combinations in which each anesthesiologist received evaluations from multiple (at least nine) raters and each rater evaluated multiple anesthesiologists, most (72%) of the rater-anesthesiologist combinations came from raters who had no co-signed notes with the anesthesiologist (P < 0.0001).

Conclusions: Routine use of the supervision scale should select raters not only by their scheduled clinical site but also by using electronic health record data to verify a joint workload with the anesthesiologist.