
Testing the raters: inter-rater reliability of standardized anaesthesia simulator performance.

Author information

Devitt J H, Kurrek M M, Cohen M M, Fish K, Fish P, Murphy P M, Szalai J P

Affiliation

Department of Anaesthesia, Sunnybrook Health Science Centre, Toronto, Ontario.

Publication information

Can J Anaesth. 1997 Sep;44(9):924-8. doi: 10.1007/BF03011962.

Abstract

PURPOSE

Assessment of physician performance has been a subjective process. An anaesthesia simulator could be used for a more structured and standardized evaluation but its reliability for this purpose is not known. We sought to determine if observers witnessing the same event in an anaesthesia simulator would agree on their rating of anaesthetist performance.

METHODS

The study had the approval of the research ethics board. Two one-hour clinical scenarios were developed, each containing five anaesthetic problems. For each problem, a rating scale defined the appropriate score (no response to the situation: score = 0; compensating intervention, defined as physiological correction: score = 1; corrective treatment, defined as definitive therapy: score = 2). Videotape recordings for the assessment of inter-rater reliability were generated through role-playing; each of the two scenarios was recorded three times, yielding a total of 30 events to be evaluated. Two clinical anaesthetists, uninvolved in the development of the study and the clinical scenarios, reviewed and scored each of the 30 problems independently. The scores produced by the two observers were compared using the kappa statistic of agreement.
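The kappa statistic named above corrects raw percent agreement for the agreement expected by chance, given each rater's marginal category frequencies. A minimal sketch of Cohen's kappa for two raters on the study's 0/1/2 scale (the ratings below are hypothetical, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items scored identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's category frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical scores on the 0/1/2 scale; raters disagree on one item.
rater_a = [0, 1, 2, 2, 1, 0, 2, 1, 1, 2]
rater_b = [0, 1, 2, 2, 1, 0, 2, 1, 0, 2]
print(round(cohens_kappa(rater_a, rater_b), 2))  # prints 0.85
```

A kappa of 1.0 indicates perfect agreement and 0 indicates agreement no better than chance, which is why the study reports kappa rather than the raw 29/30 agreement figure.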

RESULTS

The raters were in complete agreement on 29 of the 30 items. There was excellent inter-rater reliability (κ = 0.96, P < 0.001).

CONCLUSIONS

The use of videotapes allowed the scenarios to be scored by reproducing the same event for each observer. There was excellent inter-rater agreement within the confines of the study. Rating of video recordings of anaesthetist performance in a simulation setting can therefore be used to score performance. The validity of the scenarios and of the scoring system for assessing clinician performance has yet to be determined.

