
Assessing communication skills during OSCE: need for integrated psychometric approaches.

Author information

Division of Primary Care, Population Epidemiology Unit, Geneva University Hospitals, Geneva, Switzerland.

Institute of Public Health, Faculty of BioMedical Sciences, Università della Svizzera Italiana, Lugano, Switzerland.

Publication information

BMC Med Educ. 2021 Feb 16;21(1):106. doi: 10.1186/s12909-021-02552-8.

Abstract

BACKGROUND

Physicians' communication skills (CS) are known to significantly affect the quality of health care. Communication skills training programs are part of most undergraduate medical curricula and are usually assessed in Objective Structured Clinical Examinations (OSCE) throughout the curriculum. The adoption of reliable measurement instruments is thus essential to evaluate such skills.

METHODS

Using Exploratory Factor Analysis (EFA), Multi-Group Confirmatory Factor Analysis (MGCFA), and Item Response Theory (IRT) analysis, the current retrospective study tested the factorial validity and reliability of a four-item global rating scale developed by Hodges and McIlroy to measure CS. The scale was applied during OSCEs to 296 third- and fourth-year medical students at the Faculty of Medicine in Geneva, Switzerland.
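The reliability side of such an analysis can be illustrated with a minimal, self-contained sketch: Cronbach's alpha for a four-item rating scale. The data below are simulated for illustration, not the study's actual ratings; items share a common latent-skill component so the scale should appear internally consistent.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_students, n_items) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated ratings for 296 students on a four-item scale (1-5 Likert-style);
# these values are hypothetical, chosen only to mimic a cohesive scale.
rng = np.random.default_rng(0)
true_skill = rng.normal(0, 1, size=(296, 1))
ratings = np.clip(np.round(3 + true_skill + rng.normal(0, 0.5, size=(296, 4))), 1, 5)

alpha = cronbach_alpha(ratings)
print(f"alpha = {alpha:.2f}")
```

High alpha at each station, as reported for the EFA results, indicates internal consistency within a station but says nothing about invariance across stations, which is why the MGCFA step is needed.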

RESULTS

EFA results at each station showed good reliability scores. However, measurement invariance assessments through MGCFA across different stations (i.e., the same students undergoing six or three stations) and across different groups of stations (i.e., different students undergoing groups of six or three stations) were not satisfactory: they failed to meet the minimum requirements for establishing measurement invariance, possibly undermining reliable comparisons of students' communication scores across stations. IRT revealed that the four communication items provided overlapping information, concentrated especially at the high end of the communication spectrum.

CONCLUSIONS

In its current form, this four-item set may make it difficult to adequately differentiate students who are poor in CS from those who perform better. Future directions in best practices for assessing CS among medical students in the context of OSCEs may thus focus on (1) training examiners so as to obtain scores that are more coherent across stations; and (2) evaluating items in terms of their ability to cover a wider spectrum of medical students' CS. In this respect, IRT can prove very useful for the continuous evaluation of CS measurement instruments in performance-based assessments.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ad08/7887794/9d0152ed0ead/12909_2021_2552_Fig1_HTML.jpg
