
Comparing the opinion-based method with the prediction-based method in Continuing Medical Education course evaluation.

Author Information

Chua Jamie S, van Diepen Merel, Trietsch Marjolijn D, Dekker Friedo W, Schönrock-Adema Johanna, Bustraan Jacqueline

Author Affiliations

Department of Gastroenterology and Hepatology, Leiden University Medical Center, Leiden, The Netherlands.

Department of Clinical Epidemiology, Leiden University Medical Center, Leiden, The Netherlands.

Publication Information

Can Med Educ J. 2024 Jul 12;15(3):18-25. doi: 10.36834/cmej.77580. eCollection 2024 Jul.

Abstract

BACKGROUND

Although medical courses are frequently evaluated via surveys with Likert scales ranging from "strongly disagree" to "strongly agree," low response rates limit their utility. In undergraduate medical education, a new method, in which students predicted what their peers would say, required fewer respondents to obtain similar results. However, this prediction-based method lacks validation for continuing medical education (CME), which typically targets a more heterogeneous group than medical students.
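To make the prediction-based idea concrete: each respondent predicts the percentage of peers choosing each Likert option, and a group-level mean score can be derived from the averaged predicted distribution. The sketch below is a hypothetical illustration of this aggregation (the function name and the normalization step are assumptions, not the authors' implementation):

```python
def mean_from_predictions(predictions):
    """Aggregate predicted percentage distributions over the five
    Likert options (index 0 = "strongly disagree" ... index 4 =
    "strongly agree") into a mean score on the 1-5 scale.

    `predictions` is a list of per-respondent lists of predicted
    percentages, one percentage per Likert option.
    """
    n_options = len(predictions[0])
    # Average the predicted percentage for each option across respondents.
    avg = [sum(p[i] for p in predictions) / len(predictions)
           for i in range(n_options)]
    # Normalize in case individual predictions do not sum to exactly 100.
    total = sum(avg)
    return sum((i + 1) * share for i, share in enumerate(avg)) / total
```

For example, if two respondents both predict a 50/50 split between the top two options, the aggregated mean score is 4.5.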

METHODS

In this study, 597 participants of a large CME course were randomly assigned to either express personal opinions on a five-point Likert scale (opinion-based method; n = 300) or to predict the percentage of their peers choosing each Likert scale option (prediction-based method; n = 297). For each question, we calculated the minimum number of respondents needed for stable average results using an iterative algorithm. We compared mean scores and the distribution of scores between the two methods.
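The abstract does not spell out the iterative algorithm's stopping rule. One plausible sketch (the tolerance, resampling count, and function name are assumptions, not the published procedure) repeatedly subsamples respondents and returns the smallest subsample size whose mean reliably stays close to the full-sample mean:

```python
import random

def min_stable_respondents(responses, tol=0.15, n_resamples=200, seed=0):
    """Estimate the smallest number of respondents whose subsample
    mean stays within `tol` of the full-sample mean across repeated
    random subsamples.

    A hypothetical sketch of an iterative stability criterion; the
    paper's actual algorithm may use a different rule.
    """
    rng = random.Random(seed)
    full_mean = sum(responses) / len(responses)
    for n in range(2, len(responses) + 1):
        stable = all(
            abs(sum(rng.sample(responses, n)) / n - full_mean) <= tol
            for _ in range(n_resamples)
        )
        if stable:
            return n  # smallest subsample size meeting the criterion
    return len(responses)
```

Running such a procedure per question on each arm's responses would allow the two methods' minimum-respondent requirements to be compared directly.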

RESULTS

The overall response rate was 47%. The prediction-based method required fewer respondents than the opinion-based method for similar average responses. Mean response scores were similar in both groups for most questions, but prediction-based outcomes resulted in fewer extreme responses (strongly agree/disagree).

CONCLUSIONS

We validated the prediction-based method for evaluating CME and provide practical considerations for applying this method.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/431f/11302746/076afc4e6612/CMEJ-15-018-g001.jpg
