

The reliability and validity of a portfolio designed as a programmatic assessment of performance in an integrated clinical placement.

Author information

Roberts Chris, Shadbolt Narelle, Clark Tyler, Simpson Phillip

Affiliation

Sydney Medical School - Northern, University of Sydney, Hornsby Ku-ring-gai Hospital, Palmerston Road, Sydney, NSW 2077, Australia.

Publication information

BMC Med Educ. 2014 Sep 20;14:197. doi: 10.1186/1472-6920-14-197.

Abstract

BACKGROUND

Little is known about the technical adequacy of portfolios in reporting multiple complex academic and performance-based assessments. We explored, first, the factors influencing the precision of scoring within a programmatic assessment of student learning outcomes in an integrated clinical placement, and second, the degree to which validity evidence supported the interpretation of student scores.

METHODS

Within generalisability theory, we estimated the contribution that the wanted factor (student capability) and unwanted factors (e.g. the impact of assessors) made to the variation in portfolio task scores. Relative and absolute standard errors of measurement provided a confidence interval around a pre-determined pass/fail standard for all six tasks. Validity evidence was sought by demonstrating the internal consistency of the portfolio and exploring the relationship of student scores with clinical experience.
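The variance-partitioning step described above can be sketched as a simple person-by-task G study. The sketch below is illustrative only: it treats the design as fully crossed students × tasks and folds assessor effects into the residual, whereas the study also modelled raters as a separate facet; all function and variable names are ours, not the authors'.

```python
import numpy as np

def g_study(scores):
    """Estimate variance components for a crossed person x task design,
    as in a generalisability (G) study of portfolio task scores.

    scores: 2-D array, rows = students, columns = assessment tasks.
    Returns variance components and the relative/absolute standard
    errors of measurement (SEM) for the mean over the observed tasks.
    """
    n_p, n_t = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    task_means = scores.mean(axis=0)

    # Mean squares from a two-way ANOVA without replication.
    ms_p = n_t * np.sum((person_means - grand) ** 2) / (n_p - 1)
    ms_t = n_p * np.sum((task_means - grand) ** 2) / (n_t - 1)
    resid = scores - person_means[:, None] - task_means[None, :] + grand
    ms_pt = np.sum(resid ** 2) / ((n_p - 1) * (n_t - 1))

    # Expected-mean-square solutions (negative estimates truncated at 0).
    var_pt = ms_pt                             # person x task interaction + error
    var_p = max((ms_p - ms_pt) / n_t, 0.0)     # true student variance (wanted)
    var_t = max((ms_t - ms_pt) / n_p, 0.0)     # task difficulty variance (unwanted)

    rel_sem = np.sqrt(var_pt / n_t)            # for rank-ordering (relative) decisions
    abs_sem = np.sqrt((var_t + var_pt) / n_t)  # for pass/fail (absolute) decisions
    return {"var_p": var_p, "var_t": var_t, "var_pt": var_pt,
            "rel_sem": rel_sem, "abs_sem": abs_sem}
```

The absolute SEM includes task difficulty variance because a criterion-referenced pass/fail decision is affected by how hard the sampled tasks happen to be, while the relative SEM is not.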

RESULTS

The mean portfolio mark for 257 students, across 372 raters and six tasks, was 75.56 (SD 6.68). For a single student on one assessment task, 11% of the variance in scores was due to true differences in student capability. The largest source of unwanted variance was context specificity (49%), the tendency for a student to engage with one task but not another. Rater subjectivity accounted for 29%. An absolute standard error of measurement of 4.74% gave a 95% CI of ±9.30% and a 68% CI of ±4.74% around a pass/fail score of 57%. Construct validity was supported by the underpinning assessment framework, the internal consistency of the portfolio tasks, and higher scores for students who undertook the clinical placement later in the academic year.
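The interval arithmetic here is standard: the CI half-width is the absolute SEM multiplied by the normal z value (1 for 68%, 1.96 for 95%). A minimal check using the values from the abstract (the small discrepancy with the reported ±9.30 is rounding):

```python
# Half-widths of confidence intervals around the 57% pass/fail cut score,
# derived from the absolute standard error of measurement (SEM).
abs_sem = 4.74          # absolute SEM, in percentage points
ci68 = 1.00 * abs_sem   # 68% CI half-width = +/- 1 SEM
ci95 = 1.96 * abs_sem   # 95% CI half-width = +/- 1.96 SEM

print(f"68% CI: 57 ± {ci68:.2f}")   # prints 57 ± 4.74
print(f"95% CI: 57 ± {ci95:.2f}")   # prints 57 ± 9.29 (reported as 9.30)
```

In practice this means a student scoring anywhere between roughly 47.7% and 66.3% could not be distinguished from the 57% cut score with 95% confidence.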

CONCLUSION

A portfolio designed as a programmatic assessment of an integrated clinical placement has sufficient evidence of validity to support a specific interpretation of student scores around passing a clinical placement. It has modest precision in assessing students' achievement of a competency standard. There were identifiable areas for reducing measurement error and providing more certainty around decision-making. Reducing the measurement error would require engaging with the student body on the value of the tasks, more focussed academic and clinical supervisor training, and revisiting the rubric of the assessment in the light of feedback.


Similar articles

- From aggregation to interpretation: how assessors judge complex data in a competency-based portfolio. Adv Health Sci Educ Theory Pract. 2018 May;23(2):275-287. doi: 10.1007/s10459-017-9793-y. Epub 2017 Oct 14.
- Inter-rater reliability and generalizability of patient note scores using a scoring rubric based on the USMLE Step-2 CS format. Adv Health Sci Educ Theory Pract. 2016 Oct;21(4):761-73. doi: 10.1007/s10459-015-9664-3. Epub 2016 Jan 12.
- Student perspectives on assessment: experience in a competency-based portfolio system. Med Teach. 2012;34(3):221-5. doi: 10.3109/0142159X.2012.652243.
- Demonstration of portfolios to assess competency of residents. Adv Health Sci Educ Theory Pract. 2004;9(4):309-23. doi: 10.1007/s10459-004-0885-0.

Cited by

- Student perspectives on programmatic assessment in a large medical programme: A critical realist analysis. Med Educ. 2022 Sep;56(9):901-914. doi: 10.1111/medu.14807. Epub 2022 Apr 29.
- Medical Student Portfolios: A Systematic Scoping Review. J Med Educ Curric Dev. 2022 Mar 3;9:23821205221076022. doi: 10.1177/23821205221076022. eCollection 2022 Jan-Dec.
- Development and validation of a portfolio assessment system for medical schools in Korea. J Educ Eval Health Prof. 2020;17:39. doi: 10.3352/jeehp.2020.17.39. Epub 2020 Dec 9.
- Development of Resident-Sensitive Quality Measures for Inpatient General Internal Medicine. J Gen Intern Med. 2021 May;36(5):1271-1278. doi: 10.1007/s11606-020-06320-0. Epub 2020 Oct 26.
- Deconstructing programmatic assessment. Adv Med Educ Pract. 2018 Mar 22;9:191-197. doi: 10.2147/AMEP.S144449. eCollection 2018.
- Do portfolios have a future? Adv Health Sci Educ Theory Pract. 2017 Mar;22(1):221-228. doi: 10.1007/s10459-016-9679-4. Epub 2016 Mar 30.

