Kassebaum D G, Eaglen R H
Division of Medical School Standards and Assessment, Association of American Medical Colleges (AAMC), Washington, D.C. 20037, USA.
Acad Med. 1999 Jul;74(7):842-9. doi: 10.1097/00001888-199907000-00020.
The authors review the methods by which U.S. medical schools have evaluated student achievement during the twentieth century, especially the assessment of noncognitive abilities, including clinical skills and behaviors. With particular reference to the current decade, information collected by the Liaison Committee on Medical Education (LCME) is used to examine the congruence of assessment methods with the rising tide of understanding--and accreditation requirements--that knowledge, competence, and behavioral objectives require different methods of assessment to measure the extent of students' learning in each domain. Among the 97 medical schools that had accreditation surveys between July 1993 and June 1998, only 186 of 751 basic science courses tested students' noncognitive achievements in areas such as preparation for and participation in small-group conferences, the quality of case-based discussion, library research and literature reviews, and research projects, even though the schools had staked out scholarship, habits of life-long learning, and reasoned thinking as educational objectives. In the clerkships of these schools, structured and observed assessments of clinical skills--with standardized patients and/or OSCEs--contributed 7.4% to 23.1% of a student's grade (depending on the clerkship discipline), while the predominant contribution (50-70% across the clerkships) came from resident and faculty ratings based largely on recollections of case presentations and discussions, which bear little relationship to interpersonal skills, rapport with patients, and logical, sequenced history taking and physical examination. On a more optimistic note, the results show that the number of schools using standardized patients in one or more clerkships increased between 1993 and 1998 from 34.1% to 50.4% of the 125 schools in the United States, and the number of schools using standardized patients in comprehensive fourth-year examinations increased from 19.1% to 48% of the total.
Despite such progress, this study shows that too many medical schools still fail to employ evaluation methods that specifically assess students' achievement of the skills and behaviors they need to learn to practice medicine. These findings explain why accreditors are paying closer attention to how well schools provide measured assurances that students learn what the faculties set out to teach.