Adelaide Medical School, University of Adelaide, Adelaide, South Australia.
Royal Adelaide Hospital, Adelaide, South Australia.
Teach Learn Med. 2023 Jun-Jul;35(3):356-367. doi: 10.1080/10401334.2022.2050240. Epub 2022 May 1.
We compared the quality of clinician-authored and student-authored multiple choice questions (MCQs) using a formative, mock examination of clinical knowledge for medical students.
Multiple choice questions are a popular format in medical programs of assessment. A challenge for educators is creating high-quality items efficiently. For expediency's sake, a standard practice is for faculties to reuse examination items from year to year. This study aims to compare the quality of student-authored items with clinician-authored items, assessing the former as a potential source of new items for faculty item banks.
We invited Year IV and V medical students at the University of Adelaide to participate in a mock examination. The participants first completed an online instructional module on strategies for answering and writing MCQs, then submitted one original MCQ each for potential inclusion in the mock examination. Two 180-item mock examinations, one for each year level, were constructed. Each consisted of 90 student-authored items and 90 clinician-authored items. Participants were blinded to the author of each item. Each item was analyzed for item difficulty and discrimination, number of item-writing flaws (IWFs) and non-functioning distractors (NFDs), and cognitive skill level (using a modified version of Bloom's taxonomy).
Eighty-nine and 91 students completed the Year IV and V examinations, respectively. Student-authored items, compared with clinician-authored items, tended to be written at a lower cognitive skill level and a lower difficulty level. They contained significantly higher rates of IWFs (2 to 3.5 times higher) and NFDs (1.18 times higher). However, they discriminated as well as, or better than, clinician-authored items.
Students can author MCQ items with discrimination comparable to clinician-authored items, despite being inferior on other parameters. Student-authored items may therefore be considered a potential source of material for faculty item banks; however, several barriers exist to their use in a summative setting. The overall quality of items remains suboptimal regardless of author, which highlights the need for ongoing faculty training in item writing.