A.R. Artino Jr is professor of medicine and deputy director of graduate programs in health professions education, Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland; ORCID: http://orcid.org/0000-0003-2661-7853. A.W. Phillips is adjunct clinical professor of emergency medicine, Department of Emergency Medicine, University of North Carolina, Chapel Hill, North Carolina. A. Utrankar is a fourth-year medical student, Vanderbilt University School of Medicine, Nashville, Tennessee. A.Q. Ta is a second-year medical student, University of Illinois College of Medicine, Chicago, Illinois. S.J. Durning is professor of medicine and pathology and director of graduate programs in health professions education, Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland.
Acad Med. 2018 Mar;93(3):456-463. doi: 10.1097/ACM.0000000000002002.
Surveys are widely used in health professions education (HPE) research, yet little is known about the quality of the instruments employed. Poorly designed surveys with unclear or badly formatted items can be difficult for respondents to interpret and answer, yielding low-quality data. This study assessed the quality of published survey instruments in HPE.
In 2017, the authors analyzed HPE research articles published in three high-impact journals in 2013, including all articles that employed at least one self-administered survey. They designed a coding rubric addressing five violations of established best practices for survey item design, collected descriptive data on the validity and reliability evidence reported, and used the rubric to assess the quality of the available survey items.
Thirty-six articles met the inclusion criteria and provided the survey instrument for coding; one article used two surveys, yielding 37 unique surveys. Authors reported validity evidence for 13 surveys (35.1%) and reliability evidence for 8 surveys (21.6%). The item-quality assessment revealed that a substantial proportion of the published instruments violated established best practices in the design and visual layout of Likert-type rating items. Overall, 35 (94.6%) of the 37 survey instruments analyzed contained at least one violation of best practices.
The majority of articles failed to report validity and reliability evidence, and a substantial proportion of the survey instruments violated established best practices in survey design. The authors suggest areas of future inquiry and provide several improvement recommendations for HPE researchers, reviewers, and journal editors.