Wynd Christine A, Schmidt Bruce, Schaefer Michelle Atkins
University of Akron College of Nursing, Ohio, USA.
West J Nurs Res. 2003 Aug;25(5):508-18. doi: 10.1177/0193945903252998.
Instrument content validity is often established through qualitative expert review, yet quantitative analysis of reviewer agreement is also advocated in the literature. Two quantitative approaches to estimating content validity were compared and contrasted using a newly developed instrument, the Osteoporosis Risk Assessment Tool (ORAT). Data obtained from a panel of eight expert judges were analyzed. A Content Validity Index (CVI) initially determined that only one item lacked interrater proportion agreement on its relevance to the instrument as a whole (CVI = 0.57). Concern that high proportion agreement ratings might be due to random chance prompted further analysis using a multirater kappa coefficient of agreement. An additional seven items had low kappas, ranging from 0.29 to 0.48, indicating poor agreement among the experts. The findings supported the elimination or revision of eight items. The pros and cons of using both proportion agreement and kappa coefficient analysis are examined.
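The abstract does not specify the exact multirater kappa formula the authors applied, so the sketch below should be read as one plausible illustration rather than their method: it computes an item-level CVI (the proportion of judges rating an item relevant on a dichotomized 4-point scale) and a chance-corrected kappa of the form (CVI − pc)/(1 − pc), where pc is the binomial probability of that level of agreement arising by chance with p = 0.5 per judge. The function name, the example ratings, and the chance model are all assumptions introduced here for illustration.

```python
from math import comb

def item_cvi_and_kappa(ratings):
    """Item-level CVI and a chance-corrected kappa (illustrative variant).

    ratings: 4-point relevance scores (1-4) from the expert panel.
    I-CVI   : proportion of judges rating the item 3 or 4 (relevant).
    kappa*  : (CVI - pc) / (1 - pc), where pc is the binomial probability
              of exactly this many judges agreeing on relevance by chance,
              assuming each judge rates relevant/not relevant with p = 0.5.
    """
    n = len(ratings)
    a = sum(1 for r in ratings if r >= 3)   # judges calling the item relevant
    cvi = a / n
    pc = comb(n, a) * 0.5 ** n              # chance probability of this split
    kappa = (cvi - pc) / (1 - pc)
    return cvi, kappa

# Hypothetical ratings from an eight-judge panel for one item.
ratings = [4, 3, 4, 2, 3, 2, 4, 3]
cvi, kappa = item_cvi_and_kappa(ratings)
print(f"I-CVI = {cvi:.2f}, kappa* = {kappa:.2f}")  # I-CVI = 0.75, kappa* = 0.72
```

The gap between the two numbers in this toy run (0.75 vs. 0.72) illustrates the abstract's point: a proportion-agreement index can look acceptable while a chance-corrected coefficient reveals weaker consensus, which is why the kappa analysis flagged seven additional items that the CVI alone had passed.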