Goldstein L B, Bertels C, Davis J N
Veterans Administration Medical Center, Durham, NC 27705.
Arch Neurol. 1989 Jun;46(6):660-2. doi: 10.1001/archneur.1989.00520420080026.
The interobserver reliability of a rating scale employed in several multicenter stroke trials was investigated. Twenty patients who had a stroke were rated with this scale by four clinical stroke fellows. Each patient was independently evaluated by one pair of observers. The degree of interrater agreement for each item on the scale was determined by calculation of the kappa statistic. Interobserver agreement was moderate to substantial for 9 of 13 items. This rating system compares favorably with other scales for which such comparisons can be made. However, the validity of this system must be established.
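The abstract's interrater-agreement measure, the kappa statistic, corrects observed agreement for the agreement expected by chance. A minimal sketch of Cohen's kappa for one pair of observers is below; the item scores are hypothetical illustrations, not data from the study:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items given identical scores.
    p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal frequencies.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(count_a[k] * count_b[k] for k in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 0-2 item scores from two observers (not study data).
obs_1 = [0, 1, 2, 1, 0, 2, 1, 1, 0, 2]
obs_2 = [0, 1, 2, 0, 0, 2, 1, 2, 0, 2]
print(round(cohen_kappa(obs_1, obs_2), 3))  # → 0.706
```

A kappa in the 0.61 to 0.80 range is conventionally read as "substantial" agreement, which is the benchmark the abstract's "moderate to substantial" wording reflects.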