Archer CO, Swearingen D, Kohler AT, Messick JM, May PR.
J Clin Psychol. 1979 Jan;35(1):130-9. doi: 10.1002/1097-4679(197901)35:1<130::aid-jclp2270350121>3.0.co;2-3.
Discusses the reliability of measurements made by 48 raters who used the Problem Dysfunction Rating Scale (PDRS) under simulated routine clinical record-keeping conditions. Ten- to 15-minute videotaped interviews of two simulated patients with predefined problems were shown to a multidisciplinary psychiatric hospital staff of varying educational backgrounds and clinical experience. The raters were given only brief instructions and no training in the use of the PDRS. Statistical analysis went beyond the usual test-retest variation studies to include variance-components estimation and comparison with random rating models. A contrast against random rating was found to index reliability most realistically in situations such as this, in which a large number of raters rate a small number of items. Intrarater consistency was greater than interrater agreement, and it was concluded that, when reasonably adequate information was available, untrained raters could rate the degree of dysfunction due to patients' problems on the PDRS with a useful degree of consistency.
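
The abstract does not give the authors' exact computations, but the following is a minimal sketch of the kind of analysis it describes: a variance-components (intraclass-correlation-style) index of interrater agreement, contrasted with the agreement expected under a random rating model. The 48 raters match the abstract; the number of items, the 7-point scale, the noise model, and all function names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def agreement_index(ratings):
    """Variance-components agreement index for a raters x items matrix.

    Computes a one-way intraclass-correlation-style ratio: between-item
    variance relative to total variance. High values mean the raters
    order the items consistently; values near zero mean chance-level
    agreement.
    """
    n_raters, n_items = ratings.shape
    grand = ratings.mean()
    item_means = ratings.mean(axis=0)
    ms_between = n_raters * ((item_means - grand) ** 2).sum() / (n_items - 1)
    ms_within = ((ratings - item_means) ** 2).sum() / (n_items * (n_raters - 1))
    return (ms_between - ms_within) / (ms_between + (n_raters - 1) * ms_within)

# Hypothetical data: 48 raters score 10 PDRS problem items on a 1-7
# scale. Each rater sees the same underlying severities plus error.
n_raters, n_items = 48, 10
true_severity = rng.integers(1, 8, size=n_items)          # latent item "truth"
noise = rng.integers(-1, 2, size=(n_raters, n_items))     # per-rater error
observed = np.clip(true_severity + noise, 1, 7).astype(float)

obs = agreement_index(observed)

# Random rating model: raters assign scale points at random, so any
# apparent agreement is chance. Simulating many such matrices yields a
# null distribution against which the observed index is contrasted.
null = np.array([
    agreement_index(rng.integers(1, 8, size=(n_raters, n_items)).astype(float))
    for _ in range(2000)
])
p_chance = (null >= obs).mean()

print(f"observed agreement index: {obs:.3f}")
print(f"proportion of random-rating runs matching it: {p_chance:.4f}")
```

Under these assumptions, the observed index far exceeds the random-rating null, which is the sense in which such a contrast "indexes reliability" when many raters rate few items.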