Silcocks P B
J Clin Pathol. 1983 Nov;36(11):1269-75. doi: 10.1136/jcp.36.11.1269.
Evaluation of histological diagnosis requires an index of agreement (to measure repeatability and validity) together with a method of assessing bias. Cohen's kappa statistic appears to be the most suitable tool for measuring levels of agreement; unsatisfactory agreement may be caused by bias. Bias can be studied further by examining levels of agreement for each diagnostic category, or by searching for categories of disagreement in which more observations occur than would be expected by chance alone. This article gives reasons for choosing the kappa statistic, with examples illustrating its calculation and the investigation of bias.
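The abstract refers to calculating Cohen's kappa as an index of agreement between two observers. As an illustration only (the data and function below are hypothetical, not from the article), a minimal sketch computes kappa from two raters' category labels using the standard definition, kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected by chance from the marginal frequencies:

```python
# Hedged sketch of Cohen's kappa for two raters; data are illustrative.
from collections import Counter


def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e).

    p_o: observed proportion of cases where the raters agree.
    p_e: chance-expected agreement from each rater's marginal frequencies.
    """
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)

    # Observed agreement: fraction of cases given the same category by both.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Chance-expected agreement: sum over categories of the product of
    # each rater's marginal proportion for that category.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)

    return (p_o - p_e) / (1 - p_e)


# Hypothetical diagnoses by two pathologists on six specimens.
rater_a = ["malignant", "malignant", "malignant", "benign", "benign", "benign"]
rater_b = ["malignant", "malignant", "benign", "benign", "benign", "benign"]
print(round(cohens_kappa(rater_a, rater_b), 3))  # prints 0.667
```

In this example the raters agree on 5 of 6 cases (p_o = 5/6), chance agreement from the marginals is p_e = 1/2, giving kappa = 2/3; a kappa of 1 would indicate perfect agreement and 0 agreement no better than chance.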